
About This eBook
ePUB is an open, industry-standard format for eBooks. However, support of ePUB and its many
features varies across reading devices and applications. Use your device or app settings to customize
the presentation to your liking. Settings that you can customize often include font, font size, single or
double column, landscape or portrait mode, and figures that you can click or tap to enlarge. For
additional information about the settings and features on your reading device or app, visit the device
manufacturer’s Web site.
Many titles include programming code or configuration examples. To optimize the presentation of
these elements, view the eBook in single-column, landscape mode and adjust the font size to the
smallest setting. In addition to presenting code and configurations in the reflowable text format, we
have included images of the code that mimic the presentation found in the print book; therefore, where
the reflowable format may compromise the presentation of the code listing, you will see a “Click here
to view code image” link. Click the link to view the print-fidelity code image. To return to the
previous page viewed, click the Back button on your device or app.

CCNA Data Center
DCICT 640-916 Official Cert Guide

NAVAID SHAMSEE, CCIE No. 12625
DAVID KLEBANOV, CCIE No. 13791
HESHAM FAYED, CCIE No. 9303
AHMED AFROSE
OZDEN KARAKOK, CCIE No. 6331

Cisco Press
800 East 96th Street
Indianapolis, IN 46240

CCNA Data Center DCICT 640-916 Official Cert Guide
Navaid Shamsee, David Klebanov, Hesham Fayed, Ahmed Afrose, Ozden Karakok
Copyright © 2015 Pearson Education, Inc.
Published by:
Cisco Press
800 East 96th Street
Indianapolis, IN 46240 USA
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any
means, electronic or mechanical, including photocopying, recording, or by any information storage
and retrieval system, without written permission from the publisher, except for the inclusion of brief
quotations in a review.
Printed in the United States of America
First Printing February 2015
Library of Congress Control Number: 2015930189
ISBN-13: 978-1-58714-422-6
ISBN-10: 1-58714-422-0

Warning and Disclaimer
This book is designed to provide information about the 640-916 DCICT exam for CCNA Data Center
certification. Every effort has been made to make this book as complete and as accurate as possible,
but no warranty or fitness is implied.
The information is provided on an “as is” basis. The authors, Cisco Press, and Cisco Systems, Inc.
shall have neither liability nor responsibility to any person or entity with respect to any loss or
damages arising from the information contained in this book or from the use of the discs or programs
that may accompany it.
The opinions expressed in this book belong to the authors and are not necessarily those of Cisco
Systems, Inc.

Trademark Acknowledgements
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to the accuracy of this
information. Use of a term in this book should not be regarded as affecting the validity of any
trademark or service mark.

Special Sales
For information about buying this title in bulk quantities, or for special sales opportunities (which
may include electronic versions; custom cover designs; and content particular to your business,
training goals, marketing focus, or branding interests), please contact our corporate sales department at corpsales@pearsoned.com or (800) 382-3419.
For government sales inquiries, please contact governmentsales@pearsoned.com.
For questions about sales outside the U.S., please contact international@pearsoned.com.

Feedback Information
At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each
book is crafted with care and precision, undergoing rigorous development that involves the unique
expertise of members from the professional technical community.
Readers’ feedback is a natural continuation of this process. If you have any comments regarding how
we could improve the quality of this book, or otherwise alter it to better suit your needs, you can
contact us through email at feedback@ciscopress.com. Please make sure to include the book title and
ISBN in your message.
We greatly appreciate your assistance.
Publisher: Paul Boger
Associate Publisher: Dave Dusthimer
Business Operation Manager, Cisco Press: Jan Cornelssen
Executive Editor: Mary Beth Ray
Managing Editor: Sandra Schroeder
Development Editor: Ellie Bru
Senior Project Editor: Tonya Simpson
Copy Editor: Barbara Hacha
Technical Editor: Frank Dagenhardt
Editorial Assistant: Vanessa Evans
Cover Designer: Mark Shirar
Composition: Mary Sudul
Indexer: Ken Johnson
Proofreader: Debbie Williams

Americas Headquarters
Cisco Systems, Inc.
San Jose, CA
Asia Pacific Headquarters
Cisco Systems (USA) Pte. Ltd.

Singapore
Europe Headquarters
Cisco Systems International BV
Amsterdam, The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed
on the Cisco Website at www.cisco.com/go/offices.
CCDE, CCENT, Cisco Eos, Cisco HealthPresence, the Cisco logo, Cisco Lumin, Cisco Nexus, Cisco
StadiumVision, Cisco TelePresence, Cisco WebEx, DCE, and Welcome to the Human Network are
trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks;
and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP,
CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo,
Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity,
Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me
Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort,
the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound,
MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels,
ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to
Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of
Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective
owners. The use of the word partner does not imply a partnership relationship between Cisco and any
other company. (0812R)

About the Authors
Navaid Shamsee, CCIE No. 12625, is a senior solutions architect in the Cisco Services organization.
He holds a master’s degree in telecommunication and a bachelor’s degree in electrical engineering.
He also holds a triple CCIE in routing and switching, service provider, and data center technologies.
Navaid has extensive experience in designing and implementing many large-scale enterprise and
service provider data centers. At Cisco, Navaid focuses on the security of data center, cloud, and
software-defined networking technologies. You can reach Navaid on Twitter: @NavaidShamsee.
David Klebanov, CCIE No. 13791 (Routing and Switching), is a technical solutions architect with
Cisco Systems. David has more than 15 years of diverse industry experience architecting and
deploying complex network environments. In his work, David influences strategic development of the
industry-leading data center switching platforms, which lay the foundation for the next generation of
data center fabrics. David also takes great pride in speaking at industry events, releasing
publications, and working on patents. You can reach David on Twitter: @DavidKlebanov.
Hesham Fayed, CCIE No. 9303 (Routing and Switching/Data Center), is a consulting systems
engineer for data center and virtualization based in California. Hesham has been with Cisco for more
than 9 years and has 18 years of experience in the computer industry, working with service providers
and large enterprises. His main focus is working with customers in the western region of the United
States to address their challenges by developing end-to-end data center architectures.
Ahmed Afrose is a solutions architect with the Cisco Cloud and IT Transformation (CITT) services
organization. He is responsible for providing architectural design guidance and leading complex
multitech service deliveries. Furthermore, he is involved in demonstrating the Cisco value
propositions in cloud software, application automation, software-defined data centers, and Cisco
Unified Computing System (UCS). He is experienced in server operating systems and virtualization
technologies and holds various certifications. Ahmed has a bachelor’s degree in Information Systems.
He started his career with Sun Microsystems-based technologies and has 15 years of diverse
experience in the industry. He was also directly responsible for setting up Cisco UCS Advanced
Services delivery capabilities while evangelizing the product in the region.
Ozden Karakok, CCIE No. 6331, is a technical leader from the data center products and
technologies team in the Technical Assistance Center (TAC). Ozden has been with Cisco Systems for
15 years and specializes in storage area and data center networks. Prior to joining Cisco, Ozden spent
five years working for a number of Cisco’s large customers in various telecommunication roles.
Ozden is a Cisco Certified Internetwork Expert in routing and switching, SNA/IP, and storage. She
holds VCP and ITIL certifications and is a frequent speaker at Cisco and data center events. She
holds a degree in computer engineering from Istanbul Bogazici University. Currently, she works on
Application Centric Infrastructure (ACI) and enjoys being a mother of two wonderful kids.

About the Technical Reviewer
Frank Dagenhardt, CCIE No. 42081, is a systems engineer for Cisco, focusing primarily on data
center architectures in commercial select accounts. Frank has more than 19 years of experience in
information technology and holds certifications from HP, Microsoft, Citrix, and Cisco. A Cisco
veteran of more than eight years, he works with customers daily, designing, implementing, and
supporting end-to-end architectures and solutions, such as workload mobility, business continuity,
and cloud/orchestration services. In recent months, he has been focusing on Tier 1 applications,
including VDI, Oracle, and SAP. He presented on the topic of UCS/VDI at Cisco Live 2013. When
not thinking about data center topics, Frank can be found backpacking, kayaking, or at home spending
time with his wife Sansi and sons Frank and Everett.

Dedications
Navaid Shamsee: To my parents, for their guidance and prayers. To my wife, Hareem, for her love
and support, and to my children, Ahsan, Rida, and Maria.
David Klebanov: This book is dedicated to my beautiful wife, Tanya, and our two wonderful
daughters, Lia and Maya. You fill my life with great joy and make everything worth doing! In the
words of a song: “Everything I do, I do it for you.” This book is also dedicated to all of you who
relentlessly pursue the path of becoming Cisco-certified professionals. Through your dedication you
build your own success. You are the industry leaders!
Hesham Fayed: This book is dedicated to my lovely wife, Eman, and my beautiful children, Ali,
Laila, Hamza, Fayrouz, and Farouk. Thank you for your understanding and support, especially when I
was working on the book during our vacation to meet deadlines. Without your support and
encouragement I would never have been able to finish this book. I can't forget to acknowledge my
parents; your guidance and education, and the way you always made me strive to be better, helped me
in my journey.
Ahmed Afrose: I dedicate this book to my parents, Hilal and Faiziya. Without them, none of this
would have been possible, and also to those people who are deeply curious and advocate self
advancement.
Ozden Karakok: To my loving husband, Tom, for his endless support, encouragement, and love.
Merci beaucoup, Askim.
To Remi and Mira, for being the most incredible miracles of my life and being my number one source
of happiness.
To my wonderful parents, who supported me in every chapter of my life and are an inspiration for
life.
To my awesome sisters, Gulden and Cigdem, for their unconditional support and loving me just the
way I am.
To the memory of Steve Ritchie, for being the best mentor ever.

Acknowledgements
To my co-authors, Afrose, David, Hesham, and Ozden: Thank you for your hard work and dedication.
It was my pleasure and honor to work with such a great team of co-authors. Without your support, this
book would not have been possible.
To our technical reviewer, Frank Dagenhardt: Thank you for providing us with your valuable
feedback and suggestions. Your input helped us improve the quality and accuracy of the content.
To Mary Beth Ray, Ellie Bru, Tonya Simpson, and the Cisco Press team: A big thank you to the entire
Cisco Press team for all their support in getting this book published. Special thanks to Mary Beth and
Ellie for keeping us on track and guiding us in the journey of writing this book.
—Navaid Shamsee
My deepest gratitude goes to all the people who have shared my journey over the years and who have
inspired me to always strive for more. I have been truly blessed to have worked with the most
talented group of individuals.
—David Klebanov
I’d like to thank my co-authors Navaid, Afrose, Ozden, and David for working as a team to complete
this project. A special thanks goes to Navaid for asking me to be a part of this book. Afrose, thank
you for stepping in to help me complete Chapter 18 so we could meet the deadline; really, thank you.
Mary Beth and Ellie, thank you both for your patience and support through my first publication. It has
been an honor working with you both, and I have learned a lot during this process.
I want to thank my family for their support and patience while I was working on this book. I know it
was not easy, especially on holidays and weekends, when I had to stay in and work on the book rather
than take you out.
—Hesham Fayed
The creation of this book, my first publication, has certainly been a tremendous experience, a source
of inspiration, and a whirlwind of activity. I do hope to impart more knowledge and experience in the
future on different pertinent topics in this ever-changing world of data center technologies and
practices.
Navaid Shamsee has been a good friend and colleague. I am so incredibly thankful to him for this
awesome opportunity. He has helped me achieve a milestone in my career by offering me the
opportunity to be associated with this publication and the world-renowned Cisco Press.
I’m humbled by this experienced and talented team of co-authors: Navaid, David, Hesham, and
Ozden. I enjoyed working with a team of like-minded professionals.
I am also thankful to all our professional editors, especially Mary Beth Ray and Ellie Bru for their
patience and guidance every step of the way. A big thank you to all the folks involved in production,
publishing, and bringing this book to the shelves.
I would also like to thank our technical reviewer, Frank Dagenhardt, for his keenness and attention to
detail; it helped us gauge depth, maintain consistency, and improve the overall quality of the material
for the readers.
And last, but not least, a special thanks to my girlfriend, Anna, for understanding all the hours I
needed to spend at my keyboard, almost ignoring her. You still cared, kept me energized, and
encouraged me with a smile. You were always in my mind; thanks for the support, honey.
—Ahmed Afrose
This book would never have become a reality without the help, support, and advice of a great number
of people.
I would like to thank my great co-authors Navaid, David, Afrose, and Hesham. Thank you for your
excellent collaboration, hard work, and priceless time. I truly enjoyed working with each one of you. I
really appreciate Navaid taking the lead and being the glue of this diverse team. It was a great
pleasure and honor working with such professional engineers. It would have been impossible to
finish this book without your support.
To our technical reviewer, Frank Dagenhardt: Thank you for providing us with your valuable
feedback, suggestions, hard work, and quick turnaround. Your excellent input helped us improve the
quality and accuracy of the content. It was a great pleasure for all of us to work with you.
To Mary Beth, Ellie, and the Cisco Press team: A big thank you to the entire Cisco Press team for all
their support in getting this book published. Special thanks to Mary Beth and Ellie for keeping us on
track and guiding us in the journey of writing this book.
To the extended teams at Cisco: Thank you for responding to our emails and phone calls even during
the weekends. Thank you for being patient while our minds were in the book. Thank you for covering
the shifts. Thank you for believing in and supporting us on this journey. Thank you for the world-class
organization at Cisco.
To my colleagues Mike Frase, Paul Raytick, Mike Brown, Robert Hurst, Robert Burns, Ed Mazurek,
Matthew Winnett, Matthias Binzer, Stanislav Markovic, Serhiy Lobov, Barbara Ledda, Levent
Irkilmez, Dmitry Goloubev, Roland Ducomble, Gilles Checkroun, Walter Dey, Tom Debruyne, and
many wonderful engineers: Thank you for being supportive in different stages of my career. I learned
a lot from you.
To our Cisco customers and partners: Thank you for challenging us to be the number one IT company.
To all extended family and friends: Thank you for your patience and giving us moral support in this
long journey.
—Ozden Karakok

Contents at a Glance
Introduction
Getting Started
Part I Data Center Networking
Chapter 1 Data Center Network Architecture
Chapter 2 Cisco Nexus Product Family
Chapter 3 Virtualizing Cisco Network Devices
Chapter 4 Data Center Interconnect
Chapter 5 Management and Monitoring of Cisco Nexus Devices
Part I Review
Part II Data Center Storage-Area Networking
Chapter 6 Data Center Storage Architecture
Chapter 7 Cisco MDS Product Family
Chapter 8 Virtualizing Storage
Chapter 9 Fibre Channel Storage-Area Networking
Part II Review
Part III Data Center Unified Fabric
Chapter 10 Unified Fabric Overview
Chapter 11 Data Center Bridging and FCoE
Chapter 12 Multihop Unified Fabric
Part III Review
Part IV Cisco Unified Computing
Chapter 13 Cisco Unified Computing System Architecture
Chapter 14 Cisco Unified Computing System Manager
Chapter 15 Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles
Chapter 16 Administration, Management, and Monitoring of Cisco Unified Computing System
Part IV Review
Part V Data Center Server Virtualization
Chapter 17 Server Virtualization Solutions
Chapter 18 Cisco Nexus 1000V and Virtual Switching
Part V Review

Part VI Data Center Network Services
Chapter 19 Cisco ACE and GSS
Chapter 20 Cisco WAAS and Application Acceleration
Part VI Review
Part VII Final Preparation
Chapter 21 Final Review
Part VIII Appendices
Appendix A Answers to the “Do I Know This Already?” Questions
Glossary
Index
On the CD:
Appendix B Memory Tables
Appendix C Memory Tables Answer Key
Appendix D Study Planner

Contents
Introduction
Getting Started
A Brief Perspective on the CCNA Data Center Certification Exam
Suggestions for How to Approach Your Study with This Book
This Book Is Compiled from 20 Short Read-and-Review Sessions
Practice, Practice, Practice—Did I Mention: Practice?
In Each Part of the Book You Will Hit a New Milestone
Use the Final Review Chapter to Refine Skills
Set Goals and Track Your Progress
Other Small Tasks Before Getting Started
Part I Data Center Networking
Chapter 1 Data Center Network Architecture
“Do I Know This Already?” Quiz
Foundation Topics
Modular Approach in Network Design
The Multilayer Network Design
Access Layer
Aggregation Layer
Core Layer
Collapsed Core Design
Spine and Leaf Design
Port Channel
What Is Port Channel?
Benefits of Using Port Channels
Port Channel Compatibility Requirements
Link Aggregation Control Protocol
Port Channel Modes
Configuring Port Channel
Port Channel Load Balancing
Verifying Port Channel Configuration
Virtual Port Channel
What Is Virtual Port Channel?
Benefits of Using Virtual Port Channel
Components of vPC

vPC Data Plane Operation
vPC Control Plane Operation
vPC Limitations
Configuration Steps of vPC
Verification of vPC
FabricPath
Spanning Tree Protocol
What Is FabricPath?
Benefits of FabricPath
Components of FabricPath
FabricPath Frame Format
FabricPath Control Plane
FabricPath Data Plane
Conversational Learning
FabricPath Packet Flow Example
Unknown Unicast Packet Flow
Known Unicast Packet Flow
Virtual Port Channel Plus
FabricPath Interaction with Spanning Tree
Configuring FabricPath
Verifying FabricPath
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 2 Cisco Nexus Product Family
“Do I Know This Already?” Quiz
Foundation Topics
Describe the Cisco Nexus Product Family
Cisco Nexus 9000 Family
Cisco Nexus 9500 Family
Chassis
Supervisor Engine
System Controller
Fabric Modules
Line Cards
Power Supplies

Fan Trays
Cisco QSFP Bi-Di Technology for 40 Gbps Migration
Cisco Nexus 9300 Family
Cisco Nexus 7000 and Nexus 7700 Product Family
Cisco Nexus 7004 Series Switch Chassis
Cisco Nexus 7009 Series Switch Chassis
Cisco Nexus 7010 Series Switch Chassis
Cisco Nexus 7018 Series Switch Chassis
Cisco Nexus 7706 Series Switch Chassis
Cisco Nexus 7710 Series Switch Chassis
Cisco Nexus 7718 Series Switch Chassis
Cisco Nexus 7000 and Nexus 7700 Supervisor Module
Cisco Nexus 7000 Series Supervisor 1 Module
Cisco Nexus 7000 Series Supervisor 2 Module
Cisco Nexus 7000 and Nexus 7700 Fabric Modules
Cisco Nexus 7000 and Nexus 7700 Licensing
Cisco Nexus 7000 and Nexus 7700 Line Cards
Cisco Nexus 7000 and Nexus 7700 Series Power Supply Options
Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW AC Power Supply Module
Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW DC Power Supply Module
Cisco Nexus 7000 Series 6.0-KW and 7.5-KW AC Power Supply Module
Cisco Nexus 7000 Series 6.0-KW DC Power Supply Module
Cisco Nexus 6000 Product Family
Cisco Nexus 6001P and Nexus 6001T Switches and Features
Cisco Nexus 6004 Switches Features
Cisco Nexus 6000 Switches Licensing Options
Cisco Nexus 5500 and Nexus 5600 Product Family
Cisco Nexus 5548P and 5548UP Switches Features
Cisco Nexus 5596UP and 5596T Switches Features
Cisco Nexus 5500 Products Expansion Modules
Cisco Nexus 5600 Product Family
Cisco Nexus 5672UP Switch Features
Cisco Nexus 56128P Switch Features
Cisco Nexus 5600 Expansion Modules
Cisco Nexus 5500 and Nexus 5600 Licensing Options
Cisco Nexus 3000 Product Family
Cisco Nexus 3000 Licensing Options

Cisco Nexus 2000 Fabric Extenders Product Family
Cisco Dynamic Fabric Automation
Summary
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 3 Virtualizing Cisco Network Devices
“Do I Know This Already?” Quiz
Foundation Topics
Describing VDCs on the Cisco Nexus 7000 Series Switch
VDC Deployment Scenarios
Horizontal Consolidation Scenarios
Vertical Consolidation Scenarios
VDCs for Service Insertion
Understanding Different Types of VDCs
Interface Allocation
VDC Administration
VDC Requirements
Verifying VDCs on the Cisco Nexus 7000 Series Switch
Describing Network Interface Virtualization
Cisco Nexus 2000 FEX Terminology
Nexus 2000 Series Fabric Extender Connectivity
VN-Tag Overview
Cisco Nexus 2000 FEX Packet Flow
Cisco Nexus 2000 FEX Port Connectivity
Cisco Nexus 2000 FEX Configuration on the Nexus 7000 Series
Summary
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 4 Data Center Interconnect
“Do I Know This Already?” Quiz
Foundation Topics
Introduction to Overlay Transport Virtualization
OTV Terminology

OTV Control Plane
Multicast Enabled Transport Infrastructure
Unicast-Enabled Transport Infrastructure
Data Plane for Unicast Traffic
Data Plane for Multicast Traffic
Failure Isolation
STP Isolation
Unknown Unicast Handling
ARP Optimization
OTV Multihoming
First Hop Redundancy Protocol Isolation
OTV Configuration Example with Multicast Transport
OTV Configuration Example with Unicast Transport
Summary
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 5 Management and Monitoring of Cisco Nexus Devices
“Do I Know This Already?” Quiz
Foundation Topics
Operational Planes of a Nexus Switch
Data Plane
Store-and-Forward Switching
Cut-Through Switching
Nexus 5500 Data Plane Architecture
Control Plane
Nexus 5500 Control Plane Architecture
Control Plane Policing
Control Plane Analyzer
Management Plane
Nexus Management and Monitoring Features
Out-of-Band Management
Console Port
Connectivity Management Processor
Management Port (mgmt0)
In-band Management

Role-Based Access Control
User Roles
Rules
User Role Policies
RBAC Characteristics and Guidelines
Privilege Levels
Simple Network Management Protocol
SNMP Notifications
SNMPv3
Remote Monitoring
RMON Alarms
RMON Events
Syslog
Embedded Event Manager
Event Statements
Action Statements
Policies
Generic Online Diagnostics (GOLD)
Smart Call Home
Nexus Device Configuration and Verification
Perform Initial Setup
In-Service Software Upgrade
ISSU on the Cisco Nexus 5000
ISSU on the Cisco Nexus 7000
Important CLI Commands
Command-Line Help
Automatic Command Completion
Entering and Exiting Configuration Context
Display Current Configuration
Command Piping and Parsing
Feature Command
Verifying Modules
Verifying Interfaces
Viewing Logs
Viewing NVRAM logs
Exam Preparation Tasks
Review All Key Topics

Complete Tables and Lists from Memory
Define Key Terms
Part I Review
Part II Data Center Storage-Area Networking
Chapter 6 Data Center Storage Architecture
“Do I Know This Already?” Quiz
Foundation Topics
What Is a Storage Device?
What Is a Storage-Area Network?
How to Access a Storage Device
Block-Level Protocols
File-Level Protocols
Putting It All Together
Storage Architectures
Scale-up and Scale-out Storage
Tiered Storage
SAN Design
SAN Design Considerations
1. Port Density and Topology Requirements
2. Device Performance and Oversubscription Ratios
3. Traffic Management
4. Fault Isolation
5. Control Plane Scalability
SAN Topologies
Fibre Channel
FC-0: Physical Interface
FC-1: Encode/Decode
Encoding and Decoding
Ordered Sets
FC-2: Framing Protocol
Fibre Channel Service Classes
Fibre Channel Flow Control
FC-3: Common Services
Fibre Channel Addressing
Switched Fabric Address Space
Fibre Channel Link Services

Fibre Channel Fabric Services
FC-4: ULPs—Application Protocols
Fibre Channel—Standard Port Types
Virtual Storage-Area Network
Dynamic Port VSAN Membership
Inter-VSAN Routing
Fibre Channel Zoning and LUN Masking
Device Aliases Versus Zone-Based (FC) Aliases
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 7 Cisco MDS Product Family
“Do I Know This Already?” Quiz
Foundation Topics
Once Upon a Time There Was a Company Called Andiamo
The Secret Sauce of Cisco MDS: Architecture
A Day in the Life of a Fibre Channel Frame in MDS 9000
Lossless Networkwide In-Order Delivery Guarantee
Cisco MDS Software and Storage Services
Flexibility and Scalability
Common Software Across All Platforms
Multiprotocol Support
Virtual SANs
Intelligent Fabric Applications
Network-Assisted Applications
Cisco Data Mobility Manager
I/O Accelerator
XRC Acceleration
Network Security
Switch and Host Authentication
IP Security for FCIP and iSCSI
Cisco TrustSec Fibre Channel Link Encryption
Role-Based Access Control
Port Security and Fabric Binding
Zoning

Availability
Nondisruptive Software Upgrades
Stateful Process Failover
ISL Resiliency Using Port Channels
iSCSI, FCIP, and Management-Interface High Availability
Port Tracking for Resilient SAN Extension
Manageability
Open APIs
Configuration and Software-Image Management
N-Port Virtualization
Autolearn for Network Security Configuration
FlexAttach
Cisco Data Center Network Manager Server Federation
Network Boot for iSCSI Hosts
Internet Storage Name Service
Proxy iSCSI Initiator
iSCSI Server Load Balancing
IPv6
Traffic Management
Quality of Service
Extended Credits
Virtual Output Queuing
Fibre Channel Port Rate Limiting
Load Balancing of Port Channel Traffic
iSCSI and SAN Extension Performance Enhancements
FCIP Compression
FCIP Tape Acceleration
Serviceability, Troubleshooting, and Diagnostics
Cisco Switched Port Analyzer and Cisco Fabric Analyzer
SCSI Flow Statistics
Fibre Channel Ping and Fibre Channel Traceroute
Call Home
System Log
Other Serviceability Features
Cisco MDS 9000 Family Software License Packages
Cisco MDS Multilayer Directors
Cisco MDS 9700

Cisco MDS 9500
Cisco MDS 9000 Series Multilayer Components
Cisco MDS 9700–MDS 9500 Supervisor and Crossbar Switching Modules
Cisco MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module
Cisco MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module
Cisco MDS 9000 18/4-Port Multiservice Module (MSM)
Cisco MDS 9000 16-Port Gigabit Ethernet Storage Services Node (SSN) Module
Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module
Cisco MDS 9000 8-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
Cisco MDS 9000 Family Pluggable Transceivers
Cisco MDS Multiservice and Multilayer Fabric Switches
Cisco MDS 9148
Cisco MDS 9148S
Cisco MDS 9222i
Cisco MDS 9250i
Cisco MDS Fibre Channel Blade Switches
Cisco Prime Data Center Network Manager
Cisco DCNM SAN Fundamentals Overview
DCNM-SAN Server
Licensing Requirements for Cisco DCNM-SAN Server
Device Manager
DCNM-SAN Web Client
Performance Manager
Authentication in DCNM-SAN Client
Cisco Traffic Analyzer
Network Monitoring
Performance Monitoring
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 8 Virtualizing Storage
“Do I Know This Already?” Quiz
Foundation Topics

What Is Storage Virtualization?
How Storage Virtualization Works
Why Storage Virtualization?
What Is Being Virtualized?
Block Virtualization—RAID
Virtualizing Logical Unit Numbers
Tape Storage Virtualization
Virtualizing Storage-Area Networks
N-Port ID Virtualization (NPIV)
N-Port Virtualizer (NPV)
Virtualizing File Systems
File/Record Virtualization
Where Does the Storage Virtualization Occur?
Host-Based Virtualization
Network-Based (Fabric-Based) Virtualization
Array-Based Virtualization
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 9 Fibre Channel Storage-Area Networking
“Do I Know This Already?” Quiz
Foundation Topics
Where Do I Start?
The Cisco MDS NX-OS Setup Utility—Back to Basics
The Power On Auto Provisioning
Licensing Cisco MDS 9000 Family NX-OS Software Features
License Installation
Verifying the License Configuration
On-Demand Port Activation Licensing
Making a Port Eligible for a License
Acquiring a License for a Port
Moving Licenses Among Ports
Cisco MDS 9000 NX-OS Software Upgrade and Downgrade
Upgrading to Cisco NX-OS on an MDS 9000 Series Switch

Downgrading Cisco NX-OS Release on an MDS 9500 Series Switch
Configuring Interfaces
Graceful Shutdown
Port Administrative Speeds
Frame Encapsulation
Bit Error Thresholds
Local Switching
Dedicated and Shared Rate Modes
Slow Drain Device Detection and Congestion Avoidance
Management Interfaces
VSAN Interfaces
Verify Initiator and Target Fabric Login
VSAN Configuration
Port VSAN Membership
Verify Fibre Channel Zoning on a Cisco MDS 9000 Series Switch
About Smart Zoning
About LUN Zoning and Assigning LUNs to Storage Subsystems
About Enhanced Zoning
VSAN Trunking and Setting Up ISL Port
Trunk Modes
Port Channel Modes
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part II Review
Part III Data Center Unified Fabric
Chapter 10 Unified Fabric Overview
“Do I Know This Already?” Quiz
Foundation Topics
Challenges of Today’s Data Center Networks
Cisco Unified Fabric Principles
Convergence of Network and Storage
Scalability and Growth
Security and Intelligence

Inter-Data Center Unified Fabric
Fibre Channel over IP
Overlay Transport Virtualization
Locator ID Separation Protocol
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 11 Data Center Bridging and FCoE
“Do I Know This Already?” Quiz
Foundation Topics
Data Center Bridging
Priority Flow Control
Enhanced Transmission Selection
Data Center Bridging Exchange
Quantized Congestion Notification
Fibre Channel Over Ethernet
FCoE Logical Endpoint
FCoE Port Types
FCoE ENodes
Fibre Channel Forwarder
FCoE Encapsulation
FCoE Initialization Protocol
Converged Network Adapter
FCoE Speed
FCoE Cabling Options for the Cisco Nexus 5000 Series Switches
Optical Cabling
Transceiver Modules
SFP Transceiver Modules
SFP+ Transceiver Modules
QSFP Transceiver Modules
Unsupported Transceiver Modules
Delivering FCoE Using Cisco Fabric Extender Architecture
FCoE and Cisco Nexus 2000 Series Fabric Extenders
FCoE and Straight-Through Fabric Extender Attachment Topology
FCoE and vPC Fabric Extender Attachment Topology
FCoE and Server Fabric Extenders

Adapter-FEX
Virtual Ethernet Interfaces
FCoE and Virtual Fibre Channel Interfaces
Virtual Interface Redundancy
FCoE and Cascading Fabric Extender Topologies
Adapter-FEX and FCoE Fabric Extenders in vPC Topology
Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology with vPC
Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 12 Multihop Unified Fabric
“Do I Know This Already?” Quiz
Foundation Topics
First Hop Access Layer Consolidation
Aggregation Layer FCoE Multihop
FIP Snooping
FCoE-NPV
Full Fibre Channel Switching
Dedicated Wire Versus Unified Wire
Cisco FabricPath FCoE Multihop
Dynamic FCoE FabricPath Topology
Dynamic FCoE Virtual Links
Dynamic Virtual Fibre Channel Interfaces
Redundancy and High Availability
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Part III Review
Part IV Cisco Unified Computing
Chapter 13 Cisco Unified Computing System Architecture
“Do I Know This Already?” Quiz
Foundation Topics
Evolution of Server Computing in a Nutshell

What Is a Socket?
What Is a Core?
What Is Hyperthreading?
Understanding Server Processor Numbering
Value of Cisco UCS in the Data Center
One Unified System
Unified Fabric
Unified Management and Operations
Cisco UCS Stateless Computing
Intelligent Automation, Cloud-Ready Infrastructure
Describing the Cisco UCS Product Family
Cisco UCS Computer Hardware Naming
Cisco UCS Fabric Interconnects
What Is a Unified Port?
Cisco UCS Fabric Interconnect—Expansion Modules
Cisco UCS 5108 Blade Server Chassis
Cisco UCS 5108 Blade Server Chassis—Power Supply and Redundancy Modes
Cisco UCS I/O Modules (FEX)
Cisco UCS B-series Blade Servers
Cisco UCS B-series Best Practices for Populating DRAM
Cisco Adapters (Mezzanine Cards) for UCS B-series Blade Servers
Cisco Converged Network Adapters (CNA)
Cisco Virtual Interface Cards
Cisco UCS Port Expander for VIC 1240
What Is Cisco Adapter FEX?
What Is Cisco Virtual Machine-Fabric Extender (VM-FEX)?
Cisco UCS Storage Accelerator Adapters
Cisco UCS C-series Rackmount Servers
Cisco UCS C-series Best Practices for Populating DRAM
Cisco Adapters for UCS C-series Rackmount Servers
Cisco UCS C-series RAID Adapter Options
Cisco UCS C-series Virtual Interface Cards and OEM CNA Adapters
What Is SR-IOV (Single Root-IO Virtualization)?
Cisco UCS C-series Network Interface Card
Cisco UCS C-series Host Bus Adapter
What Is N_Port ID Virtualization (NPIV)?
Cisco UCS Storage Accelerator Adapters
Cisco UCS E-series Servers
Cisco UCS “All-in-One”
Cisco UCS Invicta Series—Solid State Storage Systems
Cisco UCS Software
Cisco UCS Manager
Cisco UCS Central
Cisco UCS Director
Cisco UCS Platform Emulator
Cisco goUCS Automation Tool
Cisco UCS Connectivity Architecture
Cisco UCS 5108 Chassis to Fabric Interconnect Physical Connectivity
Cisco UCS C-series Rackmount Server Physical Connectivity
Cisco UCS 6200 Fabric Interconnect to LAN, SAN Connectivity
Ethernet Switching Mode
Fibre Channel Switching Mode
Cisco UCS I/O Module and Fabric Interconnect Connectivity
Cisco UCS I/O Module Architecture
Fabric and Adapter Port Channel Architecture
Cisco Integrated Management Controller Architecture
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 14 Cisco Unified Computing System Manager
“Do I Know This Already?” Quiz
Foundation Topics
Cabling a Cisco UCS Fabric Interconnect HA Cluster
Cisco UCS Fabric Interconnect HA Architecture
Cisco UCS Fabric Interconnect Cluster Connectivity
Initial Setup Script for Primary Fabric Interconnect
Initial Setup Script for Subordinate Secondary Fabric Interconnect
Verify Cisco UCS Fabric Interconnect Cluster Setup
Changing Cluster Addressing via Command-Line Interface
Command Modes
Changing Cluster IP Addresses via Command-Line Interface
Changing Static IP Addresses via Command-Line Interface
Cisco UCS Manager Operations
Cisco UCS Manager Functions
Cisco UCS Manager GUI Layout
Cisco UCS Manager: Navigation Pane—Tabs
Equipment Tab
Servers Tab
LAN Tab
SAN Tab
VM Tab
Admin Tab
Device Discovery in Cisco UCS Manager
Basic Port Personalities in the Cisco UCS Fabric Interconnect
Verifying Device Discovery in Cisco UCS Manager
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 15 Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles
“Do I Know This Already?” Quiz
Foundation Topics
Cisco UCS Service Profiles
Cisco UCS Hardware Abstraction and Stateless Computing
Contents of a Cisco UCS Service Profile
Other Benefits of Cisco UCS Service Profiles
Ease of Incremental Deployments
Plan and Preconfigure Cisco UCS Solutions
Ease of Server Replacement
Right-Sized Server Deployments
Cisco UCS Templates
Organizational Awareness with Service Profile Templates
Cisco UCS Service Profile Templates
Cisco UCS vNIC Templates
Cisco UCS vHBA Templates
Cisco UCS Logical Resource Pools
UUID Identity Pools
MAC Address Identity Pools
WWN Address Identity Pools
WWNN Pools
WWPN Pools
Cisco UCS Physical Resource Pools
Server Pools
Cisco UCS Server Qualification and Pool Policies
Server Auto-Configuration Policies
Creation of Policies for Cisco UCS Service Profiles and Service Profile Templates
Commonly Used Policies Explained
Equipment Tab—Global Policies
BIOS Policy
Boot Policy
Local Disk Configuration Policy
Maintenance Policy
Scrub Policy
Host Firmware Packages
Adapter Policy
Cisco UCS Chassis and Blade Power Capping
What Is Power Capping?
Cisco UCS Power Capping Strategies
Static Power Capping (Explicit Blade Power Cap)
Dynamic Power Capping (Implicit Chassis Power Cap)
Power Cap Groups (Explicit Group)
Utilizing Cisco UCS Service Profile Templates
Creating Cisco UCS Service Profile Templates
Modifying a Cisco UCS Service Profile Template
Utilizing Cisco UCS Service Profile Templates for Different Workloads
Creating Service Profiles from Templates
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 16 Administration, Management, and Monitoring of Cisco Unified Computing System
“Do I Know This Already?” Quiz
Foundation Topics
Cisco UCS Administration
Organization and Locales
Role-Based Access Control
Authentication Methods
LDAP Providers
LDAP Provider Groups
Domain, Realm, and Default Authentication Settings
Communication Management
UCS Firmware, Backup, and License Management
Firmware Terminology
Firmware Definitions
Firmware Update Guidelines
Host Firmware Packages
Cross-Version Firmware Support
Firmware Auto Install
UCS Backups
Backup Automation
License Management
Cisco UCS Management
Operational Planes
In-Band Versus Out-of-Band Management
Remote Accessibility
Direct KVM Access
Advanced UCS Management
Cisco UCS Manager XML API
goUCS Automation Toolkit
Using the Python SDK
Multi-UCS Management
Cisco UCS Monitoring
Cisco UCS System Logs
Fault, Event, and Audit Logs (Syslogs)
System Event Logs
UCS SNMP
UCS Fault Suppression
Collection and Threshold Policies
Call Home and Smart Call Home
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Part IV Review
Part V Data Center Server Virtualization
Chapter 17 Server Virtualization Solutions
“Do I Know This Already?” Quiz
Foundation Topics
Brief History of Server Virtualization
Server Virtualization Components
Hypervisor
Virtual Machines
Virtual Switching
Management Tools
Types of Server Virtualization
Full Virtualization
Paravirtualization
Operating System Virtualization
Server Virtualization Benefits and Challenges
Server Virtualization Benefits
Server Virtualization Challenges
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 18 Cisco Nexus 1000V and Virtual Switching
“Do I Know This Already?” Quiz
Foundation Topics
Evolution of Virtual Switching
Before Server Virtualization
Server Virtualization with Static VMware vSwitch
Virtual Network Components
Virtual Access Layer
Standard VMware vSwitch Overview
Standard VMware vSwitch Operations
Standard VMware vSwitch Configuration
VMware vDS Overview
VMware vDS Configuration
VMware vDS Enhancements
VMware vSwitch and vDS
Cisco Nexus 1000V Virtual Networking Solution
Cisco Nexus 1000V System Overview
Cisco Nexus 1000V Salient Features and Benefits
Cisco Nexus 1000V Series Virtual Switch Architecture
Cisco Nexus 1000V Virtual Supervisory Module
Cisco Nexus 1000V Virtual Ethernet Module
Cisco Nexus 1000V Component Communication
Cisco Nexus 1000V Management Communication
Cisco Nexus 1000V Port Profiles
Types of Port Profiles in Cisco Nexus 1000V
Cisco Nexus 1000V Administrator View and Roles
Cisco Nexus 1000V Verifying Initial Configuration
Cisco Nexus 1000V VSM Installation Methods
Initial VSM Configuration Verification
Verifying VMware vCenter Connectivity
Verify Nexus 1000V Module Status
Cisco Nexus 1000V VEM Installation Methods
Initial VEM Status on ESX or ESXi Host
Verifying VSM Connectivity from vCenter Server
Verifying VEM Agent Running
Verifying VEM Uplink Interface Status
Verifying VEM Parameters with VSM Configuration
Validating VM Port Groups and Port Profiles
Verifying Port Profile and Groups
Key New Technologies Integrated with Cisco Nexus 1000V
What Is the Difference Between VN-Link and VN-Tag?
What Is VXLAN?
What Is vPath Technology?
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part V Review
Part VI Data Center Network Services
Chapter 19 Cisco ACE and GSS
“Do I Know This Already?” Quiz
Foundation Topics
Server Load-Balancing Concepts
What Is Server Load Balancing?
Benefits of Server Load Balancing
Methods of Server Load Balancing
ACE Family Products
ACE Module
Control Plane
Data Plane
ACE 4710 Appliances
Global Site Selector
Understanding the Load-Balancing Solution Using ACE
Layer 3 and Layer 4 Load Balancing
Layer 7 Load Balancing
TCP Server Reuse
Real Server
Server Farm
Server Health Monitoring
In-band Health Monitoring
Out-of-band Health Monitoring (Probes)
Load-Balancing Algorithms (Predictor)
Stickiness
Sticky Groups
Sticky Methods
Sticky Table
Traffic Policies for Server Load Balancing
Class Maps
Policy Map
Configuration Steps
ACE Deployment Modes
Bridge Mode
Routed Mode
One-Arm Mode
ACE Virtualization
Contexts
Domains
Role-Based Access Control
Resource Classes
ACE High Availability
Management of ACE
Remote Management
Simple Network Management Protocol
XML Interface
ACE Device Manager
ANM
Global Load Balancing Using GSS and ACE
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 20 Cisco WAAS and Application Acceleration
“Do I Know This Already?” Quiz
Foundation Topics
Wide-Area Network Optimization
Characteristics of Wide-Area Networks
Latency
Bandwidth
Packet Loss
Poor Link Utilization
What Is WAAS?
WAAS Product Family
WAVE Appliances
Router-Integrated Form Factors
Virtual Appliances (vWAAS)
WAAS Express (WAASx)
WAAS Mobile
WAAS Central Manager
WAAS Architecture
WAAS Solution Overview
Key Features and Benefits of Cisco WAAS
Cisco WAAS WAN Optimization Techniques
Transport Flow Optimization
TCP Window Scaling
TCP Initial Window Size Maximization
Increased Buffering
Selective Acknowledgement
Binary Increase Congestion TCP
Compression
Data Redundancy Elimination
LZ Compression
Application-Specific Acceleration
Virtualization
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part VI Review
Part VII Final Preparation
Chapter 21 Final Review
Advice About the Exam Event
Learn the Question Types Using the Cisco Certification Exam Tutorial
Think About Your Time Budget Versus the Number of Questions
A Suggested Time-Check Method
Miscellaneous Pre-Exam Suggestions
Exam-Day Advice
Exam Review
Take Practice Exams
Practicing Taking the DCICT Exam
Advice on How to Answer Exam Questions
Taking Other Practice Exams
Find Knowledge Gaps Through Question Review
Practice Hands-On CLI Skills
Other Study Tasks
Final Thoughts
Part VIII Appendices
Appendix A Answers to the “Do I Know This Already?” Questions
Glossary
Index
On the CD:
Appendix B Memory Tables
Appendix C Memory Tables Answer Key
Appendix D Study Planner

Icons Used in This Book

Command Syntax Conventions
The conventions used to present command syntax in this book are the same conventions used in the IOS Command Reference. The Command Reference describes these conventions as follows:
Boldface indicates commands and keywords that are entered literally as shown. In actual configuration examples and output (not general command syntax), boldface indicates commands that are manually input by the user (such as a show command).
Italic indicates arguments for which you supply actual values.
Vertical bars (|) separate alternative, mutually exclusive elements.
Square brackets ([ ]) indicate an optional element.
Braces ({ }) indicate a required choice.
Braces within brackets ([{ }]) indicate a required choice within an optional element.
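As a brief, hypothetical illustration of these conventions (this syntax line is invented for the example and is not quoted from any specific command reference), consider the following:

show interface [ethernet slot/port | port-channel channel-number]

Here, show interface is typed literally as shown; slot/port and channel-number are arguments for which you supply actual values; the square brackets mark the bracketed element as optional; and the vertical bar indicates a choice between the mutually exclusive ethernet and port-channel forms.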

Introduction

Congratulations! If you are reading far enough to look at this book’s Introduction, you’ve probably already decided to pursue your Cisco CCNA Data Center certification. If you want to succeed as a technical person in the networking industry in general, and in data centers in particular, you need to know Cisco. Cisco dominates the networking marketplace, and after a few short years of entering the server marketplace, Cisco has achieved significant market share and has become one of the primary vendors for server hardware. Getting your CCNA Data Center certification is a great first step in building your skills and becoming a recognized authority in the data center field.

Exams That Help You Achieve CCNA Data Center Certification

Cisco CCNA Data Center is an entry-level Cisco data center certification that is also a prerequisite for other Cisco Data Center certifications. CCNA Data Center itself has no other prerequisites. To achieve the CCNA Data Center certification, you must pass two exams: 640-911 Introduction to Cisco Data Center Networking (DCICN) and 640-916 Introduction to Cisco Data Center Technologies (DCICT), as shown in Figure I-1.

Figure I-1 Path to Cisco CCNA Data Center Certification

The DCICN and DCICT exams differ quite a bit in terms of the topics covered. DCICN focuses on networking technology. In fact, it overlaps quite a bit with the topics in the ICND1 100-101 exam, which leads to the Cisco Certified Entry Network Technician (CCENT) certification. DCICN explains the basics of networking, focusing on Ethernet switching and IP routing. The only data center focus in the DCICN exam is that all the configuration and verification examples use Cisco Nexus data center switches. The DCICT exam instead focuses on technologies specific to the data center. These technologies include storage networking, Unified Computing (the term used by Cisco for its server products and services), Unified Fabric, network services, and data networking features unique to the Cisco Nexus series of switches.

Types of Questions on the Exams

Cisco certification exams follow the same general format. At the testing center, you sit in a quiet room in front of the PC. Before the exam timer begins, you can complete a few other tasks on the PC; for example, you can take a sample quiz just to get accustomed to the PC and the testing engine. Anyone who has basic skills in getting around a PC should have no problems with the testing environment. After the exam starts, you will be presented with a series of questions, one at a time, on the PC screen. The questions typically fall into one of the following categories:

Multiple choice, single answer
Multiple choice, multiple answers
Testlet
Drag-and-drop (DND)
Simulated lab (sim)
Simlet

The first three items in the list are all multiple-choice questions. The multiple-choice format requires you to point to and click a circle or square beside the correct answer(s). Cisco traditionally tells you how many answers you need to choose, and the testing software prevents you from choosing too many answers. The testlet asks you several multiple-choice questions, all based on the same larger scenario.

DND questions require you to move some items around in the graphical user interface (GUI). You left-click the mouse to hold the item, move it to another area, and release the mouse button to place the item in its destination (usually into a list). For some questions, you might need to put a list of items in the proper order or sequence to get the question correct.

The last two types, sim and simlet questions, both use a network simulator to ask questions. Interestingly, the two types enable Cisco to assess two very different skills. First, sim questions generally describe a problem, while your task is to configure one or more routers and switches to fix it. The exam then grades the question based on the configuration you changed or added. Basically, these questions begin with a broken configuration, and you must fix it to answer the question correctly.

Simlet questions also use a network simulator, but instead of answering the question by changing or adding the configuration, they include one or more multiple-choice questions. These questions require you to use the simulator to examine network behavior by interpreting the output of show commands you decide to leverage to answer the question. Whereas sim questions require you to troubleshoot problems related to configuration, simlets require you to analyze both working and broken networks, correlating show command output with your knowledge of networking theory and configuration commands.

You can watch and even experiment with these question types using the Cisco Exam Tutorial. To find the Cisco Certification Exam Tutorial, go to www.cisco.com and search for “exam tutorial.”

What’s on the DCICT Exam?

Everyone wants to know what is on the test, ever since the early days of school. Cisco openly publishes the topics of each of its certification exams. Cisco wants the candidates to know the variety of topics and get an idea about the kinds of knowledge and skills required for each topic. Additionally, Cisco makes an effort to keep the exam questions within the confines of the stated exam topics, and we know from talking to those involved that every question is analyzed for whether it fits the stated exam topic.

Exam topics are very specific, and the verb used in their description is very important. The verb tells us to what degree the topic must be understood and what skills are required. For example, one topic might begin with “Describe...,” another with “Configure...,” another with “Verify...,” and another with “Troubleshoot....” Questions beginning with “Troubleshoot” require the highest skills level, because to troubleshoot, you must understand the topic, be able to configure it (to see what’s wrong with the configuration), and be able to verify it (to find the root cause of the problem). Pay attention to the question verbiage. Cisco’s posted exam topics, however, are only guidelines; Cisco’s disclaimer language mentions that fact.

DCICT 640-916 Exam Topics

The exam topics for both the DCICN and DCICT exams can be easily found at Cisco.com by searching. Alternatively, you can go to www.cisco.com/go/ccna, which gets you to the page for CCNA Routing and Switching, where you can easily navigate to the nearby CCNA Data Center page. Over time, Cisco has begun making two stylistic improvements to the posted exam topics. In the past, the topics were simply listed as bullets with indentation to imply subtopics under a major topic. More often today, Cisco also numbers the exam topics, making it easier to refer to specific ones.

The DCICT contains six major headings with their respective weighting, shown in Table I-1. Cisco lists the weighting for each of the major topic headings. The weighting tells the percentage of points from your exam, which should come from each major topic area. All six topic areas hold enough weighting so that if you completely ignore an individual topic, you probably will not pass. Furthermore, data center technologies require you to put many concepts together, so you need all the pieces before you can understand the holistic view.

Table I-1 Six Major Topic Areas in the DCICT 640-916 Exam

Note that while the weighting of each topic area tells you something about the exam, in the authors’ opinion, the weighting probably does not change how you study. The weighting might indicate where you should spend a little more time during the last days before taking the exam, but otherwise, plan to study all the exam topics.

Tables I-2 through I-7 list the details of the exam topics, with one table for each of the major topic areas listed in Table I-1. Note that these tables also list the book chapters that discuss each of the exam topics.

Table I-2 Exam Topics in the First Major DCICT Exam Topic Area
Table I-3 Exam Topics in the Second Major DCICT Exam Topic Area
Table I-4 Exam Topics in the Third Major DCICT Exam Topic Area
Table I-5 Exam Topics in the Fourth Major DCICT Exam Topic Area
Table I-6 Exam Topics in the Fifth Major DCICT Exam Topic Area
Table I-7 Exam Topics in the Sixth Major DCICT Exam Topic Area

Note: Because it is possible that the exam topics may change over time, it might be worth the time to double-check the exam topics as listed on the Cisco website (go to www.cisco.com/go/certifications and navigate to the CCNA Data Center page).

About the Book

This book discusses the content and skills needed to pass the 640-916 DCICT certification exam, which is the second and final exam to achieve CCNA Data Center certification. This book’s companion title, CCNA Data Center DCICN 640-911 Official Cert Guide, discusses the content needed to pass the 640-911 DCICN certification exam. CCNA entry-level certification is the foundation for many of the Cisco professional-level certifications. We strongly recommend that you plan and structure your learning to align with both exam requirements.

Book Features

The most important and somewhat obvious objective of this book is to help you pass the DCICT exam and help you achieve the CCNA Data Center certification. In fact, if the primary objective of this book were different, the book’s title would be misleading! At the same time, the methods used in this book to help you pass the exam are also designed to make you much more knowledgeable in the general field of the data center and help you in your daily job responsibilities.

Importantly, this book does not try to help you pass the exams only by memorization, but by truly learning and understanding the topics; it would be a disservice to you if this book did not help you truly learn the material. This book uses several tools to help you discover your weak topic areas, to help you improve your knowledge and skills with those topics, and to prove that you have retained your knowledge of those topics. This book helps you pass the CCNA exam by using the following methods:

Helping you discover which exam topics you have not mastered
Providing explanations and information to fill in your knowledge gaps
Define Key Terms: Although the exam might be unlikely to ask a question such as. Each chapter includes the activities that make the most sense for studying the topics in that chapter. Each book part contains several related chapters. Foundation Topics: These are the core sections of each chapter. Exam Preparation Tasks: At the end of the “Foundation Topics” section of each chapter. This section lists the most important terms from the chapter. uncovering what you truly understood and what you did not quite yet understand. Although the content of the entire chapter could appear on the exam. The activities include the following: Review Key Topics: The Key Topic icon is shown next to the most important items in the “Foundation Topics” section of the chapter. The part reviews list tasks. allowing you to complete the table or list. The part review includes sample test questions that require you to apply the concepts from multiple chapters in that part. the core chapters have several features that help you make the best use of your time: “Do I Know This Already?” Quizzes: Each chapter begins with a quiz that helps you determine the amount of time you need to spend studying the chapter. The following list explains the most common tasks you will see in the part review: Review “Do I Know This Already?” (DIKTA) Questions: Although you have already seen the DIKTA questions from the chapters in a part. along with checklists. so that you can track your progress. “Define this term. many of the more important lists and tables from the chapter are included in a document on the CD. References: Some chapters contain a list of reference links for additional information and details on the topics discussed in that particular chapter. the “Exam Preparation Tasks” section lists a series of study activities that should be completed at the end of the chapter. answering those questions again can be a useful way to review facts. but use the Pearson IT Certification Practice Test (PCPT) exam software that comes .” the CCNA exams require that you learn and know a lot of networking terminology. Complete Tables and Lists from Memory: To help you exercise your memory and memorize certain lists of facts. asking you to write a short definition and compare your answer to the Glossary at the end of this book. This document lists only partial information. The “Review Key Topics” activity lists the key topics from the chapter and their corresponding page numbers. Part Review The part review tasks help you prepare to apply all the concepts you learned in that part of the book. The “Part Review” section suggests that you repeat the DIKTA questions. They explain the protocols. you should definitely know the information listed in each key topic. and configuration for the topics in the chapter.Supplying exercises that enhance your ability to grasp topics and deduce the answers to subjects related to the exam Providing practice exercises on the topics and the testing process via test questions on the CD Chapter Features To help you customize study time using this book. concepts.

Answer Part Review Questions: The PCPT exam software includes several exam databases. One exam database holds part review questions, written specifically for part reviews. These questions purposefully include multiple concepts in each question, sometimes from multiple chapters, to help build the skills needed for the more challenging analysis questions on the exams.
Review Key Topics: Yes, again! They are indeed the most important topics in each chapter.
Open Self-Assessment Questions: The exam is unlikely to ask a question such as “Define this term,” but the CCNA exams require that you learn and know a lot of technology concepts and architectures. This section asks you some open questions that you should try to describe or explain in your own words. This will help you develop a thorough understanding of important exam topics pertaining to that part.

Final Prep Tasks
Chapter 21, “Final Review,” lists a series of tasks that you can use for your final preparation before taking the exam.

Other Features
In addition to the features in each of the core chapters, this book, as a whole, has additional study resources, including the following:
CD-based practice exam: The companion CD contains the powerful PCPT exam engine. You can answer the questions in study mode or take simulated DCICT exams with the CD and activation code included in this book.
Companion website: The website www.ciscopress.com/title/9781587144226 posts up-to-the-minute material that further clarifies complex exam topics. Check this site regularly for new and updated postings written by the authors that provide further insight into the more troublesome topics on the exam.
PearsonITCertification.com: The website www.pearsonitcertification.com is a great resource for all things IT-certification related. Check out the great CCNA articles, videos, blogs, and other certification preparation tools from the industry’s best authors and trainers.
eBook: If you are interested in obtaining an e-book version of this title, we have included a special offer on a coupon card inserted in the CD sleeve in the back of the book. This offer enables you to purchase the CCNA Data Center DCICT 640-916 Official Cert Guide, Premium Edition e-book and practice test at a 70 percent discount off the list price. In addition to three versions of the e-book—PDF (for reading on your computer), EPUB (for reading on your tablet, mobile device, or Nook or other e-reader), and Mobi (the native Kindle version)—you will receive additional practice test questions and enhanced practice test features.

Book Organization, Chapters, and Appendixes
This book contains 20 core chapters—Chapters 1 through 20, with Chapter 21 including some suggestions for how to approach the actual exams. Each core chapter covers a subset of the topics on the 640-916 DCICT exam. The core chapters are organized into sections. The core chapters cover the following topics:

Part I: Data Center Networking
Chapter 1, “Data Center Network Architecture”: This chapter provides an overview of the data center networking architecture and design practices relevant to the exam. It goes into the detail of multilayer data center network design and technologies, such as port channel, virtual port channel, and Cisco FabricPath. Basic configuration and verification commands for these technologies are also included in this chapter.
Chapter 2, “Cisco Nexus Product Family”: This chapter provides an overview of the Nexus switches product family, which includes Nexus 9000, Nexus 7000, Nexus 6000, Nexus 5000, Nexus 3000, and Nexus 2000. The chapter explains the different specifications of each product.
Chapter 3, “Virtualizing Cisco Network Devices”: This chapter covers the virtualization capabilities of the Nexus 7000 and Nexus 5000, using Virtual Device Contexts (VDC) and Network Interface Virtualization (NIV). It details the VDC concept and the VDC deployment scenarios. The different VDC types and commands used to configure and verify the setup are also included in the chapter. Also covered in this chapter is the NIV—what it is, how it works, and how it is configured.
Chapter 4, “Data Center Interconnect”: This chapter covers the latest Cisco innovations in the data center extension solutions and in LAN extension in particular, which is called Overlay Transport Virtualization (OTV). It goes into detail about OTV—what it is, how it works, and how it is deployed. Commands to configure and how to verify the operation of the OTV are also covered in this chapter.
Chapter 5, “Management and Monitoring of Cisco Nexus Devices”: This chapter provides an overview of operational planes of the Nexus platform. It explains the functions performed by each plane and the key features of data plane, control plane, and management plane. This chapter also identifies the mechanism available in the Nexus platform to protect the control plane of the switch. It also provides an overview of out-of-band and in-band management interfaces. The Nexus platform provides several methods for device configuration and management. These methods are also discussed with some important commands for initial setup.

Part II: Data Center Storage-Area Networking
Chapter 6, “Data Center Storage Architecture”: This chapter provides an overview of the data center storage-networking technologies. It compares Small Computer System Interface (SCSI), Fibre Channel, network-attached storage (NAS) connectivity for remote server storage, and storage-area network (SAN). It covers Fibre Channel protocol and operations in detail. The edge/core layer of the SAN design is also included.
Chapter 7, “Cisco MDS Product Family”: This chapter provides an overview of the Cisco Multilayer Data Center Switch (MDS) product family in four main areas: Cisco MDS Multilayer Directors, Cisco MDS Multiservice and Multilayer Fabric Switches, Cisco MDS Software and Storage Services, and Data Center Network Management. This chapter goes directly into the key concepts of the Cisco Storage Networking product family, and the focus is on the current generation of modules and storage services solutions.
Chapter 8, “Virtualizing Storage”: Storage virtualization is the process of combining multiple network storage devices into what appears to be a single storage unit.

This chapter guides you through storage virtualization concepts, answering basic what, why, and how questions. It goes directly into the key concepts of storage virtualization. Server virtualization is partitioning a physical server into smaller virtual servers. Network virtualization is using network resources through a logical segmentation of a single physical network. Application virtualization is decoupling the application and its data from the operating system.
Chapter 9, “Fibre Channel Storage-Area Networking”: This chapter provides an overview of how to configure Cisco MDS 9000 Series multilayer switches and how to update software licenses. It also covers the power-on auto provisioning (POAP). It also describes how to verify virtual storage-area networks (VSAN), the fabric login, fabric domain, zoning, VSAN trunking, and setting up an ISL port using the command-line interface. This chapter goes directly into the practical configuration steps of the Cisco MDS product family and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification.

Part III: Data Center Unified Fabric
Chapter 10, “Unified Fabric Overview”: This chapter provides an overview of challenges faced by today’s data centers. It focuses on how Cisco Unified Fabric architecture addresses those challenges by converging traditionally disparate network and storage environments while providing a platform for scalable, secure, and intelligent services. It also takes a look at some of the technologies allowing extension of the Unified Fabric environment beyond the boundary of a single data center.
Chapter 11, “Data Center Bridging and FCoE”: This chapter introduces principles behind IEEE data center bridging standards and explains how they enhance traditional Ethernet environments to support convergence of network and storage traffic over a single Unified Fabric. It also takes a look at different cabling options to support a converged Unified Fabric environment.
Chapter 12, “Multihop Unified Fabric”: This chapter discusses various options for multihop FCoE topologies to extend the reach of Unified Fabric beyond the single-hop boundary. It discusses FCoE protocol and its operation over numerous switching topologies leveraging a variety of Cisco Fabric Extension technologies. It also discusses the use of converged Unified Fabric leveraging Cisco FabricPath technology. It takes a look at characteristics of each option, as well as their benefits and challenges.

Part IV: Cisco Unified Computing
Chapter 13, “Cisco Unified Computing System Architecture”: This chapter begins with a quick fly-by on the evolution of server computing, followed by an introduction to the Cisco UCS value proposition, hardware, and software portfolio. Then the chapter explains UCS architecture in terms of component connectivity options and unification of blade and rackmount server connectivity. It also details the Cisco Integrated Management Controller architecture and purpose.
Chapter 14, “Cisco Unified Computing System Manager”: This chapter starts by describing how to set up, configure, and verify the Cisco UCS Fabric Interconnect cluster. It then describes the process of hardware and software discovery in Cisco UCS. It also explains how to monitor and verify this process.

Chapter 15, “Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles”: The chapter starts by explaining the hardware abstraction layer in more detail and how it relates to stateless computing, followed by explanations of logical and physical resource pools and the essentials to create templates and aid rapid deployment of service profiles.
Chapter 16, “Administration, Management, and Monitoring of Cisco Unified Computing System”: This chapter starts by explaining some of the important features used when administering and monitoring Cisco UCS. It also gives you an introduction to Cisco UCS XML, goUCS automation toolkit, and the well-documented UCS XML API using the Python SDK.

Part V: Data Center Server Virtualization
Chapter 17, “Server Virtualization Solutions”: This chapter takes a peek into the history of server virtualization and discusses fundamental principles behind different types of server virtualization technologies. It evaluates the benefits and challenges of server virtualization while offering approaches to mitigate performance and security concerns.
Chapter 18, “Cisco Nexus 1000V and Virtual Switching”: The chapter starts by describing the challenges of current virtual switching layers in data centers, and then introduces the distributed virtual switches and, in particular, the Cisco vision—Cisco Nexus 1000V. The chapter explains installation options, virtual supervisory modules, commands to verify initial configuration of virtual Ethernet modules, and integration with VMware vCenter Server.

Part VI: Data Center Network Services
Chapter 19, “Cisco WAAS and Application Acceleration”: This chapter gives you an overview of the challenges associated with a wide-area network. Cisco supports a range of hardware and software platforms that are suitable for various wide-area network optimization and application acceleration requirements. An overview of these products is also provided in this chapter. It explains the Cisco WAAS solution and describes the concepts and features that are important for the DCICT exam. The methods used to optimize the WAN traffic are also discussed in detail.
Chapter 20, “Cisco ACE and GSS”: This chapter gives you an overview of server load balancing concepts required for the DCICT exam, which helps in choosing the correct solution for the load balancing requirements in the data center. It describes Cisco Application Control Engine (ACE) and Global Site Selector (GSS) products and their key features. It also details different deployment modes of ACE and their application within the data center design. GSS and ACE integration is also explained using an example.

Part VII: Final Preparation
Chapter 21, “Final Review”: This chapter suggests a plan for exam preparation after you have finished the core parts of the book, in particular explaining the many study options available in the book. As a bonus, you also see notes and tips.

Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes”: Includes the answers to all the questions from Chapters 1 through 20.
The Glossary contains definitions for all the terms listed in the “Definitions of Key Terms” section at the conclusion of Chapters 1 through 20.

Appendixes on the CD
Appendix B, “Memory Tables”: Holds the key tables and lists from each chapter, with some of the content removed. You can print this appendix and, as a memory exercise, complete the tables and lists. The goal is to help you memorize facts that can be useful on the exams.
Appendix C, “Memory Tables Answer Key”: Contains the answer key for the exercises in Appendix B.
Appendix D, “Study Planner”: A spreadsheet with major study milestones enabling you to track your progress through your study.

Reference Information
This short section contains a few topics available for reference elsewhere in the book. You may read these when you first use the book, but you may also skip these topics and refer to them later. In particular, make sure to note the final page of this introduction, which lists several contact details, including how to get in touch with Cisco Press.

Install the Pearson IT Certification Practice Test Engine and Questions
The CD in the book includes the Pearson IT Certification Practice Test (PCPT) engine (software that displays and grades a set of exam-realistic multiple-choice, fill-in-the-blank, drag-and-drop, and testlet questions). Using the PCPT engine, you can either study by going through the questions in study mode, or you can take a simulated DCICT exam that mimics real exam conditions.

The software installation process is routine as compared with other software installation processes. If you have already installed the PCPT software from another Pearson product, you do not need to reinstall the software. Simply launch the software on your desktop and proceed to activate the practice exam from this book by using the activation code included in the CD sleeve. The practice exam (the database of DCICT exam questions) is not on the CD.

Note
The cardboard CD case in the back of this book includes both the CD and a piece of thick paper. The paper lists the activation code for the practice exam associated with this book. Do not lose the activation code.

Note
Also on this same piece of paper, on the opposite side from the exam activation code, you will find a one-time-use coupon code that will give you 70% off the purchase of the CCNA Data Center DCICT 640-916 Official Cert Guide, Premium Edition eBook and Practice Test.

Install the Software from the CD
The installation process requires two major steps. The CD in the back of this book has a recent copy of the PCPT engine. The following steps outline the installation process:
Step 1. Insert the CD into your PC.
Step 2. The software that automatically runs is the Cisco Press software to access and use all CD-based features, including the exam engine. From the main menu, click the option to Install the Exam Engine.
Step 3. Respond to windows prompts as with any typical software installation process.

After you install the software, the PCPT software will use your Internet connection and download the latest versions of both the software and the question databases for this book. The installation process gives you the option to activate your exam with the activation code supplied on the paper in the CD sleeve.

Activate and Download the Practice Exam
When the exam engine is installed, you should then activate the exam associated with this book (if you did not do so during the installation process) as follows:
Step 1. Start the PCPT software from the Windows Start menu or from your desktop shortcut icon.
Step 2. To activate and download the exam associated with this book, from the My Products or Tools tab, click the Activate button.
Step 3. At the next screen, enter the activation key from the paper inside the cardboard CD holder in the back of the book. Then click the Activate button.
Step 4. The activation process will download the practice exam. Click Next, and then click Finish.

This process requires that you establish a Pearson website login. You will need this login to activate the exam, so please do register when prompted. If you already have a Pearson website login, there is no need to register again. Just use your existing login. When the activation process completes, the My Products tab should list your new exam. If you do not see the exam, make sure that you have selected the My Products tab on the menu. At this point, the software and practice exam are ready to use. Just select the exam and click the Open Exam button.

To update a particular product’s exams that you have already activated and downloaded, select the Tools tab and click the Update Products button. Updating your exams will ensure that you have the latest changes and updates to the exam data. If you want to check for updates to the PCPT software, select the Tools tab and click the Update Application button. This will ensure that you are running the latest version of the software engine.

Activating Other Products
The exam software installation process and the registration process have to happen only once. Then for each new product, only a few steps are required. For instance, if you buy another new Cisco Press Official Cert Guide or Pearson IT Certification Cert Guide, extract the activation code from the CD sleeve in the back of that book; you do not even need the CD at this point. From there, all you have to do is start PCPT (if not still up and running) and perform Steps 2 through 4 from the previous list.

PCPT Exam Databases with This Book
This book includes an activation code that allows you to load a set of practice questions. The questions come in different exams or exam databases. When you install the PCPT software and type in the activation code, the PCPT software downloads the latest version of all these exam databases. With the DCICT book alone, you get four different “exams,” or four different sets of questions. You can choose to use any of these exam databases at any time, both in study mode and practice exam mode. However, many people find it best to save some of the exams until exam review time, after they have finished reading the entire book. Here is a suggested study plan:
During part review, use PCPT to review the DIKTA questions for that part, using study mode.
During part review, use the questions built specifically for part review (the part review questions) for that part of the book, using study mode.
Save the remaining exams to use with Chapter 21, “Final Review,” using practice exam mode.

The two modes inside PCPT give you better options for study versus practicing a timed exam event.

In study mode, you can see the answers immediately, so you can study the topics more easily. Practice exam mode creates an event somewhat like the actual exam. It gives you a preset number of questions, from all chapters, with a timer event. Practice exam mode also gives you a score for that timed event.

How to View Only DIKTA Questions by Part
Each “Part Review” section asks you to repeat the Do I Know This Already (DIKTA) quiz questions from the chapters in that part. Although you could simply scan the book pages to review these questions, it is slightly better to review these questions from inside the PCPT software, just to get a little more practice in how to read questions from the testing software. To view these DIKTA (book) questions inside the PCPT software, follow these steps:
Step 1. Start the PCPT software.
Step 2. From the main (home) menu, select the item for this product, with a name like DCICT 640-916 Official Cert Guide, and click Open Exam.
Step 3. The top of the next window that appears should list some exams. Select the box beside DCICT Book Questions, and deselect the other boxes. This selects the “book” questions (that is, the DIKTA questions from the beginning of each chapter).
Step 4. In this same window, click at the bottom of the screen to deselect all objectives (chapters), and then select (check) the box beside each chapter in the part of the book you are reviewing.
Step 5. Select any other options on the right side of the window.
Step 6. Click Start to start reviewing the questions.

How to View Only Part Review Questions by Part
The exam databases you get with this book include a database of questions created solely for study during the part review process. DIKTA questions focus more on facts, with basic application. The part review questions instead focus more on application and look more like real exam questions. To view these questions, follow the same process as you did with DIKTA/book questions, but select the part review database instead of the book database, as follows:
Step 1. Start the PCPT software.
Step 2. From the main (home) menu, select the item for this product, with a name like DCICT 640-916 Official Cert Guide, and click Open Exam.
Step 3. The top of the next window should list some exams. Select the box beside Part Review Questions, and deselect the other boxes. This selects the questions intended for part-ending review.
Step 4. On this same window, click at the bottom of the screen to deselect all objectives, and then select (check) the box beside the book part you want to review. This tells the PCPT software to give you part review questions from the selected part.
Step 5. Select any other options on the right side of the window.
Step 6. Click Start to start reviewing the questions.

For More Information
If you have any comments about the book, submit them via www.ciscopress.com.

Just go to the website, select Contact Us, and type your message. Cisco might make changes that affect the CCNA data center certification from time to time. You should always check www.cisco.com/go/certification for the latest details.

The CCNA Data Center DCICT 640-916 Official Cert Guide helps you attain the CCNA data center certification. This is the DCICT exam prep book from the only Cisco-authorized publisher. We at Cisco Press believe that this book certainly can help you achieve CCNA data center certification, but the real work is up to you! We trust that your time will be well spent.

Getting Started
It might look like this is yet another certification exam preparation book—indeed, it is. We welcome your decision to become a Cisco Certified Network Associate Data Center professional! You are probably wondering how to get started. A team of five data center professionals came together to carefully pick the content for this book to send you on your CCNA data center certification journey. This book offers a solid foundation for the knowledge required to pass the CCNA data center certification exam, but at the same time it also encourages you to continue your journey to become a better data center networking professional. This “Getting Started” chapter guides you through building a strategy toward success. Take time to review it before starting your CCNA data center certification journey.

A Brief Perspective on the CCNA Data Center Certification Exam
Cisco sets the bar somewhat high for passing its certification exams, including all the exams related to CCNA certifications, such as CCNA data center. These exams challenge you from many angles, such as your ability to analyze, configure, and troubleshoot networking features, understand relevant technology landscape, and analyze emerging technology trends. The CCNA data center exam puts your knowledge in network, compute, virtualization, storage, and security to the test. Many questions in the CCNA data center certification exam apply to connectivity between the data center devices. For example, a question might give you a data center topology diagram and then ask what needs to be configured to make that setup work, what needs to be considered when designing that topology, or what is the missing ingredient of the solution. You will need certain analytical problem-solving skills. Such skills require you to prepare by doing more than just reading and memorizing this book’s material.

Suggestions for How to Approach Your Study with This Book
Although Cisco certification exams are challenging, many people pass them every day. So, what do you need to do to be ready to pass the exam? We encourage you to build your theoretical knowledge by thoroughly reading this book and taking time to remember the most critical facts. At the same time, we also encourage you to develop your practical knowledge by having hands-on experience in the topics covered in the CCNA data center certification exam. It requires a bit of practice every day.

Strategizing about your studying is a key element to your success, and today you are making a very significant step in achieving your goal. Think about studying for the CCNA data center certification exam as preparing for a marathon. What I Talk About When I Talk About Running is a memoir by Haruki Murakami in which he writes about his interest and participation in long-distance running. Murakami started running in the early 1980s; since then, he has competed in more than 20 marathons and an ultramarathon. On running as a metaphor, he wrote in his book: “For me, running is both exercise and a metaphor. Running day after day, piling up the races, bit-by-bit I raise the bar, and by clearing each level I elevate myself. At least that’s why I’ve put in the effort day after day: to raise my own level. I’m no great runner, by any means. I’m at an ordinary—or perhaps more like mediocre—level. But that’s not the point. The point is whether or not I improved over yesterday. In long-distance running the only opponent you have to beat is yourself, the way you used to be.”
Spend time thinking about key milestones you want to reach, and plan accordingly to review material for exam topics you are less comfortable with.
Similar principles apply here. Plan your time accordingly and make progress every day.

This Book Is Compiled from 20 Short Read-and-Review Sessions
First, look at your study as a series of read-and-review tasks, each on a relatively small set of related topics. This book has 20 content chapters covering the topics you need to know to pass the exam. This book organizes the content into topics of a more manageable size to give you something more digestible to build your knowledge. These topics are structured to cover main data center infrastructure technologies, such as Network, Compute, Server Virtualization, and Storage Area Networks in a Data Center. Take time to review this book chapter by chapter and topic by topic.

Practice, Practice, Practice—Did I Mention: Practice?
Second, plan to use the practice tasks at the end of each chapter. Each chapter consists of numerous “Foundation Topics” sections and “Exam Preparation Tasks” at the end. Each chapter ends with practice and study tasks under the heading “Exam Preparation Tasks.” Doing these tasks, and doing them at the end of the chapter, really helps you get ready. Do not put off using these tasks until later! The chapter-ending “Exam Preparation Tasks” section helps you with the first phase of deepening your knowledge and skills of the key topics, remembering terms, and linking the concepts in your brain so that you can remember how it all fits together. The following list describes the majority of the activities you will find in “Exam Preparation Tasks” sections:
Review key topics
Complete memory tables
Define key terms

Approach each chapter with the same plan. DIKTA is a self-assessment quiz appearing at the beginning of each content chapter. Based on your score in the “Do I Know This Already?” (DIKTA) quiz, you can choose to read the entire core (Foundation Topics) section of the chapter or just skim the chapter. Remember, regardless of whether you skim or read thoroughly, do the study tasks in the “Exam Preparation Tasks” section at the end of the chapter. Each chapter marks key topics you should pay extra attention to. Review those sections carefully and make sure you are thoroughly familiar with their content. Table 1 shows the suggested study approach for each content chapter.

Table 1 Suggested Study Approach for the Content Chapter

In Each Part of the Book You Will Hit a New Milestone
Third, view the book as having six major milestones, one for each major topic.

Beyond the more obvious organization into chapters, this book also organizes the chapters into six major topic areas called book parts. Completing each part means that you have completed a major area of study. At the end of each part, take extra time to complete the “Part Review” tasks and ask yourself where your weak and strong areas are. Acknowledge yourself for completing major milestones. Consider setting a goal date for finishing each part of the book and reward yourself for achieving this goal! Plan breaks—some personal time out—to get refreshed and motivated for the next part. Table 2 lists the six parts in this book.

Table 2 Six Major Milestones: Book Parts

Note that the “Part Review” directs you to use the Pearson Certification Practice Test (PCPT) software to access the practice questions. Each “Part Review” instructs you to repeat the DIKTA questions while using the PCPT software. It also instructs you how to access a specific set of questions reserved for reviewing concepts at the “Part Review.” Note that the PCPT software and exam databases included with this book also give you the rights to additional questions.

Use the Final Review Chapter to Refine Skills
Fourth, do the tasks outlined at the end of the book in Chapter 21, “Final Review.” Chapter 21 has two major goals. First, the tasks in the chapter help you find your weak areas, uncovering any gaps in your knowledge. Second, this chapter’s tasks give you activities to make further progress. This final element gives you repetition with high-challenge exam questions. Many questions require that you cross-connect ideas about design, configuration, verification, and troubleshooting; more reading will not necessarily develop these skills, so it helps you further develop the analytical skills you need to answer the more complicated questions in the exam. Many of the questions are purposely designed to test your knowledge in areas where the most common mistakes and misconceptions occur, helping you avoid some of the pitfalls people experience with the actual exam. Chapter 21, “Final Review,” gives some recommendations on how to best use those questions for your final exam preparation.

Set Goals and Track Your Progress
Finally, before you start reading the book, take the time to make a plan, set some goals, and be ready to track your progress. Although making lists of tasks might or might not appeal to you, depending on your personality, goal setting can help everyone studying for these exams. To set the goals, you need to know what tasks you plan to do. The list of tasks does not have to be very detailed, such as putting down every single task in the “Exam Preparation Tasks” section, the “Part Review” tasks section, or the “Final Review” chapter.
Instead, listing only the major tasks can be sufficient. You should track at least two tasks for each typical chapter: reading the “Foundation Topics” section and doing the “Exam Preparation Tasks” section at the end of the chapter. Of course, do not forget to list tasks for “Part Reviews” and the “Final Review.” When setting your goals, think about how fast you read and the length of each chapter’s “Foundation Topics” section, as listed in the Table of Contents. Table 3 shows a sample of a task planning table for Part I of this book.

Table 3 Sample Excerpt from a Planning Table

Use your goal dates as a way to manage your study. Pick reasonable dates that you can meet, and do not get discouraged if you miss a date. If you finish a task sooner than planned, move up the next few goal dates. If you miss a few dates, do not start skipping the tasks listed at the ends of the chapters! Instead, think about what is impacting your schedule—family commitments, work commitments, and so on—and either adjust your goals or work a little harder on your study.

Other Small Tasks Before Getting Started
You will need to complete a few overhead tasks, such as install software, find some PDFs, and so on. You can complete these tasks now or do them in your spare time when you need a study break during the first few chapters of the book. Do not delay, so that, should you encounter software installation problems, you have sufficient time to resolve them before you need the tool.

Register (for free) at the Cisco Learning Network (CLN, http://learningnetwork.cisco.com) and join the CCNA data center study group, for both the DCICN and DCICT exams. This mailing list allows you to lurk and participate in discussions about CCNA data center topics. Register, join the group, and set up an e-mail filter to redirect the messages to a separate folder. Even if you do not spend time reading all the posts yet, you can browse through the posts later, when you have time to read, to find relevant topics.
You can also search the posts from the CLN website. There are many recorded CCNA data center sessions from the technology experts, which you can listen to as much as you want. All the recorded sessions last a maximum of one hour.

Install the PCPT exam software and activate the exams. For more details on how to load the software, refer to the “Introduction,” under the heading “Install the Pearson IT Certification Practice Test Engine and Questions.”

Keep calm, and enjoy the ride!
Data Plane. and Management Plane Initial Setup of Nexus Switches Nexus Device Configuration and Verification Chapter 1: Data Center Network Architecture Chapter 2: Cisco Nexus Product Family Chapter 3: Virtualizing Cisco Network Devices Chapter 4: Data Center Interconnect Chapter 5: Management and Monitoring of Cisco Nexus Devices . Aggregation.Part I: Data Center Networking Exam topics covered in Part I: Modular Approach in Network Design Core. and Access Layer Collapse Core Model Port Channel Virtual Port Channel FabricPath Describe the Cisco Nexus Product Family Describe FEX Products Describe Device Virtualization Identify Key Differentiator Between DCI and Network Interconnectivity Control Plane.

Chapter 1. Data Center Network Architecture

This chapter covers the following exam topics:
Modular Approach in Network Design
Core, Aggregation, and Access Layer
Collapse Core Model
Port Channel
Virtual Port Channel
FabricPath

Data centers are designed to host critical computing resources in a centralized place. These resources can be utilized from a campus building of an enterprise network, from a branch office over the wide-area network (WAN), and from the remote locations over the public Internet. Therefore, network plays an important role in the availability of data center resources. Network architecture is an important subject that addresses many functional areas of the data center to ensure that data center services are running around the clock.

This chapter discusses the data center networking architecture and design practices relevant to the exam. It goes directly into the key concepts of data center networking and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification. It is assumed that you are familiar with the basics of networking concepts outlined in the Introducing Cisco Data Center Networking (DCICN). Basic network configuration and verification commands are also included in this chapter.

“Do I Know This Already?” Quiz
The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 1-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 1-1 “Do I Know This Already?” Section-to-Question Mapping
Which of the following are benefits of port channels? a.Caution The goal of self-assessment is to gauge your mastery of the topics in this chapter. The purpose of the aggregation layer is to provide high-speed packet forwarding. High availability c. Where will you connect a server in a data center? a. you should mark that question as wrong for purposes of the self-assessment. Which of the following statements about the data center aggregation layer is correct? a. and core layers are collapsed. Increased capacity b. Scalability b. Core layer d. High availability . Provides connectivity to servers c. Collapse Core is a design where only one core switch is used. 1. Quality of Service d. Used to enforce network security 3. c. 4. If you do not know the answer to a question or are only partially sure of the answer. What is a Collapse Core data center network design? a. Access layer b. Collapse Core design is a nonredundant design. access layer. distribution layer. Security d. In Collapse Core design. In Collapse Core design. Load balancing c. Simplicity 2. b. c. distribution and core layers are collapsed. High availability d. d. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. It provides policy-based connectivity. It provides connectivity to network-based services (load balancer and firewall). What are the benefits of modular approach in network design? a. Services layer 5. d. 6. Provides connectivity to aggregation layer b. This layer is optional in data center design. Aggregation layer c. b. Which of the following are key characteristics of data center core layer? a.

7. Cisco Nexus switches support which of the following port channel load-balancing methods?
a. Source and destination MAC address
b. Source and destination IP address
c. Source and destination TCP/UDP port number
d. None of the above
8. Which of the following attributes must match to bundle ports into a port channel?
a. Speed
b. Duplex
c. Flow control
d. LACP port priority
9. Which of the following are valid port channel modes?
a. Progressive
b. Standby
c. Passive
d. ON
e. Active
10. LACP is enabled on a port. The port initiates negotiations with a peer switch by sending LACP packets. Which LACP mode is used?
a. ON
b. Active
c. Passive
d. OFF
11. LACP is enabled on a port. The port responds to LACP packets that it receives but does not initiate LACP negotiation. Which LACP mode is used?
a. ON
b. Active
c. Passive
d. Progressive
12. Identify the Cisco switching platform that supports vPC.
a. Catalyst 6500 switches
b. Nexus 7000 Series switches
c. Nexus 5000 Series switches
d. All Cisco switches
13. Which of the following are benefits of using vPC?
a. It uses all available uplink bandwidth.
b. It eliminates Spanning Tree Protocol (STP) blocked ports.
c. It provides fast convergence if either the link or the device fails.
d. It is supported on all Cisco switches.
14. Cisco Fabric Services over Ethernet (CFSoE) performs which of the following operations?
a. It is used to exchange Layer 2 forwarding tables between vPC peers.
b. It performs consistency and compatibility checks.
c. It monitors the status of vPC member ports.
d. It performs load balancing of traffic between two vPC switches.
15. In vPC setup, Cisco Fabric Service over Ethernet (CFSoE) uses which of the following links?
a. vPC peer keepalive link
b. vPC peer link
c. vPC ports
d. vPC peer link and vPC peer keepalive link
16. How many vPC domains can be configured on a switch or VDC?
a. Only one
b. Only two
c. 255
d. There is no limit; you can configure as many as you want.
17. In a FabricPath network, which protocol is used to automatically assign a unique switch ID to each FabricPath switch?
a. Dynamic Host Configuration Protocol (DHCP)
b. Cisco Fabric Services over Ethernet (CFSoE)
c. Dynamic Resource Allocation Protocol (DRAP)
d. In FabricPath, the switch ID must be allocated manually.
18. FabricPath routes the frames based on which of the following parameters?
a. Source switch ID in outer source address (OSA)
b. Destination switch ID in outer destination address (ODA)
c. Destination MAC address
d. IP address of packet
19. In a FabricPath network, the root of the first multidestination tree is elected based on which of the following parameters?
a. Root priority
b. System ID
c. Switch ID
d. MAC address
20. What is the name of the FabricPath control plane protocol?
a. IS-IS
b. OSPF
c. BGP
d. EIGRP
Foundation Topics

Modular Approach in Network Design
In a modular design, the network is divided into distinct building blocks based on features and behaviors. These building blocks are used to construct a network topology within the data center. These blocks or modules can be added and removed from the network without redesigning the entire data center network. Modularity also implies isolation of building blocks, so these blocks are separated from each other and communicate through specific network connections between the blocks. In other words, these blocks are independent from each other, and a change in one block does not affect other blocks.

This approach has several advantages. Modular networks are scalable; scalable data center network design is achieved by using the principle of hierarchy and modularity. It allows for the considerable growth or reduction in the size of a network without making drastic changes or needing any redesign. The main advantage occurs during the changes in the network. Modularity also enables faster moves, adds, and changes (MACs), and incremental changes in the network. Modular designs are simple to design, configure, and troubleshoot. A network must always be kept as simple as possible; the concept seems obvious, but it is very often forgotten. Because of the hierarchical topology of a modular design, the addressing is also simplified within the data center network. Therefore, modular design provides easy control of traffic flow and improved security.

The Multilayer Network Design
When you design a data center using a modular approach, the network is divided into three functional layers: Core, Aggregation, and Access. These layers can be physical or logical. This section describes the function of each layer in the data center. Figure 1-1 shows functional layers of a multilayer network design.
Figure 1-1 Functional Layers in the Hierarchical Model

The hierarchical network model provides a modular framework that enables flexibility in network design and facilitates ease of implementation and troubleshooting. The hierarchical model divides networks or their modular blocks into three layers. Table 1-2 describes the key features of each layer within the data center.
Table 1-2 Data Center Functional Layers

The multilayer network design consists of several building blocks connected across a network backbone. An advantage of the multilayer network design is scalability: new buildings and server farms can be easily added or removed without changing the design. The redundancy of the building block is extended with redundancy in the backbone. In the most general model, Layer 2 switching is used in the access layer, Layer 2 and Layer 3 switching in the distribution layer, and Layer 3 switching in the core. The multilayer network design takes maximum advantage of many Layer 3 services, including segmentation, load balancing, and failure recovery. Figure 1-2 shows sample topology of a multilayer network design.

Figure 1-2 Multilayer Network Design

Access Layer
The access layer is the first point of entry into the network for edge devices, end stations,
and servers. The data center access layer provides Layer 2, Layer 3, and mainframe connectivity. The design of the access layer varies, depending on whether you use Layer 2 or Layer 3 access. The access layer in the data center is typically built at Layer 2, which allows better sharing of service devices across multiple servers. This design also enables the use of Layer 2 clustering, which requires the servers to be Layer 2 adjacent. With Layer 2 access, the default gateway for the servers can be configured at the aggregation layer.

The switches in the access layer are connected to two separate distribution layer switches for redundancy. With a dual-homing network interface card (NIC), you need two access layer switches with the same VLAN extended across both access switches using a trunk port. The VLAN or trunk supports the same IP address allocation on the two server links to the two separate switches. This topology provides access layer high availability. With Layer 2 at the aggregation layer, there are physical loops in the topology that must be managed by the STP. Rapid Per-VLAN Spanning Tree Plus (Rapid PVST+) is recommended to ensure a logically loop-free topology over the physical topology. A mix of both Layer 2 and Layer 3 access models permits a flexible solution and enables application environments to be optimally positioned. Although Layer 2 at the aggregation layer is tolerated for traditional designs, new designs try to confine Layer 2 to the access layer. This design lowers TCO and reduces complexity by reducing the number of components that you need to configure and manage.

Aggregation Layer
The aggregation (or distribution) layer aggregates the uplinks from the access layer to the data center core. This layer is the critical point for control and application services. Security and application service devices (such as load-balancing devices, SSL offloading devices, firewalls, and IPS devices) are often deployed as a module in the aggregation layer. The default gateway would be implemented at the aggregation layer switch.
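To make the Layer 2 access recommendation concrete, the following is a minimal NX-OS sketch (not taken from this chapter; the VLAN, interface numbers, and IP address are illustrative assumptions) showing Rapid PVST+ enabled on an access switch and the servers’ default gateway defined as an SVI at the aggregation layer:

    ! Access switch: Rapid PVST+ and a trunk uplink toward the aggregation layer
    spanning-tree mode rapid-pvst

    interface ethernet 1/1
      description Uplink to aggregation switch
      switchport mode trunk
      switchport trunk allowed vlan 10

    ! Aggregation switch: SVI acting as the default gateway for server VLAN 10
    feature interface-vlan

    interface vlan 10
      description Default gateway for server VLAN 10
      ip address 10.1.10.1/24
      no shutdown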
Note
In traditional multilayer data center design, service devices that are deployed at the aggregation layer are shared among all the servers. Service devices that are deployed at the access layer provide benefit only to the servers that are directly attached to the specific access switch.

The aggregation layer typically provides Layer 3 connectivity to the core. Depending on the requirements, the boundary between Layer 2 and Layer 3 at the aggregation layer can be in the multilayer switches, the firewalls, or the content switching devices. If the same VLAN is required across multiple access layer devices, the aggregation layer might also need to support a large Spanning Tree Protocol (STP) processing load.

Core Layer
Implementing a data center core is a best practice for large data centers. The data center typically connects to the campus core using Layer 3 links. The data center network is summarized, and the core injects a default route into the data center network. When the core is implemented in an initial data center design, it helps ease network expansion and avoid disruption to the data center environment. The following drivers are used to determine whether a core solution is appropriate:
40 and 100 Gigabit Ethernet density: Are there requirements for higher bandwidth connectivity, such as 40 or 100 Gigabit Ethernet? With the introduction of the Cisco Nexus 7000 M-2 Series of modules, the Cisco Nexus 7000 Series switches can now support much higher bandwidth densities.
10 Gigabit Ethernet density: Without a data center core, will there be enough 10 Gigabit Ethernet ports on the campus core switch pair to support both the campus distribution and the data center aggregation modules?
Administrative domains and policies: Separate cores help to isolate campus distribution layers from data center aggregation layers for troubleshooting, administration, and implementation of policies (such as quality of service, ACLs, and maintenance).
Anticipation of future development: The impact that can result from implementing a separate data center core layer at a later date might make it worthwhile to install the core layer at the beginning.

Key core characteristics include the following:
Distributed forwarding architecture
Low-latency switching
10 Gigabit Ethernet scalability
Scalable IP multicast support

Collapse Core Design
Although there are three functional layers in a network design, these layers do not have to be physical layers; they can be logical layers. This topic describes the reasons for combining the core and aggregation layers. Figure 1-3 shows sample topology of a collapse core network design.
Figure 1-3 Collapse Core Design

The three-tier architecture previously defined is traditionally used for a larger enterprise campus. For the campus of a small or medium-sized business, the three-tier architecture might be excessive. An alternative would be to use a two-tier architecture in which the distribution and the core layer are combined. With this option, the core switches would provide direct connections to the access-layer switches. In the collapse core design, the small or medium-sized business can reduce costs because the same switch performs both distribution and core functions. One disadvantage of two-tier architecture is scalability; as the small campus begins to scale, a three-tier architecture should be considered.

Spine and Leaf Design
Most large-scale modern data centers are now using spine and leaf design. This design can have only two layers, which are spine and leaf. In the spine and leaf design, a leaf switch acts as the access layer, and it connects to all the spine switches. The end hosts, such as servers and other edge devices, connect to leaf switches. The spine provides connectivity between the leaf switches; therefore, adding more spine switches can increase the capacity of the network, and adding more leaf switches can increase the port density on the access layer. Spine and leaf design can also be mixed with traditional multilayer data center design. In this mixed approach, you can build a scalable access layer using spine and leaf architecture, while keeping other layers of the data center intact. Examples of spine and leaf architectures are FabricPath and Application Centric Infrastructure (ACI). Figure 1-4 shows a sample spine and leaf topology.

Figure 1-4 Spine and Leaf Architecture
The following are key characteristics of spine and leaf design:
Multiple design and deployment options enabled using a variable-length spine with ECMP to leverage multiple available paths between leaf and spine.
Future proofing for higher-performance network through the selection of higher port count switches at the spine.
Reduced network congestion using a large, shared buffer-switching platform for leaf node. Network congestion results when multiple packets inside a switch request the same outbound port because of application architecture.
Lower power consumption using multiple 10 Gbps between spine and leaf instead of a single 40-Gbps link. The current power consumption of a 40-Gbps optic is more than ten times a single 10-Gbps optic.

Note
At the time of writing this book, spine and leaf topology is supported only by Cisco Nexus switches.

Port Channel
In this section, you learn about the important concepts of a port channel.

What Is Port Channel?
As you learned in the previous section, data center network topology is built using multiple switches. These switches are connected to each other using physical network links. When you connect two devices using multiple physical links, you can group these links to form a logical bundle. This logical group is called EtherChannel, or port channel, and physical interfaces participating in this group are called member ports of the group. The two devices connected via this logical group of ports see it as a single, big network pipe between the two devices. Figure 1-5 shows how multiple physical interfaces are combined to create a logical port channel between two switches.

Figure 1-5 Port Channel

When a port channel is created, you will see a new interface in the switch configuration. This new interface is a logical representation of all the member ports of the port channel. The port channel interface can be configured with its own speed, delay, maximum transmission unit (MTU), IP address, and interface description. On the Nexus 5000 Series switch, you can bundle up to 16 links into a port channel. On the Nexus 7000 Series switch, you can bundle eight active ports on an M-Series module, and up to 16 ports on the F-Series module.
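As a minimal illustration (the interface and channel numbers here are hypothetical, not from the text), bundling two links with the channel-group command creates the logical interface automatically, and the bundle can then be inspected as a single interface:

    ! Statically bundle two physical links into port channel 10
    interface ethernet 1/1-2
      channel-group 10 mode on

    ! NX-OS creates interface port-channel 10 automatically; view it as one link
    show interface port-channel 10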
. In case of a physical link failure. and so on. you cannot put a 1 G interface and a 10 G interface into the same port channel. and because of these benefits you will find it commonly used within data center networks. increasing the efficiency of your network. Media type. You can force ports with incompatible parameters to join the port channel if the following parameters are the same: Speed. it simplifies the network topology by avoiding the STP calculation and reducing network complexity by reducing the number of links between switches. VLANs. This will help you increase network availability in the case of line card failure. On the Nexus platform. port channels can be used to simplify network topology by aggregating multiple links into one logical link. Simplified network topology: In some cases. to avoid network loops. Therefore. You can also shut down the port channel interface. its speed must match with other member interfaces in this port channel. Typically.maximum transmission unit (MTU). If you configure a member port with an incompatible attribute. As an example. In the case of port channels. therefore. and interface description. when you add an interface to a port channel. multiple Ethernet links are bundled together to create a logical link. these interfaces must meet the compatibility requirements. Other interface attributes are checked by switch software to ensure that the interface is compatible with the channel group. multiple links are combined into one logical link. the software suspends that port in the port channel. For example. the capacity of the link can be increased. the port channel continues to operate even if a single member link is alive. Duplex. it automatically increases availability of the network. when creating a port channel on a modular network switch. if you have multiple links between a pair of switches. In case of an incompatible attribute. port channel creation will fail. Port Channel Compatibility Requirements To bundle multiple switch interfaces into a port channel. Benefits of Using Port Channels There are several benefits of using port channels. For example. Port channel load balancing configuration is discussed later in this chapter. MTU. Some of these attributes are Speed. This enables you to distribute traffic across multiple physical interfaces. STP blocks all the links except one. eight 10-Gbps links can be combined to create a logical link with bandwidth of 80 Gbps. High availability: In a port channel. Port mode. you can use the show port-channel compatibility-parameters command to see the full list of compatibility checks. Load balancing: The switch distributes traffic across all operational interfaces in the port channel. which will result in shutting down all member ports. In other words. it is recommended that you use member ports from different line cards. Flow-control. Some of these benefits are as follows: Increased capacity: By combing multiple Ethernet links into one logical link. Duplex and Flow-control.

Link Aggregation Control Protocol
The Link Aggregation Control Protocol (LACP) provides a standard mechanism to ensure that peer switches exchange information and agree on the necessary details to bundle ports into a port channel. This protocol ensures that both sides of the link have compatible configuration (Speed, Duplex, Flow-control, allowed VLAN) to form a bundle. Network devices use LACP to negotiate an automatic bundling of links by sending LACP packets to the peer. LACP was initially part of the IEEE 802.3ad standard and later moved to the IEEE 802.1AX standard. To run LACP, LACP must be enabled on switches at both ends of the link.

LACP enables you to configure up to 16 interfaces into a port channel. On Nexus 7000 switches, starting from NX-OS Release 5.1, you can bundle up to 16 active links into a port channel on the F-Series module. On M-Series modules, a maximum of 8 interfaces can be in active state, and a maximum of 8 interfaces can be placed in a standby state.

Port Channel Modes
Member ports in a port channel are configured with channel mode. Nexus switches support three port channel modes: on, active, and passive. When you configure a static port channel without specifying a channel mode, the channel mode is set to on. To run LACP, you first enable the LACP feature globally on the device. Then you enable LACP for each channel by setting the channel mode for each interface to active or passive. You can configure channel mode active or passive for individual links in the LACP channel group when you are adding the links to the channel group. Table 1-3 shows the port channel modes supported by Nexus switches.

Note
Nexus switches do not support Cisco proprietary Port Aggregation Protocol (PAgP) to create port channels.
Table 1-3 Port Channel Modes

If you attempt to change the channel mode to active or passive before enabling the LACP feature on the switch, the device returns an error message. After LACP is enabled globally, you can enable LACP on ports by configuring each interface in the channel for the channel mode as either active or passive. Both the passive and active modes enable LACP to negotiate between ports to determine whether they can form a port channel, based on criteria such as the port speed and the trunking state. The passive mode is useful when you do not know whether the remote system, or partner, supports LACP. When LACP attempts to negotiate with an interface in the on state, it does not receive any LACP packets; therefore, that interface does not join the LACP channel group, and it becomes an individual link. Ports can form an LACP port channel when they are in different LACP modes as long as the modes are compatible, as shown in Figure 1-6.

Figure 1-6 Port Channel Mode Compatibility

Configuring Port Channel

Port channel configuration on the Cisco Nexus switches includes the following steps:

1. Enable the LACP feature. This step is required only if you are using active mode or passive mode.

2. Configure the physical interface of the switch with the channel-group command, and specify the channel number. You can also specify the channel mode on, active, or passive within the channel-group command. This command automatically creates an interface port channel with the number that you specified in the command.

3. Configure the newly created interface port channel with the appropriate configuration, such as description, trunk configuration, allowed VLANs, and so on.

Figure 1-7 shows configuration of a port channel between two switches.

Figure 1-7 Port Channel Configuration Example

Port Channel Load Balancing

Nexus switches distribute the traffic across all operational ports in a port channel. This load balancing is done using a hashing algorithm that takes addresses in the frame as input and generates a value that selects one of the links in the channel. This provides load balancing across all member ports of a port channel, but it also ensures that frames from a particular address are always forwarded on the same port; hence, they are always received in order by the remote device. You can configure the device to use one of the following methods to load balance across the port channel:

Destination MAC address
Source MAC address
Source and destination MAC address
Destination IP address
Source IP address
Source and destination IP address
Source TCP/UDP port number
Destination TCP/UDP port number
Source and destination TCP/UDP port number

Verifying Port Channel Configuration

Several show commands are available on the Nexus switch to check port channel configuration. These commands are helpful to verify port channel configuration and troubleshoot port channel issues. Table 1-4 provides some useful commands.
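Figure 1-7 is reproduced only as an image in the book. The following is a minimal sketch of what such a configuration could look like on one of the two switches; the interface range, channel number, and VLAN list are hypothetical, and the same configuration would be repeated on the peer switch:

feature lacp

interface ethernet 1/1-2
  switchport mode trunk
  channel-group 10 mode active

interface port-channel 10
  description Link to peer switch
  switchport mode trunk
  switchport trunk allowed vlan 10-20

After both sides are configured, show port-channel summary should report the member ports as up and bundled in port channel 10.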

Table 1-4 Verify Port Channel Configuration

Virtual Port Channel

In this section, you learn about important concepts of virtual port channel (vPC).

What Is Virtual Port Channel?

You learned in the previous section that a port channel bundles multiple physical links into a logical link. All the member ports of a port channel belong to the same network switch. Cisco vPC technology allows dual-homing of a downstream device to two upstream switches. A vPC enables the extension of a port channel across two physical switches. It means that member ports in a virtual port channel can be from two different network switches. These two switches work together to create a virtual domain, so a port channel can be extended across the two devices within this virtual domain. The upstream switches present themselves to the downstream device as one switch from the port channel and Spanning Tree Protocol (STP) perspective. Figure 1-8 shows a downstream switch connected to two upstream switches in a vPC configuration.

Figure 1-8 Virtual Port Channel

In a conventional network design, building redundancy is an important design consideration to protect against device or link failure. However, for an Ethernet network this redundant topology can result in a Layer 2 loop. A Layer 2 loop is a condition where packets are circulating within the redundant network topology. These loops can cause a meltdown of network resources, causing high CPU and link utilization; therefore, it was necessary to develop protocols and control mechanisms to limit the disastrous effects of a Layer 2 topology loop in the network. Spanning Tree Protocol (STP) was the primary solution to this problem because it provides loop detection and loop management capability for Layer 2 Ethernet networks. Over a period of time, and after going through a number of enhancements and extensions, STP became very mature and stable. Although STP can support very large network environments, it still has a fundamental issue that makes it suboptimal for the network. This issue is related to the STP principle that allows only one active network path from one device to another to break the topology loops in a network. It means that no matter how many connections exist between network devices, only one will be active at any given time.

In Layer 2 network design, smaller Layer 2 segments are preferred; however, with the current data center virtualization trend, this preference is changing. A virtualized server running hypervisor software, such as VMware ESX Server, Microsoft Hyper-V, and Citrix XenServer, requires Layer 2 Ethernet connectivity between the physical servers to support seamless workload mobility within and across the data center locations. There are some other applications, such as high availability clusters, that require member servers to be on the same Layer 2 Ethernet network to support private interprocess communication and public client-facing communication using a virtual IP of the cluster. These requirements lead to the use of large Layer 2 networks. As a result, data center network architecture is shifting from a highly scalable Layer 3 network model to a highly scalable Layer 2 model. The innovation of new technologies to manage large Layer 2 network environments is the result of this shift in the data center architecture.

A large Layer 2 network is built using many network switches connected to each other in a redundant topology using multiple network links. However, STP does not provide an efficient way to manage large Layer 2 networks; therefore, it is important to migrate away from STP as a primary loop management technology toward new technologies such as vPC and Cisco FabricPath. FabricPath technology is discussed later in this chapter.

Note A brief summary of Spanning Tree is provided later in the FabricPath section. Detailed discussion on STP is available in Introducing Cisco Data Center Networking (DCICN).

Port channel technology discussed earlier in this chapter was a key enhancement to Layer 2 Ethernet networks. In a port channel, Layer 2 loops are managed by bundling multiple physical links into one logical link. It means that if you configure a port channel between two network devices, there is only one logical path between the two switches. This configuration keeps the switches from forwarding broadcast and multicast traffic back to the same link, avoiding loops. Port channel technology has another great benefit: it can recover from a link loss in the bundle within a second, with minimal loss of traffic and no effect on the active network topology. It also provides the capability to equally balance traffic by sending packets to all available links using a load-balancing algorithm, resulting in efficient use of network capacity.

The limitation of the classic port channel is that it operates between only two devices. In large networks with redundant devices, the alternative path is often connected to a different network switch in a topology that would cause a loop. Virtual Port Channel (vPC) addresses this limitation by allowing a pair of switches to act as a virtual endpoint, so it looks like a single logical entity to port channel–attached devices. This environment combines the benefits of hardware redundancy with the benefits of port channel. The two switches that act as the logical port channel endpoint are still two separate devices. These switches collaborate only for the purpose of creating a vPC.

Figure 1-9 shows the comparison between STP and vPC topology using the same devices and network links. It is important to observe that there is no blocking port in the vPC topology.

Figure 1-9 Virtual Port Channel Bi-Section Bandwidth

Benefits of Using Virtual Port Channel

Virtual Port Channel provides the following benefits:

Enables a single device to use a port channel across two upstream switches
Eliminates STP blocked ports
Provides a loop-free topology
Uses all available uplink bandwidth
Provides fast convergence in the case of link or device failure
Provides link-level resiliency
Helps ensure high availability

Components of vPC

In vPC configuration, a downstream network device that connects to a pair of upstream Nexus switches sees them as a single Layer 2 switch. These two upstream Nexus switches operate independently with their own control, data, and management planes. However, the data planes of these switches are modified to ensure optimal packet forwarding in vPC configuration. The control planes of these switches also exchange information so it appears as a single logical Layer 2 switch to a downstream device.

Note The operational plane of a switch includes control, data, and management planes. This subject is covered in Chapter 5, “Management and Monitoring of Cisco Nexus Devices.”

Before going into the detail of vPC control and data plane operation, it is important to understand the components of vPC and the function of these components. Figure 1-10 shows the components of a virtual port channel.

Figure 1-10 Virtual Port Channel Components

The vPC architecture consists of the following components:

vPC peers: The two Cisco Nexus switches combined to build a vPC architecture are referred to as vPC peers. This pair of switches acts as a single logical switch, which enables other devices to connect to the two chassis using vPC.

vPC peer link: This is a link between two vPC peers to synchronize state. The vPC peer link is the most important connectivity element in the vPC system. Typically, this link is built using two physical interfaces in port channel configuration. This link creates the illusion of a single control plane by forwarding bridge protocol data units (BPDUs) and Link Aggregation Control Protocol (LACP) packets to the primary vPC switch from the secondary vPC switch. The peer link is also used to synchronize MAC address tables between the vPC peers and to synchronize Internet Group Management Protocol (IGMP) entries for IGMP snooping. The peer link provides the necessary transport for data plane traffic, such as multicast traffic, and for the traffic of orphaned ports. If a vPC device is also a Layer 3 switch, the peer link also carries Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP) packets.

vPC peer keepalive link: This is a Layer 3 communication path between vPC peer switches. The peer keepalive link is used to monitor the status of the vPC peer when the vPC peer link goes down. It helps the vPC switch to determine whether the peer has failed or whether the vPC peer switch has failed entirely. Peer keepalive is used as a secondary test to make sure that the remote peer switch is operating properly. The vPC peer keepalive link is not used for any data or synchronization traffic; it only sends IP packets to make sure that the originating switch is operating and running vPC. Peer keepalive can also use the management port of the switch and often runs over an out-of-band (OOB) network.

vPC domain: This is a pair of vPC peer switches combined using vPC configuration. A vPC domain is created using two vPC peer devices that are connected together using a vPC peer keepalive link and a vPC peer link. A numerical domain ID is assigned to the vPC domain. A peer switch can join only one vPC domain.

Cisco Fabric Services: The Cisco Fabric Services is a reliable messaging protocol used between the vPC peer switches to synchronize control and data plane information. It uses the vPC peer link to exchange information between peer switches. When you enable the vPC feature, the device automatically enables Cisco Fabric Services over Ethernet (CFSoE). CFSoE carries messages and packets for many features linked with vPC, such as STP and IGMP. No CFSoE configuration is required for vPC to function properly. To help ensure that the vPC peer link communication for the CFSoE protocol is always available, Spanning Tree has been modified to keep the peer-link ports always forwarding.

vPC: This is a Layer 2 port channel that spans the two vPC peer switches. The device on the other end of vPC sees the vPC peer switches as a single logical switch. There is no need for this device to support vPC itself; it connects to the vPC peer switches using a regular port channel. This device can be configured to use LACP or a static port channel configuration.

vPC member port: This port is a member of a virtual port channel on the vPC peer switch.

vPC VLANs: The VLANs that are allowed on vPC are called vPC VLANs. These VLANs must also be allowed on the vPC peer link.

Non-vPC VLANs: These VLANs are not part of any vPC and are not present on a vPC peer link.

Orphan device: The term orphan device is used for any device that is connected to only one switch within a vPC domain. It means that the orphan device is not using port channel configuration, and for this device there is no vPC configured on the peer switches.

Orphan port: This is a switch port that is connected to an orphan device. This term is also used for the vPC member ports that are connected to only one vPC switch. This situation can happen in a failure condition where a device connected to vPC loses all its connection to one of the vPC peer switches.

vPC Data Plane Operation

vPC technology is designed to always forward traffic locally when possible. In a normal network condition, the vPC peer link is not used for the regular vPC traffic and is considered to be an extension of the control plane between the vPC peer switches.

The vPC peer link carries the following types of traffic:

vPC control traffic, such as CFSoE, BPDUs, and LACP messages
Flooding traffic, such as broadcast, multicast, and unknown unicast traffic
Traffic from orphan ports

It means that the vPC peer link does not typically forward data packets; it is used specifically for switch management traffic and occasionally for the data packets from failed network ports. This behavior of vPC enables the solution to scale because the bandwidth requirement for the vPC peer link is not directly related to the total bandwidth of all vPC ports.

To implement the specific vPC forwarding behavior, vPC performs loop avoidance at the data plane by implementing forwarding rules in the hardware. One of the most important forwarding rules for vPC is that a packet that enters the vPC peer switch via a vPC member port, and then goes to the other peer switch via the peer link, is not allowed to exit the switch on a vPC member port. This packet can exit on any other type of port, such as an L3 port or an orphan port. This rule prevents the packets that are received on a vPC from being flooded back onto the same vPC by the other peer switch. The vPC loop avoidance rule is depicted in Figure 1-11.

Figure 1-11 vPC Loop Avoidance

It is important to understand vPC data plane operation for the orphan ports. The first type of orphan port is the one that is connected to an orphan device and is not part of any vPC configuration. For this type of orphan port, normal switch forwarding rules are applied. The traffic for this type of orphan port can use a vPC peer link as a transit link to reach the devices on the other vPC peer switch. The second type of orphan port is the one that is a member of a vPC configuration, but the other peer switch has lost all the associated vPC member ports. In this special case, the vPC loop avoidance rule is disabled, and the vPC peer switch will be allowed to forward the traffic that is received on the peer link to one of the remaining active vPC member ports.

Cisco Fabric Services over Ethernet (CFSoE) is used to synchronize the Layer 2 forwarding tables between the vPC peer switches. Therefore, there is no dependency on the regular MAC address learning between the vPC peer switches. CFSoE-based MAC address learning is applicable only to the vPC ports. This method of learning is not used for the ports that are not part of vPC configuration.

vPC Control Plane Operation

As discussed earlier, the vPC peer link is used for control plane messaging, such as CFSoE. The CFSoE is used as the primary control plane protocol for vPC. CFSoE performs the following vPC control plane operations:

Exchange the Layer 2 forwarding table between the vPC peers: It means that if one vPC peer learns a new MAC address, both vPC peer switches know that MAC address, and this MAC is programmed on the Layer 2 Forwarding (L2F) table of both peer devices. This new method of MAC address learning using CFSoE replaces the conventional method of MAC address learning for vPC. It is helpful in reducing data traffic on the vPC peer link by forwarding the traffic to the local vPC member ports. It also helps to stabilize the vPC operation during failure conditions.

Synchronize the Address Resolution Protocol (ARP) table: This feature enables faster convergence time upon reload of a vPC switch. The ARP table is synchronized between the vPC peer devices using CFSoE. On the Nexus 5000 Series switch, this feature is implemented in Cisco NX-OS Software Release 5.0(2), and for Nexus 7000 it is implemented in Cisco NX-OS Software Version 4.2(6).

Synchronize the IGMP snooping status: In vPC configuration, IGMP snooping behavior is modified to synchronize the IGMP entries between the peer switches. When IGMP traffic enters a vPC peer switch, it triggers the synchronization of the multicast entry on both vPC peer switches.

Monitor the status of the vPC member ports: For a smooth vPC operation during failure conditions, CFSoE monitors the status of vPC member ports. When all member ports of a vPC go down on one of the peer switches, the other peer switch is notified that its ports are now orphan ports. The switch modifies the forwarding behavior so that all traffic received on the peer link for that vPC is now forwarded locally on the vPC port.

Determine primary and secondary vPC devices: When you combine the vPC peer switches in a vPC domain, an election is held to determine the primary and secondary role of the peer switches. This role is a control plane function, and it defines which switch will be primarily responsible for the generation and processing of spanning tree BPDUs. This vPC role is nonpreemptive. If the primary vPC peer device fails, the secondary vPC peer device takes the primary role, and it remains primary even if the original device is restored. In the case of peer link failure, the peer keepalive mechanism is used to find out whether the peer switch is still operational. If both switches are operational and the peer link is down, the secondary switch shuts down all vPC member ports and all switch virtual interfaces (SVI) that are associated with the vPC VLANs. This mechanism protects from split brain between the vPC peer switches.

Perform consistency and compatibility checks: The peer switches exchange information using CFSoE to ensure that vPC configuration is consistent across both switches. It checks the switch configuration for vPC as well as switch-wide parameters that need to be configured consistently on both peer switches. Similar to the conventional port channel, vPC is also subject to the port compatibility checks. CFSoE is used to validate that vPC member ports are compatible and can be combined to form a port channel.

Agree on LACP and STP parameters: In a vPC domain, two peer switches agree on LACP and STP parameters to present themselves as a single logical switch to the device that is connected on a vPC.

For LACP, a common LACP system ID is generated from a reserved pool of MAC addresses, which are combined with the vPC domain ID. For STP, the behavior depends on the peer-switch option, which must be enabled on both peer switches. When the peer-switch option is used, both switches send and process BPDUs, and both primary and secondary switches use the same bridge ID to present themselves as a single switch. If the peer-switch option is not used, only the primary vPC switch sends and processes BPDUs, and it uses its own bridge ID. In this case, the secondary switch only relays BPDUs and does not generate any BPDU.

vPC Limitations

It is important to know the limitations of vPC technology. Some of the key points about vPC limitations are outlined next:

Only two switches per vPC domain: A vPC domain by definition consists of a pair of switches identified by a shared vPC domain ID. It is not possible to add more than two switches or Virtual Device Contexts (VDC) to a vPC domain.

Only one vPC domain ID per switch: Only one vPC domain ID can be configured on a single switch or VDC. It is not possible for a switch or VDC to participate in more than one vPC domain.

Each VDC is a separate switch: vPC is a per-VDC function on the Cisco Nexus 7000 Series switches. vPCs can be configured in multiple virtual device contexts (VDC), but the configuration is entirely independent. A separate vPC peer link and vPC peer keepalive link is required for each of the VDCs. vPC domains cannot be stretched across multiple VDCs on the same switch, and all ports for a given vPC must be in the same VDC.

vPC is a Layer 2 port channel: A vPC is a Layer 2 port channel. The vPC technology does not support the configuration of Layer 3 port channels. Dynamic routing from the vPC peers to routers connected on a vPC is not supported. It is recommended that routing adjacencies be established on separate routed links.

Peer link is always 10 Gbps or more: Only 10 Gigabit Ethernet ports can be used for the vPC peer link. It is recommended that you use at least two 10 Gigabit Ethernet ports in dedicated mode on two different I/O modules.

Configuration Steps of vPC

When you are configuring a vPC, the configuration must be done on both peer switches. Before you start vPC configuration, make sure that peer switches are connected to each other via physical links that can support the requirements of a vPC peer link, and that a Layer 3 path is available between the two peer switches. vPC configuration on the Cisco Nexus switches includes the following steps:

1. Enable the vPC feature. Use the following command to enable the vPC feature:

feature vpc

2. Create the vPC domain and assign a domain ID to this domain. This domain ID must be unique and should be configured on both vPC switches.

Use the following command to create the vPC domain:

vpc domain <domain-id>

3. Establish peer keepalive connectivity. A peer keepalive link must be configured before you set up the vPC peer link. The peer keepalive link can be in any VRF, including the management VRF. Use the following command under the vPC domain to configure the peer keepalive link:

peer-keepalive destination <remote peer IP> source <local IP> vrf mgmt

4. Create a peer link. The vPC peer link is configured in a port channel configuration. This port channel must be trunked to enable multiple VLANs over the vPC. Use the following commands under the port channel to configure the peer link:

interface port-channel <peer link number>
  switchport mode trunk
  vpc peer-link

5. Create the vPC itself. Create a regular port channel and add the vpc command under the port channel configuration to create a vPC. This configuration must be the same on both peer switches.

interface port-channel <vPC number>
  vpc <vPC number>

Figure 1-12 shows configuration of vPC to connect an access layer switch to a pair of distribution layer switches.
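Figure 1-12 appears as an image in the book. As a rough sketch of steps 1 through 5 on one distribution switch, the configuration might look like the following; the domain ID, keepalive IP addresses, and interface numbers are hypothetical, and a mirror configuration is required on the vPC peer:

feature vpc

vpc domain 100
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management

interface ethernet 1/1-2
  switchport mode trunk
  channel-group 1 mode active

interface port-channel 1
  switchport mode trunk
  vpc peer-link

interface ethernet 1/10
  switchport mode trunk
  channel-group 20 mode active

interface port-channel 20
  switchport mode trunk
  vpc 20

The downstream access switch does not need any vPC-specific configuration; it simply sees port channel 20 as an ordinary LACP port channel toward a single logical switch.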

Figure 1-12 vPC Configuration Example

Verification of vPC

To verify the status of vPC, several show commands are available. In addition, you can use all the port channel show commands discussed earlier. These commands are helpful in verifying vPC configuration and troubleshooting vPC issues. Table 1-5 shows some useful vPC commands.
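For example, the health of the configuration sketched earlier could be checked with the following standard NX-OS commands (outputs omitted here):

show vpc
show vpc peer-keepalive
show vpc consistency-parameters global
show port-channel summary

The first command reports the vPC role, peer link status, and per-vPC state; the consistency-parameters output is useful for spotting switch-wide settings that do not match between the two peers.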

Table 1-5 Verify vPC Configuration

FabricPath

As mentioned earlier in this chapter, server virtualization and seamless workload mobility is a leading trend in the data center. A large-scale, highly scalable, and flexible Layer 2 fabric is needed in most data centers to support distributed and virtualized application workloads. To understand FabricPath, it is important to discuss limitations of STP.

Spanning Tree Protocol

To support high availability within a Layer 2 domain, switches are normally interconnected using redundant links. STP is a Layer 2 control plane protocol that runs on switches to ensure that you do not create topology loops when you have these redundant paths in the network. To create a loop-free topology, STP builds a treelike structure by blocking certain ports in the network. Switches exchange information using bridge protocol data units (BPDUs), and this information is used to elect a root bridge and perform subsequent network configuration by selecting only the shortest path to the root bridge. A root bridge is elected, and STP ensures that only one network path to the root bridge exists. All network ports connecting to the root bridge via an alternative network path are blocked. This tree topology implies that certain links are unused, and traffic does not necessarily take the optimal path. Figure 1-13 shows the STP topology.

Figure 1-13 Spanning Tree Topology

Key limitations of STP are as follows:

Lack of multipathing: In STP, all the switches build only one shortest path to the root switch, and all the redundant links are blocked. Therefore, equal cost multipath cannot be used to efficiently load balance traffic across multiple network links. This leads to inefficient utilization of network resources.

Inefficient path selection: STP does not always provide a shortest path between two switches. A simple example of inefficient path selection is shown in Figure 1-13, where host A and host B cannot use the direct link between the two switches because it is blocked by STP.

Slow convergence: A link failure or any other small change in network topology can cause large changes in the spanning tree that must be propagated to all switches and could take about 30 seconds in older versions of STP. Rapid Spanning Tree Protocol (RSTP) has improved the STP convergence to a few seconds; however, certain failure scenarios are complex and can destabilize the network for longer periods of time. Furthermore, when there are changes in the spanning tree topology, the switches will clear and rebuild their forwarding table. This results in more flooding to learn new MAC addresses.

Limited scalability: Several factors limit the scalability of classic Ethernet. A switch has limited space to store Layer 2 forwarding information. This limit is increasing with improvements in silicon technology; however, the demand to store more addresses is also growing with computing capacity and the introduction of server virtualization. In addition to the large forwarding table, packet flooding for broadcast and unknown unicast packets populates all end host MAC addresses in all switches in the network. Furthermore, the Layer 2 MAC addressing is nonhierarchical, and it is not possible to optimize Layer 2 forwarding tables by summarizing the Layer 2 addresses.

Protocol safety: A Layer 2 packet header does not have a time to live (TTL), so in a Layer 2 loop condition, packets can continue circulating in the network with no time out. There is no safety against STP failure or misconfiguration. Such failures can be extremely dangerous for the switched network because a topology loop can consume all network bandwidth, causing a major network outage.

What Is FabricPath?

Cisco FabricPath is an innovation to build a simplified and scalable Layer 2 fabric. It combines the benefits of Layer 3 routing with the advantages of Layer 2 switching. Layer 3 routing provides scalability; however, Layer 3 creates network segmentation, which makes it less flexible for workload mobility within a virtualized data center. FabricPath brings the stability and performance of Layer 3 routing to the Layer 2 switched network and builds a highly resilient, massively scalable, and flexible Layer 2 fabric, with multipathing, fast convergence, and shortest path selection from any host to any other host in the network.

A FabricPath network is created by connecting a group of switches into an arbitrary topology, combining them in a fabric using a few simple commands. Intermediate System-to-Intermediate System (IS-IS) Protocol is used to provide fabric-wide intelligence and ties the elements together. This protocol requires less configuration than STP in the conventional Layer 2 network; hence, overall complexity of the solution is reduced significantly. There is no need to run Spanning Tree Protocol within the FabricPath network. FabricPath can be deployed in any arbitrary topology; however, a typical spine and leaf topology is used in the data center. Figure 1-14 shows the FabricPath topology.

Figure 1-14 FabricPath Topology

Benefits of FabricPath

FabricPath converts the entire data center into one big switching fabric, offering a much more scalable and flexible network fabric. Benefits of Cisco FabricPath technology are as follows:

Operational simplicity: Cisco FabricPath is simple to configure and easy to operate. The switch control plane protocol for FabricPath starts automatically, as soon as the feature is enabled and ports are assigned to the FabricPath network. A single control protocol is used for unicast forwarding, multicast forwarding, and VLAN pruning.

Flexibility: FabricPath provides a single domain Layer 2 fabric to connect all the servers. Because all servers within this fabric reside in one common Layer 2 network, they can be moved around within the data center in a nondisruptive manner without any constraints on the design. The level of enhanced flexibility translates into lower operational costs.

High performance: High performance in a FabricPath network is achieved using an equal-cost multipath (ECMP) feature. The Ethernet fabric can use all the available paths in the network, and traffic can be load balanced across these paths. In addition to ECMP, FabricPath supports the shortest path to the destination; therefore, the network latency is reduced and higher performance can be achieved.

Reliability based on proven technologies: Cisco FabricPath uses a reliable and proven control plane mechanism, which is based on the IS-IS routing protocol. This protocol provides fast convergence that has been proven in large service provider networks.

High scalability and high availability: FabricPath delivers highly scalable bandwidth with capabilities such as ECMP and multi-topology forwarding. It means incremental levels of resiliency and bandwidth capacity, which is required as computing and virtualization density continue to grow. FabricPath provides increased redundancy and high availability with the multiple paths between the end hosts, enabling IT architects to start with a small Layer 2 network and build a much bigger data center fabric. The FabricPath frames include a time-to-live (TTL) field similar to the one used in IP, and a Reverse Path Forwarding (RPF) check is applied for multidestination frames. The FabricPath data plane also provides robust controls for loop prevention and mitigation.

Efficiency: Conventional Ethernet switches learn the MAC addresses from the frames they receive on the network. Therefore, large networks with many hosts require a large forwarding table, and it increases the cost of switches. As the network size grows, you might require equipment upgrades, driving scale requirements up in the data center. Furthermore, the Ethernet MAC address cannot be summarized to reduce the size of the forwarding table. FabricPath learns MAC addresses selectively at the edge, allowing scaling the network beyond the limits of the MAC address table of individual switches.

Components of FabricPath

To understand FabricPath control and data plane operation, it is important to understand the components of FabricPath. Figure 1-15 shows components of a FabricPath network.

Figure 1-15 Components of FabricPath

Spine switches: In a two-tier architecture, spine switches are deployed to provide connectivity between leaf switches. Spine switches act as the backbone of the switching fabric. The role of spine switches is to provide a transit network between the leaf switches.

Leaf switches: Leaf switches provide access layer connectivity for servers and other access layer devices. The hosts on two leaf switches use spine switches to communicate with each other.

FabricPath network: A network that connects the FabricPath switches is called a FabricPath network. In a data center, a typical FabricPath network is built using a spine and leaf topology. In this FabricPath network, packets are forwarded based on a new header, which includes information such as switch ID and hop count. The FabricPath header is added to the Layer 2 packet when a packet enters the FabricPath network at the ingress switch, and it is stripped when the packet leaves the FabricPath network at the egress switch.

Classic Ethernet (CE): Conventional Ethernet using transparent bridging and running STP is referred to as classic Ethernet. Nexus switches can be part of both a FabricPath network and a classic Ethernet network at the same time, with some ports participating in FabricPath and some ports in a classic Ethernet network.

FabricPath VLANs: The VLANs that are allowed on the FabricPath network are called FabricPath VLANs. When you create a VLAN, by default it operates in Classic Ethernet (CE) mode. To allow a VLAN over the FabricPath network, you must go to VLAN configuration and change the mode to FabricPath.

Edge port: Edge ports are switch interfaces that are part of classic Ethernet. These interfaces behave like normal Ethernet ports, and you can connect any Ethernet device to these ports. Because these ports are part of classic Ethernet, MAC learning is performed as usual, and packets are transmitted using standard IEEE 802.3 Ethernet frames. You can configure an edge port as an access port or as an IEEE 802.1Q trunk. An edge port can carry both CE as well as FabricPath VLANs; therefore, devices connected on an edge port in a FabricPath VLAN can send traffic to a remote host over the FabricPath network.

Core port: Ports that are part of a FabricPath network are called core ports. FabricPath core ports always forward Ethernet frames encapsulated in a FabricPath header. Ethernet frames transmitted on core ports always carry an IEEE 802.1Q VLAN tag and can be considered as a trunk port. Only FabricPath VLANs are allowed on core ports.

FabricPath IS-IS: FabricPath switches run a Shortest Path First (SPF) routing protocol based on standard IS-IS to build their forwarding table similar to a Layer 3 network. This link state protocol is the core of the FabricPath control plane.

Dynamic Resource Allocation Protocol: Dynamic Resource Allocation Protocol (DRAP) is an extension to FabricPath IS-IS that ensures networkwide unique and consistent Switch IDs and FTAG values.

MAC address table: This table contains MAC address entries and the source from where these MAC addresses are learned. The source could be a local switch interface or a remote switch. If a MAC address table entry has a local switch interface, the packet is delivered using classic Ethernet format. In a case where a MAC address table entry is pointing to a remote switch, a lookup is done in the switch table to find the next-hop interface. The packet is then encapsulated in the FabricPath header and sent to the next hop within the FabricPath network.

Switch table: The switch table provides Layer 2 routing information based on Switch ID. It contains Switch IDs and next-hop interfaces to reach the switch.

FabricPath topology: Similar to STP, where a set of VLANs could belong to one or a different instance, in FabricPath a set of VLANs that have the same forwarding behavior will be mapped to a topology or forwarding instance, and different topologies could be constructed for a different set of VLANs. This could be used for traffic engineering, administrative purposes, or security.

FabricPath Frame Format

When an Ethernet frame enters the FabricPath network, the switch encapsulates the frame with a 16-byte FabricPath header. This header includes a 48-bit outer MAC destination address (ODA), a 48-bit outer MAC source address (OSA), and a 32-bit FabricPath tag. The ODA and OSA contain a 12-bit unique identifier called the SwitchID that is used to forward packets within the FabricPath network. The FabricPath tag contains the new Ethertype for the FabricPath header, a forwarding tag called FTAG that uniquely identifies a forwarding graph, and the TTL field that has the same functionality as in the IP header. Figure 1-16 shows the FabricPath frame format.

Figure 1-16 FabricPath Frame Format

Endnode ID: This field is currently not in use. The purpose of this field is to provide future capabilities, where a virtual or physical end host located behind a single port can uniquely identify itself to the FabricPath network.

OOO/DL: This field is currently not in use. The function of the OOO (out of order)/don't learn (DL) bit varies depending on whether it is used in the outer source address (OSA) or outer destination address (ODA). The purpose of this field is to provide future capabilities for per-packet load sharing when multiple equal cost paths are available.

U/L bit: FabricPath switches set this bit in all unicast OSA and ODA fields, indicating that the MAC address is a locally administered address. This is required because the OSA and ODA fields are not MAC addresses and do not uniquely identify a particular hardware component the way a standard MAC address would.

I/G bit: The I/G bit serves the same function in FabricPath as in standard Ethernet. Any multidestination addresses have this bit set.

Switch ID: The switch ID is a 12-bit unique identifier for a switch within the FabricPath network. In the outer source address (OSA) of the FabricPath header, this field is used to identify the switch that originated the packet. In the outer destination address (ODA), this field identifies the destination switch for the FabricPath packet. This switch ID can be assigned manually within the switch configuration or allocated automatically using Dynamic Resource Allocation Protocol (DRAP).

Sub-SwitchID: Sub-SwitchID is a unique identifier assigned to an emulated switch within the FabricPath SwitchID. This emulated switch represents a vPC+ port-channel interface associated with a pair of FabricPath switches. In the absence of vPC+, this field is always set to 0. vPC+ is discussed later in this chapter.

LID: Local ID (LID) is also known as the port ID. LID is used to identify the specific physical or logical edge port from which the packet originated or to which it is destined. The value of this field in the FabricPath header is locally significant to each switch. The forwarding decision at the destination switch can use the LID to send the packet to the correct edge port without any additional lookups on the MAC table.

Ethertype: An Ethertype value of 0x8903 is used for FabricPath.

FTAG: FabricPath forwarding tag (FTAG) is a 10-bit traffic identification tag within the FabricPath header. For unicast traffic it identifies the FabricPath topology the frame is traversing. In the case of multicast and broadcast traffic, the FTAG identifies a multidestination forwarding tree within a given topology. The system selects a unique FTAG for each topology configured within FabricPath, and the system selects the FTAG value for each multidestination tree.

TTL: This field represents the Time to Live (TTL) of the FabricPath frame. TTL operates in the same manner within FabricPath as it does in traditional IP forwarding. Each FabricPath switch hop decrements the TTL by 1, and frames with an expired TTL are discarded. In case of temporary loops during protocol convergence, the TTL prevents Layer 2 frames from circulating endlessly within the network.

FabricPath Control Plane

Cisco FabricPath uses a single control plane that functions for unicast, broadcast, and multicast packets by leveraging the Layer 2 Intermediate System-to-Intermediate System (IS-IS) Protocol. The Cisco FabricPath Layer 2 IS-IS is a separate process from Layer 3 IS-IS. There is no need to run STP within the FabricPath network. FabricPath IS-IS provides the following benefits:

No IP dependency: No need for IP reachability to form adjacency between devices.

Easily extensible: Using custom type, length, value (TLV) settings, IS-IS devices can exchange information about virtually anything.

Provides Shortest Path First (SPF) routing: Excellent topology building and reconvergence characteristics.

The FabricPath control plane uses IS-IS Protocol to perform the following functions:

Switch ID allocation: FabricPath switches establish IS-IS adjacency to the directly connected neighboring switches on the core ports. These switches then start a negotiation process that ensures all FabricPath switches have a unique switch ID.

While the initial negotiation is in progress, the core interfaces are brought up but not added to the FabricPath topology, and no data plane traffic is passed on these interfaces. Dynamic Resource Allocation Protocol (DRAP) automatically performs this task. It also ensures that the type and number of FTAG values in use are consistent. However, a configuration command is also provided for the network administrator to statically assign a switch ID to a FabricPath switch.

Unicast routing: After IS-IS adjacencies are built and routing information is exchanged, the FabricPath switch calculates its unicast routing table. This routing table stores the best path to the destination FabricPath switch, which includes the FabricPath switch ID and its next hop. If multiple equal-cost paths to a particular switch ID exist, up to 16 next hops are stored in the FabricPath routing table. You can modify the number of next hops installed using the maximum-paths command in FabricPath-domain configuration mode.

Multidestination trees: To ensure loop-free topology for broadcast, multicast, and unknown unicast traffic, multidestination trees are built within the FabricPath network. FabricPath IS-IS automatically creates multiple multidestination trees connecting to all FabricPath switches and uses them to forward multidestination traffic. As of NX-OS release 6.2(2), two such trees are supported per topology. Figure 1-17 shows a network with two multidestination trees, where switch S1 is the root for tree number 1 and switch S4 is the root for tree number 2.

Figure 1-17 FabricPath Multidestination Trees

Within the FabricPath domain, first a root switch is elected for tree number 1 in a topology. After a switch becomes the root for the first tree, this root switch selects the root for each additional multidestination tree in the topology and assigns each multidestination tree a unique FTAG value. FabricPath switches use the following three parameters to elect a root for the first tree, and to select the root for any additional trees:

Root priority: An 8-bit value. You can influence root bridge election by using the root-priority command. The default root priority is 64.

System ID: A 48-bit value. It is composed of the VDC MAC address.

Switch ID: The unique 12-bit switch ID.

A higher value of these parameters is preferred. It is recommended to configure a centrally connected switch as the root switch; therefore, spine switches are the best candidates for the root.

Multicast forwarding is done based on multidestination trees computed by IS-IS. For optimized IP multicast forwarding, these trees are further pruned based on IGMP snooping at the edges of the FabricPath network. FabricPath uses IS-IS to propagate multicast receiver locations within the FabricPath network. This allows switches to prune off subtrees that do not have downstream receivers. Broadcast and unknown unicast forwarding is also done based on a multidestination tree computed by IS-IS for each topology. The FTAG uniquely identifies a topology because different topologies could have different broadcast trees.

FabricPath Data Plane

The Layer 2 packets forwarded through a FabricPath network are of the following types: known unicast packets, unknown unicast packets, multicast packets, or broadcast packets. Known unicast forwarding is done based on the mapping between the unicast MAC address and the switch ID that is already learned by the network. FabricPath IS-IS computes the SwitchID reachability for each switch in the network. Multiple equal cost paths could exist to a destination switch ID. In this case, the packets are flow-based load balanced to distribute traffic over multiple equal cost paths.

When a unicast packet enters the FabricPath network, it goes through different lookups and forwarding logic as it passes different switches. This forwarding logic is based on the role that a particular switch plays relative to the traffic flow within the FabricPath network. Following are three general roles and the forwarding logic for FabricPath traffic:

Ingress FabricPath switch

Receives the classic Ethernet (CE) frame from a FabricPath edge port.

Performs MAC table lookup to identify the FabricPath switch ID.

Performs switch ID table lookup to determine the FabricPath next-hop interface.

Encapsulates frames with the FabricPath header and sends them to the core port for the next-hop FabricPath switch.

Core FabricPath switch

Receives a FabricPath-encapsulated frame on a FabricPath core port.

Performs switch ID table lookup to determine to which next-hop interface the frame is destined.

Decrements TTL and sends the frame to the core port for the next-hop FabricPath switch.

Egress FabricPath switch

Receives a FabricPath-encapsulated frame on a FabricPath core port.

Performs switch ID table lookup to determine whether it is the egress FabricPath switch.

Uses the LID value in the outer destination address (ODA), or a MAC address table lookup, to determine on which FabricPath edge port the frame should be forwarded.

Conversational Learning

FabricPath switches learn MAC addresses selectively, allowing a significant reduction in the size of the MAC address table. This selective MAC address learning is based on bidirectional conversation of end hosts; therefore, it is called conversational learning. In other words, the switch learns remote MAC addresses only if the remote device is having a bidirectional “conversation” with a locally connected device. In a FabricPath switch, FabricPath VLANs use conversational MAC address learning, and classic Ethernet VLANs use traditional MAC address learning. The following are MAC learning rules FabricPath uses:

Only FabricPath edge switches populate the MAC table and use MAC table lookups to forward frames. FabricPath core switches generally do not learn any MAC addresses, and frames are forwarded based on the switch ID in the outer destination address (ODA).

For Ethernet frames received from a directly connected access or trunk port, the switch unconditionally learns the source MAC address as a local MAC entry. This MAC learning rule is the same as the classic Ethernet switch.

For unicast frames received with FabricPath encapsulation, the switch learns the source MAC address of the frame as a remote MAC entry only if the destination MAC address matches an already learned local MAC entry. The unknown unicast frames being flooded in the FabricPath network do not necessarily trigger learning on edge switches.

Broadcast frames do not trigger learning on edge switches; however, broadcast frames are used to update any existing MAC entries already in the table. For example, if a host moves from one switch to another and sends a gratuitous ARP to update the Layer 2 forwarding tables, FabricPath switches receiving that broadcast will update an existing entry for the source MAC address.

Multicast frames trigger learning on FabricPath switches (both edge and core), because several key LAN protocols (such as HSRP) rely on source MAC learning from multicast frames to facilitate proper forwarding.

FabricPath Packet Flow Example

In this section, you review an example of packet flow within a FabricPath network. There are two spine switches and three leaf switches in this sample topology. Spine switches are S1 and S2, and leaf switches are S10, S14, and S15.

Unknown Unicast Packet Flow

In the beginning, FabricPath is configured correctly and the network is fully converged, and the MAC address tables of switch S10 and S14 are empty. The MAC address table of switch S15 has only one local entry for Host B pointing to port 1/2. Host A connected to the leaf switch S10 would like to communicate with the Host B connected to the leaf switch S15. The following steps explain unknown unicast packet flow originated from Host A:

1. Host A sends an unknown unicast packet to switch S10 on port 1/1. This flow is unknown unicast because switch S10 does not know how to reach destination Host B.

2. S10 updates the classic Ethernet MAC address table with the entry for the source MAC address of Host A and the port 1/1 where the host is connected.

3. A lookup is performed for the destination MAC address of Host B. This lookup is missed because no entry exists for Host B in the switch S10 MAC address table.

4. S10 encapsulates the packet into the FabricPath header and floods it on the FabricPath multidestination tree. This packet has the switch S10 switch ID in its outer source address (OSA).

5. The FabricPath frame is received by switch S14 and switch S15.

6. S14 receives the packet and performs a lookup for Host B in its MAC address table. The MAC address lookup is missed. Switch S14 has no entry in its MAC address table; therefore, the Host A MAC address is not learned on switch S14.

7. S15 receives the packet and performs a lookup for Host B in its MAC address table. The MAC address lookup finds a hit, and the packet is delivered to port 1/2 on switch S15.

8. The Host A MAC address is added to the MAC table on switch S15. The MAC address table on S15 shows Host A pointing to switch S10. This MAC address is learned on FabricPath, and the switch ID in the OSA is S10.

Figure 1-18 illustrates these steps for an unknown unicast packet flow.

Figure 1-18 FabricPath Unknown Unicast

Known Unicast Packet Flow

After the unknown unicast packet flow from switch S10 to S14 and S15, some MAC address tables are populated. Switch S10 has only one entry for Host A connected on port 1/1. Switch S15 has two entries in its MAC address table. The first entry is local for Host B pointing toward port 1/2, and the second entry is remote for Host A pointing toward switch S10. The following steps explain known unicast packet flow originated from Host B:

1. Host B sends a known unicast packet to Host A on switch S15 port 1/2. This flow is known unicast because switch S15 knows how to reach the destination MAC address of Host A.

2. S15 performs the MAC address lookup and finds a hit pointing toward switch S10.

3. Switch S15 performs the switch ID lookup and selects a FabricPath core port to send the packet. The packet is encapsulated in the FabricPath header, with S15 in the OSA and S10 in the ODA.

4. The packet traverses the FabricPath network to reach the destination switch, which is switch S10.

5. The packet is received by switch S10, and the MAC address of Host B is learned by switch S10. The MAC address table on S10 shows Host B pointing to switch S15. This MAC address is learned on FabricPath, and the switch ID in the OSA is S15. Switch S10 performs the MAC address table lookup for Host A, and the packet is delivered on the port 1/1.

Figure 1-19 illustrates these steps for a known unicast packet flow.

Figure 1-19 FabricPath Known Unicast

Virtual Port Channel Plus

A virtual port channel plus (vPC+) enables a classical Ethernet (CE) vPC domain and Cisco FabricPath to interoperate. In a FabricPath network, the Layer 2 MAC addresses are learned only by the edge switches. The learning consists of mapping the Layer 2 MAC address to a switch ID, and it is performed in the data plane. In a vPC configuration, a dual-connected host can send a packet into the FabricPath network from two edge switches. When a packet is received at a remote FabricPath switch, with the switch ID of the first vPC switch in the OSA, the MAC address table of this remote FabricPath switch is updated with the switch ID of the first vPC switch. Because the virtual port channel performs load balancing, the next traffic flow can go via the second vPC switch. When the remote FabricPath switch receives a packet for the same MAC address with the switch ID of the second vPC switch in its OSA, it sees it as a MAC move. On this remote FabricPath switch, the MAC address table is updated with the switch ID of the second vPC switch. This behavior is called MAC flapping, and it can destabilize the network. To address this problem, FabricPath implements an emulated switch. This emulated switch is a logical way of combining two or more FabricPath switches to emulate a single switch to the rest of the FabricPath network.

The emulated switch implementation in FabricPath where two FabricPath edge switches provide a vPC to a third-party device is called vPC+. Figure 1-20 shows an example of vPC+ configuration.

Figure 1-20 FabricPath with vPC+

In this example, Host B is connected to switch S14 and S15 in vPC configuration. The packets are load-balanced across the vPC link, coming from S14 as well as from S15, and these packets are seen at the remote switch S10. This results in MAC flapping at switch S10. To solve this problem, configure switch S14 and S15 with a FabricPath emulated switch using vPC+ configuration. The FabricPath packets originated by the two emulating switches are sourced with the emulated switch ID. The other FabricPath switches are not aware of this emulation and simply see the emulated switch reachable through both of these emulating switches. As a result of this configuration, remote switch S10 sees the Host B MAC address entry pointing to the emulated switch.

To configure vPC+, simply configure the vPC peer-link as a FabricPath core port, and assign an emulated switch ID to the vPC. It means that the two emulating switches must be directly connected via a peer link, and there should be a peer keepalive path between the two switches. A brief configuration sketch follows at the end of this section.

FabricPath Interaction with Spanning Tree

FabricPath edge ports support direct Classic Ethernet host connections and connections to traditional Spanning Tree switches. FabricPath switches transmit and process STP BPDUs on FabricPath edge ports. However, BPDUs, including TCNs, are not transmitted on FabricPath core ports; as a result, no BPDUs are forwarded or tunneled through the FabricPath network. By default, FabricPath isolates STP domains. The entire FabricPath network appears as a single STP bridge to any connected STP domains. To appear as one STP bridge, all FabricPath switches share a common bridge ID, or BID (C84C.75FA.6000). This BID is statically defined and is not user configurable.

Each FabricPath edge switch must be configured as root for all FabricPath VLANs. If you are connecting STP devices to the FabricPath network, make sure you configure all edge switches as STP root. Therefore, if multiple FabricPath edge switches connect to the same STP domain, make sure that those edge switches use the same bridge priority value.
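Returning to the vPC+ configuration described above, a rough sketch on one of the emulating switches (S14 or S15) might look like the following; the domain ID, emulated switch ID, keepalive addresses, and interface numbers are hypothetical, and the main differences from ordinary vPC are the emulated switch ID under the vPC domain and the peer link running in FabricPath mode:

vpc domain 100
  fabricpath switch-id 1000
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management

interface port-channel 1
  switchport mode fabricpath
  vpc peer-link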

To ensure that the FabricPath network acts as STP root, all FabricPath edge ports have the STP rootguard function enabled implicitly. If a superior BPDU is received on a FabricPath edge port, the port is placed in the L2 Gateway Inconsistent state until the condition is cleared.

Configuring FabricPath

FabricPath can be configured using a few simple steps. You perform these steps on all FabricPath switches. These steps involve enabling the FabricPath feature, identifying FabricPath interfaces, and identifying FabricPath VLANs. Before you start FabricPath configuration, ensure the following:

You have Nexus devices that support FabricPath.

You have the appropriate license for FabricPath. The Enhanced Layer 2 license is required to run FabricPath. If required, install the license using the following command:

install license <filename>

The system is running minimum NX-OS 5.1 (Nexus 7000) / NX-OS 5.3 (Nexus 5500) software release.

The FabricPath configuration steps are as follows:

1. Before you start configuring FabricPath on a switch, the FabricPath feature set must be enabled. Install the FabricPath feature set by issuing install feature-set fabricpath. To enable the FabricPath feature set, use the following command in switch global configuration mode:

feature-set fabricpath

2. Define FabricPath VLANs: identify the VLANs that will be extended across the FabricPath network. By default, all the VLANs are in Classic Ethernet mode. You can change the VLAN mode to FabricPath by going into the VLAN configuration. Use the following command to change the VLAN mode for a range of VLANs:

vlan <range>
  mode fabricpath

3. Identify FabricPath interfaces: identify the FabricPath core ports. These ports connect to other FabricPath switches to establish IS-IS adjacency. Configuring a port in FabricPath mode automatically enables IS-IS protocol on the port. There is no need to configure IS-IS protocol for FabricPath. The following command is used to configure an interface in FabricPath mode:

interface <name>
  switchport mode fabricpath

After performing this configuration on all FabricPath switches in the topology, FabricPath devices will form IS-IS adjacencies. The unicast and multicast routing information is exchanged, and switches will start forwarding traffic. FabricPath Dynamic Resource Allocation Protocol (DRAP) automatically assigns a switch ID to each switch within the FabricPath network. You can also configure a static switch ID using the fabricpath switch-id <ID number> command. Figure 1-21 shows an example of FabricPath topology and its configuration.
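Because Figure 1-21 is reproduced only as an image in the book, here is a minimal sketch combining the steps above on a single switch; the VLAN range, interface range, and static switch ID are hypothetical:

install feature-set fabricpath

configure terminal
feature-set fabricpath

fabricpath switch-id 10

vlan 10-20
  mode fabricpath

interface ethernet 1/1-2
  switchport mode fabricpath

The same pattern would be repeated on every spine and leaf switch in the topology, with a different switch ID on each device (or with DRAP left to assign the IDs automatically).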

Figure 1-21 FabricPath Configuration Example

Verifying FabricPath
The show mac address-table command can be used to verify that MAC addresses are learned on Cisco FabricPath edge devices. The command shows local addresses with a pointer to the interface on which the address was learned. For remote addresses, it provides a pointer to the remote switch from which the address was learned. The show fabricpath route command can be used to view the Cisco FabricPath routing table that results from the Cisco FabricPath IS-IS SPF calculations. The Cisco FabricPath routing table shows the best paths to all the switches in the fabric. If multiple equal paths are available between two switches, all paths will be installed in the Cisco FabricPath routing table to provide ECMP. Table 1-6 lists some useful commands to verify FabricPath.
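As a quick first check on an edge switch, a sketch of the verification commands might look as follows. The first two commands are the ones named in the text; the last two are additional adjacency and switch-ID checks that exist on FabricPath-capable NX-OS releases (output omitted here because it depends on the topology).

show mac address-table dynamic
show fabricpath route
show fabricpath isis adjacency
show fabricpath switch-id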

Table 1-6 Verify FabricPath Configuration

Exam Preparation Tasks
Review All Key Topics
Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 1-7 lists a reference for these key topics and the page numbers on which each is found.

Table 1-7 Key Topics for Chapter 1

Complete Tables and Lists from Memory
Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms
Define the following key terms from this chapter, and check your answers in the glossary:
Virtual LAN (VLAN)
Virtual Port Channel (vPC)
Network Interface Card (NIC)
Application Centric Infrastructure (ACI)
Maximum Transmission Unit (MTU)
Link Aggregation Control Protocol (LACP)
Spanning Tree Protocol (STP)

Virtual Device Context (VDC)
Cisco Fabric Services over Ethernet (CFSoE)
Hot Standby Router Protocol (HSRP)
Virtual Router Redundancy Protocol (VRRP)
Bridge Protocol Data Unit (BPDU)
Internet Group Management Protocol (IGMP)
Time To Live (TTL)
Dynamic Resource Allocation Protocol (DRAP)

Chapter 2. Cisco Nexus Product Family

This chapter covers the following exam topics:
Describe the Cisco Nexus Product Family
Describe FEX Products

The Cisco Nexus product family offers a broad portfolio of LAN and SAN switching products, spanning software switches that integrate with multiple types of hypervisors to hardware data center core switches and campus core switches. This chapter focuses on the Cisco Nexus product family, which includes Nexus 9000, Nexus 7000, Nexus 6000, Nexus 5000, Nexus 3000, and Nexus 2000. You learn the different specifications of each product, including different types of data center protocols, such as FCoE, FC, and iSCSI. Cisco Nexus products provide the performance, scalability, and availability to accommodate diverse data center needs.

Note
Nexus 1000V is not covered in this chapter; it is covered in detail in Chapter 18, "Cisco Nexus 1000V and Virtual Switching."

"Do I Know This Already?" Quiz
The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 2-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 2-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following Nexus 9000 hardware can work as a spine in a Spine-Leaf topology? (Choose three.)

a. Nexus 9508
b. Nexus 9516
c. Nexus 9396PX
d. Nexus 93128TX
e. Nexus 9336PQ
2. True or false? The Nexus N9K-X9736PQ can be used in standalone mode when the Nexus 9000 modular switches use NX-OS.
a. True
b. False
3. How many strands do the 40G BiDi optics use?
a. 8
b. 2
c. 12
d. 16
4. What is the current bandwidth per slot for the Nexus 7700?
a. 550 Gbps
b. 220 Gbps
c. 1.3 Tbps
d. 2.6 Tbps
5. True or false? By upgrading SUP1 to the latest software and using the correct license, you can use FCoE with F2 cards.
a. True
b. False
6. How many VDCs can a SUP2 have?
a. 2+1
b. 4+1
c. 8+1
d. 16+1
7. True or false? The F3 modules can be interchanged between the Nexus 7000 and Nexus 7700.
a. True
b. False
8. How many unified ports does the Nexus 6004 currently support?
a. 20
b. 24
c. 48
d. 64

9. True or false? The 40-Gbps ports on the Nexus 6000 can be used as 10 Gbps.
a. True
b. False
10. How many unified ports does the Nexus 5672 have?
a. 12
b. 16
c. 8
d. 24
11. Which of the following statements are correct? (Choose two.)
a. The Nexus 5548P has 32 fixed ports, which are Ethernet only, and the Nexus 5548UP has 32 fixed unified ports.
b. The Nexus 5548P has 32 fixed ports, which are Ethernet ports only, and the Nexus 5548UP has 32 fixed FCoE-only ports.
c. The Nexus 5548P has 32 fixed ports and can have 16 unified ports, whereas the Nexus 5548UP has all ports unified.
d. The Nexus 5548P has 48 fixed ports, and the Nexus 5548UP has 32 fixed ports and 16 unified ports.
12. Which models from the following list of switches support FCoE? (Choose two.)
a. Nexus 2232PP
b. Nexus 2248PQ
c. Nexus 2232TM
d. Nexus 2248TP-E
13. Which vendors support the B22 FEX?
a. HP
b. IBM
c. Dell
d. Fujitsu
e. All the above

Foundation Topics

Describe the Cisco Nexus Product Family
The Cisco Nexus product family is a key component of the Cisco unified data center architecture, which is the Unified Fabric. The objective of the Unified Fabric is to build highly available, highly secure network fabrics. Using the Cisco Nexus products, you can build end-to-end data center designs based on three-tier architecture or based on Spine-Leaf architecture. The family offers high-density 10 G, 40 G, and 100 G ports as well. Modern data center designs need the following properties:

Effective use of available bandwidth in designs where multiple links exist between the source and destination and one path is active while the other is blocked by spanning tree, or where the design limits us to Active/Standby NIC teaming. This is addressed today using Layer 2 multipathing technologies such as FabricPath and virtual port channels (vPC).

Hosts, especially virtual hosts, must move without the need to change the topology or require an address change, with consistent network and security policies.

Improved reliability during software updates, configuration changes, or adding components to the data center environment; that should happen with minimum disruption.

Computing resources must be optimized, which happens by building a computing fabric and dealing with CPU and memory as resources that are utilized when needed. Doing capacity planning for all the workloads and identifying candidates to be virtualized helps reduce the number of compute nodes in the data center. Using the concept of a service profile and booting from SAN in the Cisco Unified Computing System will reduce the time to instantiate new servers.

Power and cooling are key problems in the data center today. Ways to address this problem include using Unified Fabric (converged SAN and LAN), using Cisco virtual interface cards, and using technologies such as VM-FEX and Adapter-FEX. Rather than using, for example, 8 × 10 G links, you can use 2 × 40 G links and so on. Reducing cabling creates efficient airflow, which in turn reduces cooling requirements.

The concept of hybrid clouds can benefit your organization. Hybrid clouds extend your existing data center to public clouds as needed; that will make it easy to build and tear down test and development environments. Cisco is helping customers utilize this concept using Intercloud Fabric.

In this chapter you will understand the Cisco Nexus product family. Figure 2-1 shows the different product types available at the time this chapter was written. These products are explained and discussed in the following sections.

Figure 2-1 Cisco Nexus Product Family

Cisco Nexus 9000 Family
There are two types of switches in the Nexus 9000 Series: the Nexus 9500 modular switches and the Nexus 9300 fixed configuration switches. They can run in two modes. When they run in NX-OS mode and use the enhanced NX-OS software, they function as a classical Nexus switch. In this case, the design follows the standard three-tier architecture. When they run in ACI mode and in combination with the Cisco Application Policy Infrastructure Controller (APIC), they provide an application-centric infrastructure. In this case, the design follows the Spine-Leaf architecture shown in Figure 2-2.

Figure 2-2 Nexus 9000 Spine-Leaf Architecture

Cisco Nexus 9500 Family
The Nexus 9500 family consists of three types of modular chassis, as shown in Figure 2-3: the 4-slot Nexus 9504, the 8-slot Nexus 9508, and the 16-slot Nexus 9516.

Figure 2-3 Nexus 9500 Chassis Options

The Cisco Nexus 9500 Series switches have a modular architecture that consists of the following:
Switch chassis
Supervisor engine
System controllers
Fabric modules
Line cards
Power supplies
Fan trays
Optics

Among these parts, supervisors, system controllers, line cards, and power supplies are common components that can be shared among the entire Nexus 9500 product family. Table 2-2 shows the comparison between the different models of the Nexus 9500 switches.

Table 2-2 Nexus 9500 Modular Platform Comparison

Chassis
The Nexus 9500 chassis doesn't have a midplane, as shown in Figure 2-4. Midplanes tend to block airflow, which results in reduced cooling efficiency. Because there is no midplane, fabric cards and line cards align together with a precise alignment mechanism.

Figure 2-4 Nexus 9500 Chassis

Supervisor Engine
The Nexus 9500 modular switch supports two redundant half-width supervisor engines, as shown in Figure 2-5. The supervisor engine is responsible for the control plane function, and the supervisor modules manage all switch operations. Each supervisor module consists of a Romley 1.8 GHz 4-core CPU and 16 GB RAM, upgradable to 48 GB RAM, and 64 GB SSD storage. There are multiple ports for management, including two USB ports, an RS-232 serial port (RJ-45), and a 10/100/1000-Mbps network port (RJ-45). The supervisor has an external clock source, that is, pulse per second (PPS).

Figure 2-5 Nexus 9500 Supervisor Engine

System Controller
There is a pair of redundant system controllers at the back of the Nexus 9500 chassis, as shown in Figure 2-6. They offload chassis management functions from the supervisor modules. The system controller is responsible for managing the power supplies and fan trays. The system controllers host two main control and management paths between the supervisor engines, line cards, and fabric modules: the Ethernet Out-of-Band Channel (EOBC) and the Ethernet Protocol Channel (EPC). The EOBC provides the intrasystem management communication across modules, and the EPC channel handles the intrasystem data plane protocol communication.

Figure 2-6 Nexus 9500 System Controllers

Fabric Modules
The platform supports up to six fabric modules, and all fabric modules are active. The packet lookup and forwarding functions involve both the line cards and the fabric modules; both contain multiple network forwarding engines (NFE). The NFE is a Broadcom Trident II ASIC (T2), and the T2 uses 24 40GE ports to guarantee line rate. As shown in Figure 2-7, each fabric module consists of multiple NFEs. The Nexus 9504 has one NFE per fabric module, the Nexus 9508 has two, and the Nexus 9516 has four.

Note The fabric modules are behind the fan trays. There are line cards that can be used in application-centric infrastructure mode (ACI) only. the ACI-ready leaf line cards contain an additional ASIC called Application Leaf Engine (ALE). All line cards have multiple NFEs for packet lookup and forwarding. There are cards that can be used in standalone mode when used with enhanced NX-OS. Line Cards It is important to understand that there are multiple types of Nexus 9500 line cards. you need a minimum of three fabric modules to achieve line-rate speeds. in a classical design. the ASE performs ACI spine functions when the Nexus 9500 is used as a spine in the ACI mode. ALE performs the ACI leaf function when the Nexus 9500 is used as a leaf node when deployed in the ACI mode. There are also line cards that can be used in both modes: standalone mode using NX-OS and ACI mode. When you use the 36-port 40GE line cards. Figure 2-8 . In addition. so to install them you must remove the fan trays. you will need six fabric modules to achieve line-rate speeds.Figure 2-7 Nexus 9500 Fabric Module When you use the 1/10G + 4 40GE line cards. The ACI-only line cards contain an additional ASIC called application spine engine (ASE).

Figure 2-8 shows the high-level positioning of the different cards available for the Nexus 9500 Series network switches.

Figure 2-8 Nexus 9500 Line Cards Positioning

Nexus 9500 line cards are also equipped with dual-core CPUs. This CPU is used to speed up some control functions, such as programming the hardware table resources, collecting and sending line card counters and statistics, and offloading BFD protocol handling from the supervisors. Table 2-3 shows the different types of cards available for the Nexus 9500 Series switches and their specifications.

Table 2-3 Nexus 9500 Modular Platform Line Card Comparison

Power Supplies
The Nexus 9500 platform supports up to 10 power supplies; they are accessible from the front and are hot swappable. Two 3000W AC power supplies can operate a fully loaded chassis; they support N+1 and N+N (grid) redundancy. The 3000W AC power supply shown in Figure 2-9 is 80 Plus platinum rated and provides more than 90% efficiency.

Figure 2-9 Nexus 9500 AC Power Supply

Note
The additional four power supply slots are not needed with the existing line cards shown in Table 2-3; however, they offer headroom for future port densities, bandwidth, and optics.

Fan Trays
The Nexus 9500 consists of three fan trays, as shown in Figure 2-10; each tray consists of three fans. Dynamic speed is driven by temperature sensors, and airflow is front-to-back with N+1 redundancy per tray. Fan trays are installed after the fabric module installation.

Figure 2-10 Nexus 9500 Fan Tray

Note
To service the fabric modules, the fan tray must be removed first. If one of the fan trays is removed, the other two fan trays will speed up to compensate for the loss of cooling.

Cisco QSFP Bi-Di Technology for 40 Gbps Migration
As data center designs evolve from 1 G to 10 G at the access layer, access-to-aggregation designs and Spine-Leaf designs at the spine layer will move to 40 G. The 40 G adoption is slow today because of multiple barriers: the first is the cost barrier of the 40 G port itself, and second, when you migrate from 10 G to 40 G, you must replace the cabling. 10 G operates on what is referred to as two strands of fiber, whereas 40 G operates on eight strands of fiber. Bi-Di optics are standards based, and they enable customers to take the current 10 G cabling plant and use it for 40 G connectivity without replacing the cabling. Figure 2-11 shows the difference between the QSFP SR and the QSFP Bi-Di.

Figure 2-11 Cisco Bi-Di Optics

Cisco Nexus 9300 Family
The previous section discussed the Nexus 9500 Series modular switches. This section discusses details of the Cisco Nexus 9300 fixed configuration switches. There are currently four chassis-based models in the Nexus 9300 platform. The Nexus 9300 is designed for top-of-rack (ToR) and mid-of-row (MoR) deployments. Table 2-4 summarizes the different specifications of each chassis.

Table 2-4 Nexus 9300 Fixed-Platform Comparison

The 40-Gbps ports for the Cisco Nexus 9396PX, 9396TX, and 93128TX are provided on an uplink module that can be serviced and replaced by the user. The uplink module is the same for all switches; if used with the Cisco Nexus 93128TX, 8 out of the 12 × 40 Gbps QSFP+ ports will be available. As shown in Table 2-4, the Nexus 9396PX, 9396TX, and 93128TX can operate in NX-OS mode and in ACI mode (acting as a leaf node). The Nexus 9336PQ can operate in ACI mode only and act as a spine node. Figure 2-12 shows the different models available today from the Nexus 9300 switches.

Figure 2-12 Cisco Nexus 9300 Switches

Cisco Nexus 7000 and Nexus 7700 Product Family
The Nexus 7000 Series switches form the core data center networking fabric. The modular design of the Nexus 7000 and Nexus 7700 enables them to offer different types of network interfaces (1 G, 10 G, 40 G, and 100 G) in a high-density, scalable way, with a switching capacity beyond 15 Tbps for the Nexus 7000 Series and 83 Tbps for the Nexus 7700 Series. The Nexus 7000 and the Nexus 7700 switches offer a comprehensive set of features for the data center network. The Nexus 7000 hardware architecture offers redundant supervisor engines, redundant power supplies, and redundant fabric cards for high availability. It is very reliable, supporting in-service software upgrades (ISSU) with zero packet loss. It is easy to manage using the command line or through Data Center Network Manager, and you can utilize NX-OS APIs to manage it. There are multiple chassis options in the Nexus 7000 and Nexus 7700 product family, as shown in Table 2-5.

Table 2-5 Nexus 7000 and Nexus 7700 Modular Platform Comparison

The Nexus 7000 and Nexus 7700 product family is modular in design with great focus on the redundancy of all the critical components; this has been applied across the physical, power, environmental, and system software aspects of the chassis.

Supervisor module redundancy: The chassis can have up to two supervisor modules operating in active and standby modes. State and configuration are in sync between the two supervisors, which provides seamless and stateful switchover in the event of a supervisor module failure.

Note
There are dedicated slots for the supervisor engines in all Nexus chassis; the supervisor modules are not interchangeable between the Nexus 7000 and Nexus 7700 chassis.

Switch fabric redundancy: The fabric modules support load sharing. You can have multiple fabric modules; the Nexus 7000 supports up to five fabric modules, and the Nexus 7700 supports up to six fabric modules. With the current shipping fabric cards and current I/O modules, the switches support N+1 redundancy.

Note
The fabric modules between the Nexus 7000 and Nexus 7700 are not interchangeable.

Modular software upgrades: The NX-OS software is designed with a modular architecture, which helps to address specific issues and minimize the overall system impact. Each running service is an individual memory-protected process, including multiple instances of a particular service, which provides effective fault isolation between services and makes each service individually monitored and managed. Most of the services allow stateful restart, enabling a service experiencing a failure to be restarted and resume operation without affecting other services.

Cisco NX-OS In-Service Software Upgrade (ISSU): With the NX-OS modular architecture you can support ISSU, which enables you to do a complete system upgrade without disrupting the data plane and achieve zero packet loss.

Cooling subsystem: The system has redundant fan trays. There are multiple fans on the fan trays, and any failure of one of the fans will not result in loss of service.

Power subsystem availability features: The system supports the following power redundancy options (a configuration sketch follows at the end of this section):

Combined mode, where the total power available is the sum of the outputs of all the power supplies installed. (This is not redundant.)
PSU redundancy, otherwise commonly called N+1 redundancy, where the total power available is the sum of all power supplies minus 1. PSU redundancy is the default.
Grid redundancy, where the total power available is the sum of the power from only one input on each PSU. Each PSU has two supply inputs, allowing it to be connected to separate isolated A/C supplies. In the event of an A/C supply failure, 50% of power is secure.
Full redundancy, which is the combination of PSU redundancy and grid redundancy. In most cases this will be the same as grid mode, but it assures customers that they are protected for either a PSU or a grid failure, though not both at the same time.

Cable management: The integrated cable management system is designed to support the cabling requirements of a fully configured system to either or both sides of the switch, allowing maximum flexibility. The cable management cover and optional front module doors provide protection from accidental interference with both the cabling and the modules that are installed in the system. The transparent front door allows observation of cabling and module indicator and status lights.

System-level LEDs: A series of LEDs at the top of the chassis provides a clear summary of the status of the major system components. These LEDs report the power supply, supervisor, fabric, fan, and I/O module status and alert operators to the need to conduct further investigation.

Nexus 7000 and Nexus 7700 have multiple models with different specifications. Figure 2-13 shows these switching models, and Table 2-5 shows their specifications.
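As a sketch of the power redundancy configuration, the mode is selected in global configuration. The keywords shown here follow common NX-OS conventions for the four modes described above, but verify the exact options for your platform and software release:

! Select one mode; ps-redundant (N+1) is the default
power redundancy-mode combined
power redundancy-mode ps-redundant
power redundancy-mode insrc-redundant
power redundancy-mode redundant
! combined = no redundancy, ps-redundant = N+1,
! insrc-redundant = grid, redundant = full (PSU + grid)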

Figure 2-13 Cisco Nexus 7000 and Nexus 7700 Product Family

Note
When you go through Table 2-5, you might wonder how the switching capacity was calculated. For example, the Cisco Nexus 7010 bandwidth is calculated like so: [(550 Gbps/slot) × (9 payload slots) = 4950 Gbps; (4950 Gbps) × (2 for full duplex operation) = 9900 Gbps = 9.9 Tbps system bandwidth]. In some bandwidth calculations for the chassis, the supervisor slots are taken into consideration because each supervisor slot has a single channel of connectivity to each fabric card, so that makes a total of 5 crossbar channels. If only the line card slots are counted, the Nexus 7010 has 8 line card slots, so the calculation is [(550 Gbps/slot) × (8 payload slots) = 4400 Gbps; (4400 Gbps) × (2 for full duplex operation) = 8800 Gbps = 8.8 Tbps system bandwidth]. Using this type of calculation, a 9-slot chassis will have 8.8 Tbps, and the 18-slot chassis will have 18.6 Tbps. For the Nexus 7700 Series, we'll use the Nexus 7718 as an example: if the bandwidth/slot is 2.6 Tbps, the calculation is [(2.6 Tbps/slot) × (16 payload slots) = 41.6 Tbps; (41.6 Tbps) × (2 for full duplex operation) = 83.2 Tbps system bandwidth]. Although the bandwidth/slot shown in the table gives us 1.3 Tbps, the Nexus 7700 is capable of more than double that capacity, and that can be achieved by upgrading the fabric cards for the chassis.

Cisco Nexus 7004 Series Switch Chassis
The Cisco Nexus 7004 switch chassis shown in Figure 2-14 has two supervisor module slots and two I/O module slots; the complete specification for the Nexus 7004 is shown in Table 2-5. It has one fan tray and four 3KW power supplies. It is worth mentioning that the Nexus 7004 doesn't have any fabric modules; the local I/O module fabrics are connected back-to-back to form a two-stage crossbar that interconnects the I/O modules and the supervisor engines.

Figure 2-14 Cisco Nexus 7004 Switch

Cisco Nexus 7009 Series Switch Chassis
The Cisco Nexus 7009 switch chassis shown in Figure 2-15 has two supervisor module slots and seven I/O module slots; the complete specification for the Nexus 7009 is shown in Table 2-5.

Figure 2-15 Cisco Nexus 7009 Switch

Although the Nexus 7009 has side-to-side airflow, there is a solution for hot aisle–cold aisle design with the 9-slot chassis. The Nexus 7009 switch has a single fan tray; however, there is redundancy within the cooling system due to the number of fans. The fan redundancy has two parts: individual fans in the fan tray and fan tray controllers. The fans in the fan tray are individually wired to isolate any failure and are fully redundant, so that other fans in the tray can take over when one or more fans fail, reducing the probability of a total fan tray failure. If an individual fan fails, other fans automatically run at higher speeds, and the system will continue to function, giving you time to get a spare and replace the fan tray. The fan controllers are fully redundant.

Cisco Nexus 7010 Series Switch Chassis
The Cisco Nexus 7010 switch chassis shown in Figure 2-16 has two supervisor engine slots and eight I/O module slots; the complete specification is shown in Table 2-5.

Figure 2-16 Cisco Nexus 7010 Switch

There are two system fan trays in the 7010, one for the upper half and one for the lower half of the system. Both fan trays must be installed at all times (apart from maintenance), and both are required for normal operation. Each fan tray contains 12 fans arranged in three rows of four, and each row cools three module slots (I/O and supervisor). The fans on the trays are N+1 redundant; the failure of a single fan will result in the other fans increasing speed to compensate, and the fan tray will need replacing to restore N+1 redundancy. If either of the fan trays fails, the system will keep running without overheating, as long as the operating environment is within specifications, until the fan tray is replaced; fan tray replacement restores N+1 redundancy. There are also two fabric fan modules in the 7010. If either fabric fan fails, the remaining fabric fan will continue to cool the fabric modules until the fan is replaced. The system should not be operated with either the system fan tray or the fabric fan components removed, apart from the hot swap period of up to 3 minutes.

Cisco Nexus 7018 Series Switch Chassis
The Cisco Nexus 7018 switch chassis shown in Figure 2-17 has two supervisor engine slots and 16 I/O module slots; the complete specification is shown in Table 2-5. For the 7018 system there are two system fan trays. There are multiple fans on the 7018 system fan trays, and any single fan failure will not result in degradation in service: the box does not overheat, and the remaining fans continue to cool the system. The fan tray should be replaced to restore the N+1 fan resilience.

Integrated into the system fan tray are the fabric fans. The fabric fans are at the rear of the system fan tray. The two fans are in series so that the air passes through both to leave the switch and cool the fabric modules. Failure of a single fabric fan will not result in a failure; the remaining fan will cool the fabric modules.

Figure 2-17 Cisco Nexus 7018 Switch

Note
Although both system fan trays are identical, the fabric fans operate only when installed in the upper fan tray position. Fan trays are fully interchangeable, but when inserted in the lower position the fabric fans are inoperative.

Cisco Nexus 7706 Series Switch Chassis
The Cisco Nexus 7706 switch chassis shown in Figure 2-18 has two supervisor module slots and four I/O module slots; the complete specification is shown in Table 2-5. There are 192 10-G ports, 96 40-G ports, and 48 100-G ports, with true front-to-back airflow, redundant fans, and redundant fabric cards.

Figure 2-18 Cisco Nexus 7706 Switch

Cisco Nexus 7710 Series Switch Chassis
The Cisco Nexus 7710 switch chassis shown in Figure 2-19 has two supervisor engine slots and eight I/O module slots; the complete specification is shown in Table 2-5. There are 384 10-G ports, 192 40-G ports, and 96 100-G ports, with true front-to-back airflow, redundant fans, and redundant fabric cards.

Figure 2-19 Cisco Nexus 7710 Switch

Cisco Nexus 7718 Series Switch Chassis
The Cisco Nexus 7718 switch chassis shown in Figure 2-20 has two supervisor engine slots and 16 I/O module slots; the complete specification is shown in Table 2-5. There are 768 10-G ports, 384 40-G ports, and 192 100-G ports, with true front-to-back airflow, redundant fans, and redundant fabric cards.

Figure 2-20 Cisco Nexus 7718 Switch

Cisco Nexus 7000 and Nexus 7700 Supervisor Module
Nexus 7000 and Nexus 7700 Series switches have two slots that are available for supervisor modules. Redundancy is achieved by having both supervisor slots populated. Table 2-6 describes the different options and specifications of the supervisor modules.

Table 2-6 Nexus 7000 and Nexus 7700 Supervisor Modules Comparison

Cisco Nexus 7000 Series Supervisor 1 Module
The Cisco Nexus 7000 supervisor module 1 shown in Figure 2-21 is the first-generation supervisor module for the Nexus 7000. As shown in Table 2-6, the operating system runs on a dedicated dual-core Xeon processor, and dual supervisor engines run in active-standby mode with stateful switchover (SSO) and configuration synchronization between both supervisors. The USB ports allow access to USB flash memory devices for software image loading and recovery. An embedded packet analyzer reduces the need for a dedicated packet analyzer and provides faster resolution for control plane problems. There are dual redundant Ethernet out-of-band channels (EOBC) to each I/O and fabric module to provide resiliency for the communication between the control and line card processors.

Figure 2-21 Cisco Nexus 7000 Supervisor Module 1

The Connectivity Management Processor (CMP) provides an independent remote system management and monitoring capability. It removes the need for separate terminal server devices for OOB management.

Administrators must authenticate to get access to the system through CMP. It has the capability to initiate a complete system restart and shutdown, it allows access to supervisor logs and full console control on the supervisor engine, and it offers complete visibility during the entire boot process. The Cisco Nexus 7000 supervisor module 1 incorporates highly advanced analysis and debugging capabilities. The Power-On Self-Test (POST) and Cisco Generic Online Diagnostics (GOLD) provide proactive health monitoring both at startup and during system operation. This is useful in detecting hardware faults. If a fault is detected, corrective action can be taken to mitigate the fault and reduce the risk of a network outage.

Cisco Nexus 7000 Series Supervisor 2 Module
The Cisco Nexus 7000 supervisor module 2 shown in Figure 2-22 is the next-generation supervisor module; it has a quad-core CPU and 12 G of memory, compared to supervisor module 1, which has a single-core CPU and 8 G of memory. Supervisor module 2E is the enhanced version of supervisor module 2, with two quad-core CPUs and 32 G of memory.

Figure 2-22 Cisco Nexus 7000 Supervisor 2 Module

Supervisor module 2 and supervisor module 2E have more powerful CPUs, larger memory, and next-generation ASICs that together result in improved performance, such as an enhanced user experience, faster boot and switchover times, and a higher control plane scale, such as higher VDC and FEX counts. As shown in Table 2-6, Sup2 scale is the same as Sup1; it will support 4+1 VDCs. Sup2E supports 8+1 VDCs. Both supervisor module 2 and supervisor module 2E support FCoE when choosing the proper line card. They also support CPU shares, which will enable you to carve out CPU for higher-priority VDCs.
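As a brief illustrative sketch of the VDC feature mentioned above, a nondefault VDC is created from the default VDC and physical interfaces are allocated to it (the VDC name and interface range are assumptions, and interface allocation is often constrained to whole port groups, depending on the line card):

! From the default VDC
vdc Prod
  allocate interface ethernet 2/1-8
exit
! Move the session into the new VDC
switchto vdc Prod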

Note
You cannot mix Sup1 and Sup2 in the same chassis. Note that migrating from Sup1 to Sup2 will be a disruptive migration requiring removal of supervisor 1. Sup2 and Sup2E can be mixed for migration only; this will be a nondisruptive migration.

Cisco Nexus 7000 and Nexus 7700 Fabric Modules
The Nexus 7000 and Nexus 7700 fabric modules provide interconnection between line cards and provide fabric channels to the supervisor modules. The Nexus 7000 has five fabric modules, and the Nexus 7700 has six. Adding fabric modules increases the available bandwidth per I/O slot because all fabric modules are connected to all slots. Figure 2-23 shows the different fabric modules for the Nexus 7000 and Nexus 7700 products.

Figure 2-23 Cisco Nexus 7000 Fabric Module

In the case of the Nexus 7000, when using Fabric Module 1, which is 46 Gbps, you can deliver a maximum of 230 Gbps per slot using five fabric modules. When using Fabric Module 2, which is 110 Gbps, you can deliver a maximum of 550 Gbps per slot. In the Nexus 7700, by using Fabric Module 2, which is 220 Gbps per slot, you can deliver a maximum of 1.32 Tbps per slot. All fabric modules support load sharing, and the architecture supports lossless fabric failover. In case of a failure or removal of one of the fabric modules, the remaining fabric modules will load-balance the remaining bandwidth to all the remaining line cards. The Nexus 7000 implements a three-stage crossbar switch. Fabric stage 1 and fabric stage 3 are implemented on the line card module, and stage 2 is implemented on the fabric module. Nexus 7000 supports virtual output queuing (VOQ) and credit-based arbitration to the crossbar to increase performance. VOQ and credit-based arbitration allow fair sharing of resources when a speed mismatch exists, to avoid head-of-line (HOL) blocking. Figure 2-24 shows how these stages are connected to each other.

Figure 2-24 Cisco Nexus 7700 Crossbar Fabric

In the Nexus 7700, there are four connections from each fabric module to the line cards, and each one of these connections is 55 Gbps. When populating the chassis with six fabric modules, the total number of connections from the fabric cards to each line card is 24. This provides an aggregate bandwidth of 1.32 Tbps per slot. There are two connections from each fabric module to the supervisor module; these connections are also 55 Gbps. When all the fabric modules are installed, there are 12 connections from the switch fabric to the supervisor module, providing an aggregate bandwidth of 275 Gbps.

Note
Cisco Nexus 7000 fabric 1 modules provide two 23 Gbps traces to each I/O module slot, providing 230 Gbps of switching capacity per I/O slot for a fully loaded chassis. Each supervisor module has a single 23 Gbps trace to each fabric module.

Cisco Nexus 7000 and Nexus 7700 Licensing
Different types of licenses are required for the Nexus 7000 and the Nexus 7700. Table 2-7 describes each license and the features it enables.


Note To manage the Nexus 7000 two types of licenses are needed: the DCNM LAN and DCNM SAN. There is also an evaluation license. To get the license file you must obtain the serial number for your device by entering the show license host-id command. You then have 120 days to install the appropriate license keys. The host ID is also referred to as the device serial number. which is the amount of time the features in a license package can continue functioning without a license.Table 2-7 Nexus 7000 and Nexus 7700 Software Licensing Features It is worth mentioning that Nexus switches have a grace period. or disable the grace period feature. each of which is a separate license. Enabling a licensed feature that does not have a license key starts a counter on the grace period. disable the use of that feature. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number). the Cisco NX-OS software automatically disables the feature and removes the configuration from the device. as shown in Example 2-1. which is a temporary license. If at the end of the 120-day grace period the device does not have a valid license key for the feature. Example 2-1 NX-OS Command to Obtain the Host ID .

switch# show license host-id
License hostid: VDH=FOX064317SQ

Tip
Use the entire ID that appears after the equal sign (=). In this example, the host ID is FOX064317SQ.

Perform the installation by using the install license command on the active supervisor module from the device console, as shown in Example 2-2. After executing the copy licenses command from the default VDC, save your license file to one of four locations: the bootflash: directory, the slot0: device, the usb1: device, or the usb2: device.

Example 2-2 Command Used to Install the License File

switch# install license bootflash:license_file.lic
Installing license ..done

You can check what licenses are already installed by issuing the command shown in Example 2-3.

Example 2-3 Command Used to Obtain Installed Licenses

switch# show license usage
Feature                       Ins  Lic Count  Status  Expiry Date  Comments
--------------------------------------------------------------------------------
LAN_ENTERPRISE_SERVICES_PKG   Yes               In use  Never
--------------------------------------------------------------------------------

Cisco Nexus 7000 and Nexus 7700 Line Cards
Nexus 7000 and Nexus 7700 support various types of I/O modules. There are two types of I/O modules: the M-I/O modules and the F-I/O modules. Each has different performance metrics and features. Table 2-8 shows the comparison between M-Series modules.

Table 2-8 Nexus 7000 and Nexus 7700 M-Series Modules Comparison

Note
From the table, you see that the M1 32-port I/O module is the only card that supports LISP.

Table 2-9 shows the comparison between F-Series modules.

Table 2-9 Nexus 7000 and Nexus 7700 F-Series Modules Comparison

Cisco Nexus 7000 and Nexus 7700 Series Power Supply Options
The Nexus 7000 and Nexus 7700 use power supplies with +90% power supply efficiency, reducing power wasted as heat and reducing associated data center cooling requirements. The switches offer different types of redundancy modes. They offer visibility into the actual power consumption of the total system and modules, enabling accurate power consumption monitoring for the right sizing of power supplies, UPSs, and environmental cooling. The power supplies offer the following features:

Real-time power draw: Shows real-time power consumption.
Variable fan speed: Automatically adjusts to changing thermal characteristics; variable-speed fans adjust dynamically to lower power consumption and optimize system cooling for true load, and lower fan speeds use lower power.
Fully hot swappable: Continuous system operations; no downtime in replacing power supplies.
Power redundancy: Multiple system-level options for maximum data center availability.
Temperature measurement: Prevents damage due to overheating (every ASIC on the board has a temperature sensor).
Internal fault monitoring: Detects component defects and shuts down the unit.

Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW AC Power Supply Module
The 3.0-KW AC power supply shown in Figure 2-25 is designed only for the Nexus 7004 chassis and is used across all the Nexus 7700 Series chassis. It is a single 20 Ampere (A) AC input power supply. When connecting to high line nominal voltage (220 VAC), it will produce a power output of 3000W; connecting to low line nominal voltage (110 VAC) will produce a power output of 1400W.

Figure 2-25 Cisco Nexus 7000 3.0-KW AC Power Supply

Note
Although the Nexus 7700 chassis and the Nexus 7004 use a common power supply architecture, different PIDs are used on each platform. Therefore, if you interchange the power supplies, the system will log an error complaining about the wrong power supply in the system; although technically this might work, it is not officially supported by Cisco.

Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW DC Power Supply Module
The 3.0-KW DC power supply shown in Figure 2-26 is designed only for the Nexus 7004 chassis and is used across all the Nexus 7700 Series chassis. The Nexus 3.0-KW DC power supply has two isolated input stages, each delivering up to 1500W of output power. Each stage uses a -48V DC connection. The unit will deliver 1551W when only one input is active and 3051W when two inputs are active.

Figure 2-26 Cisco Nexus 7000 3.0-KW DC Power Supply

Cisco Nexus 7000 Series 6.0-KW and 7.5-KW AC Power Supply Module
The 6.0-KW and 7.5-KW power supplies shown in Figure 2-27 are common across the Nexus 7009, 7010, and 7018, with battery backup capability. They allow mixed-mode AC and DC operation, enabling migration without disruption and providing support for dual environments with unreliable AC power.

Figure 2-27 Cisco Nexus 7000 6.0-KW and 7.5-KW Power Supply

Table 2-10 shows the specifications of both power supplies with different numbers of inputs and input types.

Table 2-10 Nexus 7000 and Nexus 7700 6.0-KW and 7.5-KW Power Supply Specifications

Cisco Nexus 7000 Series 6.0-KW DC Power Supply Module
The 6-KW DC power supply shown in Figure 2-28 is common to the 7009, 7010, and 7018 systems. The 6-KW has four isolated input stages, each delivering up to 1500W of power (6000W total on full load) with peak efficiency of 91% (high for a DC power supply). It supports the same operational characteristics as the AC units: redundancy modes (N+1 and N+N). The power supply can be used in combination with AC units or as an all-DC setup.

The 6-KW DC power supply also offers the following:
Real-time power: actual power levels
Single input mode (3000W)
Online insertion and removal
Integrated lock and On/Off switch (for easy removal)

Figure 2-28 Cisco Nexus 7000 6.0-KW DC Power Supply

Multiple power redundancy modes can be configured by the user:

Combined mode, where the total power available is the sum of the outputs of all the power supplies installed. (This is not redundant.)
PSU redundancy, otherwise commonly called N+1 redundancy, where the total power available is the sum of all power supplies minus 1.
Grid redundancy, where the total power available is the sum of the power from only one input on each PSU. Each PSU has two supply inputs, allowing them to be connected to separate isolated A/C supplies. In the event of an A/C supply failure, 50% of power is secure.
Full redundancy, which is the combination of PSU redundancy and grid redundancy. You can lose one power supply or one grid; in most cases this will be the same as grid redundancy.

Full redundancy provides the highest level of redundancy, so it is recommended. However, it is always better to choose the mode of power supply operation based on the requirements and needs. An example of each mode is shown in Figure 2-29.

Figure 2-29 Nexus 6.0-KW Power Redundancy Modes

To help with planning for the power requirements, Cisco has made a power calculator that can be used as a starting point. The power calculator can be located at http://www.cisco.com/go/powercalculator. It is worth mentioning that the power calculator cannot be taken as a final power recommendation.

Cisco Nexus 6000 Product Family
The Cisco Nexus 6000 product family is a family of high-performance, high-density, low-latency 1G/10G/40G/FCoE Ethernet switches. Multiple models are available: Nexus 6001P, Nexus 6001T, Nexus 6004EF, and Nexus 6004X. When using the unified module on the Nexus 6004, you can get native FC as well. Table 2-11 describes the differences between the Nexus 6000 models.

Table 2-11 Nexus 6000 Product Specification

Note
At the time this chapter was written, the Nexus 6004X was renamed Nexus 5696Q.

Cisco Nexus 6001P and Nexus 6001T Switches and Features
The Nexus 6001P is a fixed configuration 1RU Ethernet switch. It provides 48 1G/10G SFP+ ports and four 40-G Ethernet QSFP+ ports. Each 40 GE port can be split into 4 × 10 GE ports; the split can be done by using a QSFP+ break-out cable (Twinax or Fiber). It provides integrated Layer 2 and Layer 3 at wire speed. Port-to-port latency is around 1 microsecond. It offers two choices of airflow: front-to-back (port side exhaust) and back-to-front (port side intake).

The Nexus 6001T is a fixed configuration 1RU Ethernet switch. It provides 48 1G/10G BASE-T ports and four 40-G Ethernet QSFP+ ports. Each 40 GE port can be split into 4 × 10 GE ports; this can be done by using a QSFP+ break-out cable (Twinax or Fiber). It provides integrated Layer 2 and Layer 3 at wire speed; port-to-port latency is around 3.3 microseconds. It offers two choices of airflow: front-to-back (port side exhaust) and back-to-front (port side intake). The Nexus 6001T supports FCoE on the RJ-45 ports for up to 30m when CAT 6, 6a, or 7 cables are used. Both switches are shown in Figure 2-30.

Figure 2-30 Cisco Nexus 6001 Switches

Cisco Nexus 6004 Switches Features
The Nexus 6004 is shown in Figure 2-31. Currently, there are two models, the Nexus 6004EF and the Nexus 6004X; the main difference is that the Nexus 6004X supports Virtual Extensible LAN (VXLAN). The Nexus 6004 is a 4RU 10 G/40 G Ethernet switch. It offers eight line card expansion module (LEM) slots. There are two choices of LEMs: a 12-port 10 G/40 G QSFP module and a 20-port unified ports module offering either 1 G/10 G SFP+ or 2/4/8 G FC. The Nexus 6004 offers up to 96 40 G QSFP ports and up to 384 10 G SFP+ ports, with integrated Layer 2 and Layer 3 at wire rate. Port-to-port latency is approximately 1 microsecond for any packet size.

Figure 2-31 Nexus 6004 Switch

Note
The Nexus 6004 can support 1 G ports using a QSA converter (QSFP to SFP adapter) on a QSFP interface. This converts each QSFP+ port to one SFP+ port. The SFP+ can support both 10 G and 1 GE transceivers.

Cisco Nexus 6000 Switches Licensing Options
Different types of licenses are required for the Nexus 6000. Table 2-12 describes each license and the features it enables. It is worth mentioning that Nexus switches have a grace period, which is the amount of time the features in a license package can continue functioning without a license. Enabling a licensed feature that does not have a license key starts a counter on the grace period. You then have 120 days to install the appropriate license keys, disable the use of that feature, or disable the grace period feature. If at the end of the 120-day grace period the device does not have a valid license key for the feature, the Cisco NX-OS software automatically disables the feature and removes the configuration from the device.


Table 2-12 Nexus 6000 Licensing Options

There is also an evaluation license, which is a temporary license. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number).

Note
To manage the Nexus 6000, two types of licenses are needed: the DCNM LAN and the DCNM SAN. Each of them is a separate license.

Cisco Nexus 5500 and Nexus 5600 Product Family
The Cisco Nexus 5000 product family is a Layer 2 and Layer 3 1G/10G Ethernet family with unified ports; it includes the Cisco Nexus 5500 and Cisco Nexus 5600 platforms. Table 2-13 shows the comparison between the different models.


Table 2-13 Nexus 5500 and Nexus 5600 Product Specification

Note
The Nexus 5010 and 5020 switches are considered end-of-sale, so they are not covered in this book. Refer to www.cisco.com for more information.

Cisco Nexus 5548P and 5548UP Switches Features
The Nexus 5548P and the Nexus 5548UP are 1/10-Gbps switches with one expansion module. The Nexus 5548P has all 32 ports 1/10-Gbps Ethernet only. The Nexus 5548UP has all 32 ports as unified ports, meaning that the ports can run 1/10-Gbps Ethernet or 8/4/2/1-Gbps native FC, or a mix between both. Figure 2-32 shows the layout for them.

Figure 2-32 Nexus 5548P and Nexus 5548UP Switches

Cisco Nexus 5596UP and 5596T Switches Features
The Nexus 5596T shown in Figure 2-33 is a 2RU 1/10-Gbps Ethernet, native Fibre Channel, and FCoE switch. It has 32 fixed ports of 10 G BASE-T and 16 fixed ports of SFP+. The switch supports unified ports on all SFP+ ports; the 10 G BASE-T ports support FCoE up to 30m with Category 6a and Category 7 cables. The switch has three expansion modules.
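As an illustrative sketch of how unified ports are typically repurposed on these platforms, the port type is set per slot and a reload is required for the change to take effect. The slot and port numbers here are assumptions; note that FC ports are generally configured as a contiguous range ending at the highest-numbered port of the module.

! Convert the last two unified ports to native FC (assumed range)
slot 1
  port 31-32 type fc
copy running-config startup-config
reload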

Figure 2-33 Nexus 5596UP and Nexus 5596T Switches

Cisco Nexus 5500 Products Expansion Modules
You can have additional Ethernet and FCoE ports or native Fibre Channel ports with the Nexus 5500 products by adding expansion modules. The Nexus 5548P/5548UP has one expansion module, and the Nexus 5596UP/5596T has three.

The Cisco N55-M16P module shown in Figure 2-34 has 16 ports of 1/10-Gbps Ethernet and FCoE using SFP+ interfaces.

Figure 2-34 Cisco N55-M16P Expansion Module

The Cisco N55-M8P8FP module shown in Figure 2-35 is a 16-port module. It has 8 ports of 1/10-Gbps Ethernet and FCoE using SFP+ interfaces, and 8 ports of 8/4/2/1-Gbps native Fibre Channel using SFP+ and SFP interfaces.

Figure 2-35 Cisco N55-M8P8FP Expansion Module

The Cisco N55-M16UP shown in Figure 2-36 is a 16 unified ports module. It has 16 ports of 1/10-Gbps Ethernet and FCoE using SFP+ interfaces, or up to 16 ports of 8/4/2/1-Gbps native Fibre Channel using SFP+ and SFP interfaces.

Figure 2-36 Cisco N55-M16UP Expansion Module

The Cisco N55-M4Q shown in Figure 2-37 is a 4-port 40-Gbps Ethernet module. Each QSFP 40-GE port can only work in 4×10G mode and supports DCB and FCoE.

Figure 2-37 Cisco N55-M4Q Expansion Module

The Cisco N55-M12T shown in Figure 2-38 is a 12-port 10-Gbps BASE-T module; it supports FCoE up to 30m on Category 6a and Category 7 cables. This module is supported only in the Nexus 5596T.

Figure 2-38 Cisco N55-M12T Expansion Module

The Cisco 5500 Layer 3 daughter card shown in Figure 2-39 is used to enable Layer 3 on the Nexus 5548P and 5548UP. It can be ordered with the system, or it is field upgradable as a spare. This daughter card provides 160 Gbps of Layer 3 forwarding (240 million packets per second [mpps]), which is shared among all 48 ports.

Figure 2-39 Cisco Nexus 5548P and 5548UP Layer 3 Daughter Card

To install the Layer 3 module, there is no need to remove the switch from the rack; power off the switch, and follow the steps shown in Figure 2-40.

Figure 2-40 Cisco Nexus 5548P and 5548UP Layer 3 Daughter Card Upgrade Procedure

To enable Layer 3 on the Nexus 5596UP and 5596T, you must have a Layer 3 expansion module; to add it, you must replace a Layer 2 I/O module. Currently, you can have only one Layer 3 expansion module per Nexus 5596UP and 5596T. The module can be ordered with the system or as a spare, and it provides 160 Gbps of Layer 3 forwarding (240 million packets per second [mpps]), which is shared among all 48 ports. Figure 2-41 shows the Layer 3 expansion module.

Figure 2-41 Cisco Nexus 5596UP and 5596T Layer 3 Expansion Module

Enabling Layer 3 affects the scalability limits for the Nexus 5500. For example, the maximum number of FEXs per Cisco Nexus 5500 Series switch is 24 with Layer 2 only; enabling Layer 3 makes the supported number per Nexus 5500 16. Verify the scalability limits based on the NX-OS release you will be using before creating a design.

Cisco Nexus 5600 Product Family
The Nexus 5600 is the new generation of the Nexus 5000 switches. The Nexus 5600 has two models, Nexus 5672UP and Nexus 56128P. Both models bring integrated L2 and L3, true 40-Gbps flow, 40-Gbps FCoE, cut-through switching for 10/40 Gbps, 1-microsecond port-to-port latency with all frame sizes, and a 25 MB buffer per port ASIC. Table 2-14 shows the summary of the features.

Table 2-14 Nexus 5600 Product Switches Features

Cisco Nexus 5672UP Switch Features
The Nexus 5672UP shown in Figure 2-42 has 48 fixed 1/10-Gbps SFP+ ports, of which 16 ports are unified, meaning that the ports can run 8/4/2-Gbps Fibre Channel as well as 10 Gigabit Ethernet and FCoE connectivity options. True 40-Gbps ports use QSFP+ for Ethernet/FCoE. The switch has two redundant power supplies and three fan modules, and it supports both port-side exhaust and port-side intake airflow.

Figure 2-42 Cisco Nexus 5672UP

Cisco Nexus 56128P Switch Features
The Cisco Nexus 56128P shown in Figure 2-43 is a 2RU switch. It has 48 fixed 1/10-Gbps Ethernet SFP+ ports and four 40-Gbps QSFP+ ports. The 48 fixed SFP+ ports and four 40-Gbps QSFP+ ports support FCoE as well. The Cisco Nexus 56128P has two expansion modules that support 24 unified ports.

Figure 2-43 Cisco Nexus 56128P

The 24 unified ports provide 8/4/2-Gbps Fibre Channel as well as 10 Gigabit Ethernet and FCoE connectivity options, plus two 40-Gbps ports. The switch has four N+N redundant, hot-swappable power supplies; four N+1 redundant, hot-swappable independent fans; and a management and console interface on the fan side of the switch.

Cisco Nexus 5600 Expansion Modules
Expansion modules enable the Cisco Nexus 5600 switches to support unified ports with native Fibre Channel connectivity. The Nexus 56128P currently supports one expansion module, the N56-M24UP2Q expansion module. That module provides 24 ports of 10 G Ethernet/FCoE or 2/4/8 G Fibre Channel and 2 ports of 40 Gigabit QSFP+ Ethernet/FCoE, as shown in Figure 2-44.

Figure 2-44 Cisco Nexus 56128P Unified Port Expansion Module

Cisco Nexus 5500 and Nexus 5600 Licensing Options
Different types of licenses are required for the Nexus 5500 and Nexus 5600. Table 2-15 describes each license and the features it enables. There is also an evaluation license, which is a temporary license. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number). Nexus switches have a grace period, which is the amount of time the features in a license package can continue functioning without a license. Enabling a licensed feature that does not have a license key starts a counter on the grace period. You then have 120 days to install the appropriate license keys, disable the use of that feature, or disable the grace period feature. If at the end of the 120-day grace period the device does not have a valid license key for the feature, the Cisco NX-OS software automatically disables the feature and removes the configuration from the device.


Table 2-15 Nexus 5500 Product Licensing

Note
To manage the Nexus 5500 and Nexus 5600, two types of licenses are needed: the DCNM LAN and the DCNM SAN. Each is a separate license.

Cisco Nexus 3000 Product Family
The Nexus 3000 switches are part of the Cisco Unified Fabric architecture; they offer high performance and high density at ultra-low latency. This product family consists of multiple switch models: the Nexus 3000 Series, Nexus 3100 Series, and Nexus 3500 switches. The Nexus 3000 switches are 1RU server access switches. The Nexus 3000 Series consists of five models: Nexus 3064X, Nexus 3064-32T, Nexus 3064T, Nexus 3016Q, and 3048; they are shown in Figure 2-45. These are compact 1RU 1/10/40-Gbps Ethernet switches. All of them offer wire-rate Layer 2 and Layer 3 on all ports and ultra-low latency. Table 2-16 gives a comparison and specifications of these different models.

Figure 2-45 Cisco Nexus 3000 Switches


Table 2-16 Nexus 3000 Product Model Comparison

The Nexus 3100 switches have four models: Nexus 3132Q, Nexus 3164Q, Nexus 3172Q, and Nexus 3172TQ, shown in Figure 2-46. These are compact 1RU 1/10/40-Gbps Ethernet switches; all of them offer wire-rate Layer 2 and Layer 3 on all ports and ultra-low latency. Table 2-17 gives a comparison and specifications of the different models.

Figure 2-46 Cisco Nexus 3100 Switches

Table 2-17 Nexus 3100 Product Model Comparison

The Nexus 3500 switches have two models, Nexus 3524 and Nexus 3548; they are shown in Figure 2-47. These are compact 1RU 1/10-Gbps Ethernet switches. Both offer wire-rate Layer 2 and Layer 3 on all ports and ultra-low latency. Table 2-18 gives a comparison and specifications of the different models.

Figure 2-47 Cisco Nexus 3500 Switches

Table 2-18 Nexus 3500 Product Model Comparison

Note
The Nexus 3524 and the Nexus 3548 support a forwarding mode called warp mode, which uses a hardware technology called Algo Boost. Algo Boost provides ultra-low latency, as low as 190 nanoseconds (ns). They are very well suited for high-performance trading and high-performance computing environments.

Cisco Nexus 3000 Licensing Options
There are different types of licenses for the Nexus 3000. Table 2-19 shows the license options. It is worth mentioning that the Nexus 3000 doesn't support the grace period feature.


Table 2-19 Nexus 3000 Licensing Options

Cisco Nexus 2000 Fabric Extenders Product Family
The Cisco Nexus 2000 Fabric Extenders behave as remote line cards. They appear as an extension to the parent switch to which they connect. The parent switch can be a Nexus 5000, Nexus 6000, Nexus 7000, or Nexus 9000 Series switch. All Nexus 2000 Fabric Extenders connected to the same parent switch are managed from a single point, so no configuration or software maintenance tasks need to be done on the FEXs. This type of architecture provides the flexibility and benefits of both architectures: top-of-rack (ToR) and end-of-row (EoR). It also enables a highly scalable server access design without the dependency on spanning tree. Figure 2-48 shows both types, ToR and EoR.

Figure 2-48 Cisco Nexus 2000 Top-of-Rack and End-of-Row Design

As shown in Figure 2-48, Rack-01 is using dual redundant Cisco Nexus 2000 Series Fabric Extenders, which are placed at the top of the rack. The cabling between the servers and the Cisco Nexus 2000 Series Fabric Extenders is contained within the rack. The uplink ports on the Cisco Nexus 2000 Series Fabric Extenders, which can be 10 Gbps or 40 Gbps, can be connected to a Cisco Nexus 5000, Cisco Nexus 7000, or Nexus 9000 Series switch that is installed in the EoR position as the parent switch. Only the cables between the Nexus 2000 and the parent switches will run between the racks, reducing cabling between racks. This is a ToR design from a cabling point of view, but from an operational point of view this design looks like an EoR design, because all these Nexus 2000 Fabric Extenders will be managed from the parent switch. Using Nexus 2000 gives you great flexibility when it comes to the type of connectivity and physical topology. Multiple connectivity options can be used to connect the FEX to the parent switch, as shown in Figure 2-49 and explained shortly.

Figure 2-49 Cisco Nexus 2000 Connectivity Options

Straight-through, using static pinning: To achieve a deterministic relationship between the host port on the FEX to which the server connects and the fabric interfaces that connect to the parent switch, the host port is statically pinned to one of the uplinks between the FEX and the parent switch. This method is used when the FEX is connected straight through to the parent switch. You must use the pinning max-links command to create pinned fabric interface connections so that the parent switch can determine a distribution of host interfaces; the host interfaces are divided by the number of max-links and distributed accordingly. The default value is max-links equals 1. The server port will always use the same uplink port.

Straight-through, using dynamic pinning (port channel): You can use this method to load balance between the downlink ports connected to the servers and the fabric ports connected to the parent switch. This method bundles multiple uplink interfaces into one logical port channel. The traffic is distributed using a hashing algorithm: for Layer 2 it uses source MAC and destination MAC; for Layer 3 it uses source MAC, destination MAC, and source IP/destination IP.

Active-active FEX using vPC: In this scenario, the FEX is dual-homed using vPC to different parent switches.

Note
The Cisco Nexus 7000 Series switches currently support only straight-through deployment using dynamic pinning. Static pinning and active-active FEX are currently supported only on the Cisco Nexus 5000 Series switches.

Note
The Cisco Nexus 2248PQ Fabric Extender does not support the static pinning fabric interface connection.

Table 2-20 shows the different models of the 1/10-Gbps fabric extenders.

Table 2-20 Nexus 2000 1/10-Gbps Model Comparison

Table 2-21 shows the different models of the 100-Mbps and 1-Gbps fabric extenders.

Table 2-21 Nexus 2000 100 Mbps/1 Gbps Model Comparison

As shown in Figure 2-50, the Cisco Nexus B22 Fabric Extender is part of the fabric extenders family. It extends the FEX to third-party blade centers from HP, Fujitsu, Dell, and IBM; there is a separate hardware model for each vendor. Similar to the Nexus 2000 fabric extenders, the B22 simplifies the operational model by making the blade switch management part of the parent switch and making the B22 appear as a remote line card to the parent switch.

Figure 2-50 Cisco Nexus B22 Fabric Extender

Similar to the Nexus 2000 Series, the B22 consists of two types of ports: host ports for blade server attachments, and uplink ports, which are called fabric ports. The uplink ports are visible and are used for the connectivity to the upstream parent switch, as shown in Figure 2-51. The B22 topology shown in Figure 2-51 creates a highly scalable server access design with no spanning tree running. The B22 management architecture is similar to the Nexus 2000 Series management architecture; this architecture gives the benefit of centralized management through the Nexus parent switch.

Figure 2-51 Cisco Nexus B22 Access Topology

The number of host ports and fabric ports for each model is shown in Table 2-22.

Table 2-22 Nexus B22 Specifications

Cisco Dynamic Fabric Automation

Besides using the Nexus 7000, Nexus 6000, Nexus 5000, and the Nexus 9000 switches to build standard three-tier data center architectures, you can build a Spine-Leaf architecture called Dynamic Fabric Automation (DFA), as shown in Figure 2-52. The spines are at the core of the network, and nothing connects to the spines except the leaf switches. The leaf nodes are at the edge, and this is where the hosts are connected. The DFA architecture includes fabric management using a Central Point of Management (CPoM) and the automation of configuration of the fabric components. Using open application programming interfaces (APIs), you can integrate automation and orchestration tools to provide control for provisioning of the fabric components, as well as the L4 to L7 services.

Figure 2-52 Cisco Nexus B22 Topology

Cisco DFA enables creation of tenant-specific virtual fabrics and allows these virtual fabrics to be extended anywhere within the physical data center fabric. It uses a 24-bit (16 million) segment identifier to support a large-scale virtual fabric that can scale beyond the traditional 4000 VLANs.

Summary

In this chapter, you learned about the different Cisco Nexus family of switches. The Nexus product family is designed to meet the stringent requirements of the next-generation data center. Following are some points to remember:

With the Nexus product family you can have an infrastructure that can be scaled cost effectively, and that helps you increase energy, budget, and resource efficiency.

The Nexus family provides operational continuity to meet your needs for an environment where system availability is assumed and maintenance windows are rare, if not totally extinct.

The Nexus product family offers the different types of connectivity options that are needed in the data center, spanning from 100 Mbps per port and going up to 100 Gbps per port.

Nexus modular switches are the Nexus 7000, Nexus 7700, and the Nexus 9000. The Nexus 7000 can scale up to 550 Gbps per slot, the Nexus 7700 can scale up to 1.32 Tbps per slot, and the Nexus 9000 can scale up to 3.84 Tbps per slot.

Multiple supervisor modules are available for the Nexus 7000 switches and the Nexus 7700 switches: supervisor module 1, supervisor module 2, and supervisor module 2E.

Nexus modular switches go from 1 Gbps ports up to 100 Gbps ports. If you want 100 Mbps ports, you must use Nexus 2000 fabric extenders.

The Nexus 5600 Series is the latest addition to the Nexus 5000 family of switches. One of the key features it supports is VXLAN, along with true 40 Gbps ports.

The Nexus 6004X is the latest addition to the Nexus 6000 family. It supports unified ports and true 40 Gbps ports with 40 Gbps flows, and it is the only switch in the Nexus 6000 family that supports VXLAN.

Nexus 2000 fabric extenders must have a parent switch connected to them. The Nexus 2000 can be connected to Nexus 7000, Nexus 9000, Nexus 6000, and Nexus 5000 switches, which act as the parent switch for the Nexus 2000.

Nexus 3000 is part of the unified fabric architecture. It is an ultra-low-latency switch that is well suited to high-performance trading and high-performance computing environments.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 2-23 lists a reference for these key topics and the page numbers on which each is found.

Table 2-23 Key Topics for Chapter 2

Complete Tables and Lists from Memory

Print a copy of Appendix B, “Memory Tables” (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, “Memory Tables Answer Key,” also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

Application Centric Infrastructure (ACI)
port channel (PC)
virtual port channel (vPC)
Fabric Extender (FEX)
Top of Rack (ToR)
End of Row (EoR)
Dynamic Fabric Automation (DFA)
Virtual Extensible LAN (VXLAN)

Chapter 3. Virtualizing Cisco Network Devices

This chapter covers the following exam topic:

Describe Device Virtualization

There are two types of device virtualization when it comes to Nexus devices. The first is how to partition one physical switch and make it appear as multiple switches, each with its own administrative domain and security policy. The other option is how to make multiple switches appear as if they are one big modular switch. Each approach has its own benefits and business requirements. This chapter covers the virtualization capabilities of the Nexus 7000 and Nexus 5000 switches using Virtual Device Context (VDC) and Network Interface Virtualization (NIV).

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 3-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 3-1 “Do I Know This Already?” Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. How many VDCs are supported on Nexus 7000 SUP 1 and SUP 2E (do not count the admin VDC)?
a. SUP1 supports 4; SUP 2E supports 4
b. SUP1 supports 4; SUP 2E supports 6
c. SUP1 supports 2; SUP 2E supports 8
d. SUP1 supports 4; SUP 2E supports 8

2. Choose the different types of VDCs that can be created on the Nexus 7000. (Choose three.)
a. Default VDC
b. Admin VDC

c. Storage VDC
d. OTV VDC

3. True or false? We can assign physical interfaces to an admin VDC.
a. True
b. False

4. Which command is used to access a nondefault VDC from the default VDC?
a. connectTo
b. switchto vdc {vdc name}
c. VDC {id} command and press Enter
d. switchto {vdc name}

5. True or false? The Nexus 2000 Series switch can be used as a standalone access switch.
a. True
b. False

6. The VN-Tag is being standardized under which IEEE standard?
a. 802.1qpg
b. 802.1Qbh
c. 802.1BR
d. 802.Qbr

7. In the VN-Tag fields, what does the acronym VIF stand for?
a. VN-Tag initial frame
b. Virtual interface
c. Virtualized interface
d. VN-Tag interface

Foundation Topics

Describing VDCs on the Cisco Nexus 7000 Series Switch

The Virtual Device Context (VDC) is used to virtualize the Nexus 7000 switch, meaning presenting the Nexus 7000 switch as multiple logical switches. Within a VDC, you can have a unique set of VLANs and VRFs, allowing the virtualization of the control plane. Each VDC has its own administrative domain, allowing the management plane as well to be virtualized. You can assign a line card to a VDC or a set of ports to a VDC. Without creating any VDC on the Nexus 7000, the control plane runs a single VDC; within this VDC, multiple processes and Layer 2 and Layer 3 services are running, as shown in Figure 3-1. This VDC is always active and cannot be deleted.

Figure 3-1 Single VDC Operation Mode Structure

Virtualization using VLANs and VRFs within this VDC can still be used. When enabling multiple VDCs, these processes are replicated for each VDC. When enabling Role-Based Access Control (RBAC) for each VDC, each VDC administrator interfaces with the processes for his or her own VDC. You can have fault isolation, separate administration per VDC, separate data traffic, and hardware partitioning. Figure 3-2 shows what the structure looks like when creating multiple VDCs. Within each VDC you can have duplicate VLAN names and VRF names, meaning you can have VRF production in VDC1 and VRF production in VDC2.
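As a small illustration of this name independence, the same VRF name can be defined in two different VDCs without any conflict; the VDC prompts here are hypothetical.

N7K-VDC1(config)# vrf context production
N7K-VDC2(config)# vrf context production

Each vrf context command creates a completely separate routing table, because each VDC runs its own independent protocol processes.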

Figure 3-2 Multiple VDC Operation Mode Structure

The following are typical use cases for the Cisco Nexus 7000 VDC:

Creation of separate production, development, and test environments
Creation of intranet, extranet, and DMZ
Creation of separate organizations on the same physical switch
Creation of separate application environments
Creation of separate departments for the same organization due to different administration domains

VDC separation is industry certified: it has NSS Labs certification for Payment Card Industry (PCI) compliant environments, Federal Information Processing Standards (FIPS 140-2) certification, and Common Criteria Evaluation and Validation Scheme (CCEVS) 10349 certification with EAL4 conformance. VDCs enable the reduction of the physical footprint in the data center, which in turn saves space, power, and cooling.

VDC Deployment Scenarios

There are multiple deployment scenarios using VDCs, which can be used in the design of the data center.

Horizontal Consolidation Scenarios

VDCs can be used to consolidate multiple physical devices that share the same function role. In horizontal consolidation, you can consolidate core functions and aggregation functions. For example, using a pair of Nexus 7000 switches, you can build two redundant data center cores, as shown in Figure 3-3. This can happen by creating two VDCs: VDC1 will accommodate Core A, and VDC2 will accommodate Core B. You can allocate ports or line cards to the VDCs, which can be useful in facilitating migration scenarios; after the migration, you can reallocate the old core ports to the new core.

Figure 3-3 Horizontal Consolidations for Core Layer

Aggregation layers can also be consolidated, so rather than building a separate aggregation layer for test and development, you can create multiple VDCs called Aggregation 1 and Aggregation 2. Aggregation 1 and Aggregation 2 are completely isolated, with separate role-based access and separate configuration files. Figure 3-4 shows multiple aggregation layers created on the same physical devices; a minimal configuration sketch follows.
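This sketch is run from the default VDC, with hypothetical VDC names and module numbers.

N7K(config)# vdc Aggregation1
N7K(config-vdc)# allocate interface ethernet 3/1-24
N7K(config-vdc)# exit
N7K(config)# vdc Aggregation2
N7K(config-vdc)# allocate interface ethernet 4/1-24

Each VDC then gets its own configuration file, administrators, and protocol processes, exactly as if two separate aggregation switches had been installed.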

Figure 3-4 Horizontal Consolidation for Aggregation Layers

Vertical Consolidation Scenarios

When using VDCs in vertical consolidation, you can consolidate core functions and aggregation functions. Although the port count is the same, it can be useful because it reduces the number of physical switches. For example, using a pair of Nexus 7000 switches, you can build a redundant core and aggregation layer, as shown in Figure 3-5. This can happen by creating two VDCs: VDC1 accommodates Core A and Aggregation A; VDC2 accommodates Core B and Aggregation B. You can allocate ports or line cards to the VDCs.

Figure 3-5 Vertical Consolidation for Core and Aggregation Layers

Another scenario is consolidation of core, aggregation, and access layers while you maintain the same hierarchy. Figure 3-6 shows that scenario, in which we used a pair of Nexus 7000 switches to build a three-tier architecture.

Figure 3-6 Vertical Consolidation for Core, Aggregation, and Access Layers

VDCs for Service Insertion

VDCs can be used for service insertion and policy enforcement. By creating two separate VDCs for two logical areas, inside and outside, traffic must go out of VDC-A to reach VDC-B, and the only way is through the firewall, as shown in Figure 3-7. Each VDC has its own administrative domain, and, as you can see, for traffic to cross from VDC-A to VDC-B it must pass through the firewall. This design can make service insertion more deterministic and secure.

Figure 3-7 Using VDCs for Firewall Insertion

Understanding Different Types of VDCs

As explained earlier in the chapter, the NX-OS on the Nexus 7000 supports Virtual Device Contexts (VDCs). VDCs simply partition the physical Nexus device into multiple logical devices with a separate control plane, data plane, and management plane. Connecting two VDCs on the same physical switch must be done through external cables; you cannot connect VDCs together from inside the chassis.

The following list describes the different types of VDCs that can be created on the Nexus 7000:

Default VDC: When you first log in to the Nexus 7000, you log in to the default VDC. The default VDC is a full-function VDC; the admin always uses it to manage the physical device and the other VDCs created. The default VDC has certain characteristics and tasks that can be done only in the default VDC or the Admin VDC (discussed later in the chapter), which can be summarized in the following:

VDC creation/deletion/suspend
Resource allocation (interfaces, memory)
NX-OS upgrade across all VDCs
EPLD upgrade, as directed by TAC or to enable new features
Ethanalyzer captures (control plane traffic)
Feature-set installation for Nexus 2000, FabricPath, and FCoE
Control Plane Policing (CoPP)
Port channel load balancing
Hardware IDS checks control
ACL capture feature enabled

Note
With CPU shares, you don't dedicate a CPU to a VDC; rather, you give access to the CPU and assign priorities. CPU shares are supported only with SUP2/2E.

Nondefault VDC: The nondefault VDC is a fully functional VDC that can be created from the default VDC or the Admin VDC. Changes in a nondefault VDC affect only that VDC. An independent process is started for each protocol per VDC created. There is a separate configuration file per VDC, and each VDC has its own RBAC, SNMP, and so on.

Admin VDC: The Admin VDC is used for administration functions. You can enable it starting from NX-OS 6.1 for SUP2/2E modules, and starting from NX-OS 6.2(2) for the SUP1 module. You can create the Admin VDC in different ways:

During a fresh switch boot, you will be prompted to select an Admin VDC.
Enter the system admin-vdc command after boot. In that case, the default VDC becomes the Admin VDC, and all nonglobal configuration in the default VDC will be lost.
Enter the system admin-vdc migrate new vdc name command. All nonglobal configuration on the default VDC will be migrated to the new VDC. You must be careful or you will lose the nonglobal configuration.

The Admin VDC has certain characteristics that are unique to it. You cannot assign any interface to the Admin VDC; only mgmt0 can be assigned to it. Features and feature sets cannot be enabled in an Admin VDC. After the Admin VDC is created, it cannot be deleted or changed back to the default VDC; to change back to the default VDC, you must erase the configuration and do a fresh boot script.

Note
Creating the Admin VDC doesn't require the advanced package license or the VDC license.

Storage VDC: The storage VDC is a nondefault VDC that helps maintain the operational model when the customer has a LAN admin and a SAN admin. We can create only one storage VDC on the physical device. The storage VDC relies on the FCoE license and doesn't need a VDC license; however, it counts toward the total number of VDCs. After the creation of the storage VDC, we assign interfaces to it. There are two types of interfaces, dedicated FCoE interfaces or shared interfaces, which can carry both Ethernet and FCoE traffic. Figure 3-8 shows the two types of ports; a brief sketch of creating these VDC types follows.
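The following sketch shows the commands just described; the VDC names are hypothetical, and the confirmation prompts NX-OS prints before moving configuration are omitted.

! Convert the default VDC into an Admin VDC, migrating the nonglobal
! configuration into a new VDC named core (hypothetical name)
N7K(config)# system admin-vdc migrate core
! Create the single storage VDC allowed on the chassis
N7K(config)# vdc fcoe type storage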

Figure 3-8 Storage VDC Interface Types

Note
To run FCoE, Sup2/2E is required.

With Supervisor 1 you can have up to four VDCs and one Admin VDC; the Admin VDC on Supervisor 1 requires 8 GB of RAM, because you must upgrade to NX-OS 6.2. Supervisor engine 2 can have up to four VDCs and one Admin VDC. With Supervisor engine 2E you can have up to eight VDCs and one Admin VDC. Beyond four VDCs on Supervisor 2E, a license is required to increment the VDC count by an additional four.

Interface Allocation

The only thing you can assign to a VDC is physical ports. At the beginning, and before creating any VDC, all interfaces belong to the default VDC. After you create a VDC, you start assigning interfaces to that VDC. A physical interface can belong to only one VDC at a given time; the exception is a shared interface, which belongs to one Ethernet VDC and one storage VDC at the same time. When you allocate a physical interface to a VDC, all configuration that existed on it gets erased, and the physical interface is then configured within the VDC. Logical interfaces, such as tunnel interfaces and switch virtual interfaces, are created within the VDC, and within a VDC, physical and logical interfaces can be assigned to VLANs or VRFs.

Note
All members of a port group are automatically allocated to the VDC when you allocate an interface; interfaces belonging to the same port group must belong to the same VDC. For example, in the N7K-M132XP-12 (4 interfaces × 8 port groups = 32 interfaces) and the N7K-M132XP-12L (same as the non-L M132), you can configure eight port groups. All M132 cards require allocation in groups of four ports. See the example for this module in Figure 3-9.

Figure 3-9 VDC Interface Allocation Example

VDC Administration

The operations allowed for a user in a VDC depend on the role assigned to that user. Users can have multiple roles assigned to them, and NX-OS provides four default user roles:

Network-admin: The first user account that is created on a Cisco Nexus 7000 Series switch in the default VDC is admin. The network-admin role is automatically assigned to this user. The network-admin role gives a user complete read and write access to the device, and it is available only in the default VDC. This role includes the ability to create, delete, or change nondefault VDCs.

Network-operator: The second default role that exists on Cisco Nexus 7000 Series switches is the network-operator role. This role grants the user read-only rights in the default VDC. By default, no users are assigned to this role; the role must be assigned specifically to a user by a user who has network-admin rights. The network-operator role includes the right to issue the switchto command, which can be used to access a nondefault VDC from the default VDC.

VDC-admin: When a new VDC is created, the first user account on that VDC is admin. The VDC-admin role is automatically assigned to this admin user on a nondefault VDC. This role gives a user complete control over that specific nondefault VDC; however, this user does not have any rights in any other VDC and cannot access other VDCs through the switchto command.

VDC-operator: The VDC-operator role has read-only rights for a specific VDC. This role has no rights to any other VDC.

When a network-admin or network-operator user accesses a nondefault VDC by using the switchto command, that user is mapped to a role of the same level in the nondefault VDC. That means a user with the network-admin role is given the VDC-admin role in the nondefault VDC, and a user with the network-operator role is given the VDC-operator role in the nondefault VDC.

Note
You can create custom roles for a user, but you cannot change the default user roles for a VDC. User roles are not shared between VDCs; each VDC maintains its own user role database.
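For example, a read-only operator for the current VDC can be created with the built-in role; the username and password here are placeholders.

N7K-Core-prod(config)# username viewer password Ch4ngeMe role vdc-operator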

VDC Requirements

To create VDCs, an advanced license is required. The license is associated with the serial number of a Cisco Nexus 7000 switch chassis. You can try the feature for the 120-day grace period; when the grace period expires, any nondefault VDCs will be removed from the switch configuration. The storage VDC is enabled using the storage license associated with a specific line card.

Note
The grace period operates across all features in a license package, and a license package can contain several features. The countdown for the 120 days doesn't stop while a single feature of the license package is still enabled. So, if you enable a feature to try it, but another feature of the same package was already enabled 100 days before, you are left with 20 days to install the license. To stop the grace period, all the features of the license package must be stopped.

As stated before, VDCs are created or deleted from the default VDC only, and you need network-admin privileges to do that. Physical interfaces also are assigned to nondefault VDCs from the default VDC.

Verifying VDCs on the Cisco Nexus 7000 Series Switch

The next few examples show commands that display useful information about VDCs. As shown in Example 3-1, the show feature-set command is used to verify and show which feature sets have been enabled or disabled in a VDC.

Example 3-1 Displaying the Status of a Feature Set on a Switch

N7K-VDC-A# show feature-set
Feature Set Name      ID    State
--------------------  ----  --------
fabricpath            2     enabled
fex                   3     disabled
N7K-VDC-A#

The show feature-set command in Example 3-1 shows that the FabricPath feature is enabled and the FEX feature is disabled.

The show vdc command displays different outputs depending on where you execute it. If you are executing the command from the default VDC, it displays information about all VDCs on the physical device, as shown in Example 3-2.

Example 3-2 Displaying Information About All VDCs (Running the Command from the Default VDC)

N7K-Core# show vdc
vdc_id  vdc_name  state   mac
------  --------  ------  -----------------
1       prod      active  00:18:ba:d8:3f:fd
2       dev       active  00:18:ba:d8:3f:fe
3       MyVDC     active  00:18:ba:d8:3f:ff

If you are executing the show vdc command within a nondefault VDC, the output displays information about the current VDC only, which in our case is N7K-Core-Prod, as shown in Example 3-3.

Example 3-3 Displaying Information About the Current VDC (Running the Command from the Nondefault VDC)

N7K-Core-Prod# show vdc
vdc_id  vdc_name  state   mac
------  --------  ------  -----------------
1       prod      active  00:18:ba:d8:3f:fd

To display detailed information about all VDCs, execute show vdc detail from the default VDC, as shown in Example 3-4.

Example 3-4 Displaying Detailed Information About All VDCs

N7K-Core# show vdc detail
vdc id: 1
vdc name: N7K-Core
vdc state: active
vdc mac address: 00:22:55:79:a4:c1
vdc ha policy: RELOAD
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
vdc create time: Thu May 14 08:14:39 2012
vdc restart count: 0

vdc id: 2
vdc name: prod
vdc state: active
vdc mac address: 00:22:55:79:a4:c2
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
vdc create time: Thu May 14 08:15:22 2012
vdc restart count: 0

vdc id: 3
vdc name: dev
vdc state: active
vdc mac address: 00:22:55:79:a4:c3
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
vdc create time: Thu May 14 08:15:29 2012
vdc restart count: 0

You can display detailed information about the current VDC by executing show vdc {vdc name} detail from the nondefault VDC, as shown in Example 3-5.

Example 3-5 Displaying Detailed Information About the Current VDC When You Are in the Nondefault VDC

N7K-Core-prod# show vdc prod detail
vdc id: 2
vdc name: prod
vdc state: active
vdc mac address: 00:22:55:79:a4:c2
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
vdc create time: Thu May 14 08:15:22 2012
vdc restart count: 0

To verify the interface allocation per VDC, execute show vdc membership from the default VDC, as shown in Example 3-6.

Example 3-6 Displaying Interface Allocation per VDC

N7K-Core# show vdc membership
vdc_id: 1 vdc_name: N7K-Core interfaces:
    Ethernet2/1   Ethernet2/2   Ethernet2/3
    Ethernet2/4   Ethernet2/5   Ethernet2/6
    Ethernet2/7   Ethernet2/8   Ethernet2/9
    Ethernet2/10  Ethernet2/11  Ethernet2/12
    Ethernet2/13  Ethernet2/14  Ethernet2/15
    Ethernet2/16  Ethernet2/17  Ethernet2/18
    Ethernet2/19  Ethernet2/20  Ethernet2/21
    Ethernet2/22  Ethernet2/23  Ethernet2/24
    Ethernet2/25  Ethernet2/26  Ethernet2/27
    Ethernet2/28  Ethernet2/29  Ethernet2/30
    Ethernet2/31  Ethernet2/32  Ethernet2/33
    Ethernet2/34  Ethernet2/35  Ethernet2/36
    Ethernet2/37  Ethernet2/38  Ethernet2/39
    Ethernet2/40  Ethernet2/41  Ethernet2/42
    Ethernet2/43  Ethernet2/44  Ethernet2/45
    Ethernet2/48
vdc_id: 2 vdc_name: prod interfaces:
    Ethernet2/47
vdc_id: 3 vdc_name: dev interfaces:
    Ethernet2/46

If you execute show vdc membership from the VDC you are currently in, it will show the interface membership only for that VDC.

You can save all the running configurations to the startup configuration for all VDCs using one command: the copy running-config startup-config vdc-all command, which is executed from the default VDC, as shown in Example 3-7.

Example 3-7 Saving All Configurations for All VDCs

N7K-Core# copy running-config startup-config vdc-all
[########################################] 100%

To display the running configurations for all VDCs, use the show running-config vdc-all command, executed from the default VDC. You cannot see the configuration for other VDCs except from the default VDC.

You can navigate from the default VDC to a nondefault VDC using the switchto command, as shown in Example 3-8.

Example 3-8 Navigating Between VDCs on the Nexus 7000 Switch

N7K-Core# switchto vdc Prod
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2008, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under license.
Certain components of this software are licensed under the GNU General
Public License (GPL) version 2.0 or the GNU Lesser General Public
License (LGPL) Version 2.1. A copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
N7K-Core-Prod#

To switch back to the default VDC, use the switchback command. You cannot execute the switchto command from a nondefault VDC; to move to another VDC, you must first do a switchback to the default VDC. Use these commands for the initial setup, when you create the user accounts and configure the IP connectivity; after that, you can use Secure Shell (SSH) and Telnet to connect to the desired VDC.

Describing Network Interface Virtualization

You can deliver an extensible and scalable fabric solution using the Cisco FEX technology. The FEX technology can be extended all the way to the server hypervisor. This technology enables operational simplicity by having a single point of management and policy enforcement on the access parent switch.

Cisco Nexus 2000 FEX Terminology

The following terminology and acronyms are used with the Nexus 2000 fabric extenders:

Network Interface (NIF): Port on the FEX, connecting to the parent switch.

Host Interface (HIF): Front panel port on the FEX, connecting to the server.

Virtual Interface (VIF): Logical construct consisting of a front panel port, VLAN, and several other parameters.

FEX port: Server-facing port on the FEX, also referred to as a server port or host port in this presentation. The FEX port is also called a Host Interface (HIF).

Fabric Extender (FEX): The Nexus 2000 is the instantiation of FEX.

Fabric links (Fabric Port Channel or FPC): Links that connect the FEX NIFs and the FEX fabric ports on the parent switch.

Fabric Port: The N7K side of the link connected to the FEX.

Fabric Port Channel: Port channel between the N7K and the FEX.

FEX uplink: Network-facing port on the FEX side of the link connected to the N7K. The FEX uplink is also called a Network Interface (NIF).

FEX A/A or dual-attached FEX: Scenario in which the fabric links of the FEX are connected, and actively forwarding traffic, to two parent switches.

Parent switch: Switch (N7K/N5K) where the FEX is connected.

Logical interface (LIF): Presentation of a front panel port in the parent switch.

Figure 3-10 shows the ports with reference to the physical hardware.

Figure 3-10 Nexus 2000 Interface Types and Acronyms

Nexus 2000 Series Fabric Extender Connectivity

The Cisco Nexus 2000 Series cannot run as a standalone switch; it needs a parent switch. This switch can be a Nexus 9000, Nexus 7000, Nexus 6000, Nexus 5600, or Nexus 5000 Series. Based on the type and the density of the access ports required, dual redundant Nexus 2000 fabric extenders are placed at the top of the rack. The cabling between the servers and the Nexus 2000 Series fabric extenders is contained within the rack. The uplink ports from the Nexus 2000 are connected to the parent switch, which is installed at the EoR, at 10/40 Gbps. Only a small number of cables run between the racks.

This type of design combines the benefit of the Top-of-Rack (ToR) design with the benefit of the End-of-Row (EoR) design. From a cabling point of view, this design is a ToR design. From the logical network deployment point of view, this design is an EoR design. From an operation perspective, this design has the advantage of the EoR design: all configuration and maintenance tasks are done from the parent switch, and no operational tasks are required to be performed on the FEXs.

The FEX acts as a remote line card, as stated earlier. The physical ports on the Nexus 2000 fabric extender appear as logical ports on the parent switch, and the hosts connected to the Nexus 2000 fabric extender appear as if they are directly connected to the parent switch. All control and management functions are performed by the parent switch. Forwarding is also performed by the parent switch; at the time of writing this book, there is no local switching happening on the Nexus 2000 fabric extender, so all traffic must be sent to the parent switch. Figure 3-11 shows the physical and logical view for the Nexus 2000 fabric extender deployment. Refer to Chapter 2, "Cisco Nexus Product Family," in the section "Cisco Nexus 2000 Fabric Extenders Product Family," to see the different models and specifications.

Figure 3-11 Nexus 2000 Deployment Physical and Logical View

VN-Tag Overview

The Cisco Nexus 2000 Series fabric extender acts as a remote line card to a parent switch. A frame exchanged between the Nexus 2000 fabric extender and the parent switch has an information tag inserted in it called VN-Tag, which enables advanced functions and policies to be applied to it. The host connected to the Nexus 2000 fabric extender is unaware of any tags. VN-Tag is a Network Interface Virtualization (NIV) technology. At the beginning, VN-Tag was being standardized under the IEEE 802.1Qbh working group. However, 802.1Qbh was withdrawn and is no longer an approved IEEE project; the effort was moved to IEEE 802.1BR in July 2011.

The Nexus 2000 host ports (HIFs) do not expect to receive BPDUs on the host ports; they expect only end hosts, like servers.

Note
You can connect a switch to the FEX host interfaces, but this is not a recommended solution, because it might create a loop in the network. To make this work, you need to turn off Bridge Protocol Data Units (BPDUs) on the access switch and turn on "bpdufilter" on the Nexus 2000 host port, as sketched next.
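The following is a minimal sketch of the host-port side only; the FEX interface number is hypothetical, and the matching BPDU suppression on the attached access switch is not shown.

N5K(config)# interface ethernet 101/1/1
N5K(config-if)# switchport mode access
N5K(config-if)# spanning-tree bpdufilter enable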

Figure 3-12 shows the VN-Tag fields in an Ethernet frame.

Figure 3-12 VN-Tag Fields in an Ethernet Frame

d: Direction bit (0 is host-to-network forwarding; 1 is network-to-host)
p: Pointer bit (set if this is a multicast frame and requires egress replication)
L: Looped filter (set if sending back to the source Cisco Nexus 2000 Series)
VIF: Virtual interface

Cisco Nexus 2000 FEX Packet Flow

The packet processing on the Nexus 2000 happens in two parts: when the host sends a packet to the network, and when the network sends a packet to the host. Figure 3-13 shows the two scenarios.

Figure 3-13 Cisco Nexus 2000 Packet Flow

When the host sends a packet to the network (diagrams A and B), these events occur:

1. The frame arrives from the host.

2. The Cisco Nexus 2000 Series switch adds a VN-Tag, and the packet is forwarded over a fabric link using a specific VN-Tag. The Cisco Nexus 2000 Series switch adds a unique VN-Tag for each Cisco Nexus 2000 Series host interface. These are the VN-Tag field values:
The direction bit is set to 0, indicating host-to-network forwarding.
The source virtual interface is set based on the ingress host interface.
The p (pointer), l (looped), and destination virtual interface are undefined (0).

3. The Cisco Nexus switch applies an ingress policy that is based on the physical Cisco Nexus Series switch port and the logical interface: access control and forwarding are based on frame fields and virtual (logical) interface policy; physical link-level properties are based on the Cisco Nexus Series switch port.

4. The Cisco Nexus switch strips the VN-Tag and sends the packet to the network.

When the network sends a packet to the host (diagram C), these events occur:

1. The frame is received on the physical or logical interface. The Cisco Nexus switch performs standard lookup and policy processing, and the egress port is determined to be a logical interface (Cisco Nexus 2000 Series) port. The Cisco Nexus switch inserts a VN-Tag with these characteristics:
The direction is set to 1 (network to host).
The destination virtual interface is set to the Cisco Nexus 2000 Series port VN-Tag.
The source virtual interface is set if the packet was sourced from a Cisco Nexus 2000 Series port.
The p bit is set if this frame is a multicast frame and requires egress replication.
The l (looped) bit filter is set if sending back to a source Cisco Nexus 2000 Series switch.

2. The packet is forwarded over the fabric link using the specific VN-Tag.

3. The Cisco Nexus 2000 Series switch extracts and strips the VN-Tag, which identifies the logical interface that corresponds to the physical host interface on the Cisco Nexus 2000 Series, and the frame is forwarded to the host interface.

Cisco Nexus 2000 FEX Port Connectivity

You can connect the Nexus 2000 Series fabric extender to one of the following Ethernet modules installed in the Cisco Nexus 7000 Series, depending on the type and model of the card installed in it:

12-port 40-Gigabit Ethernet F3 Series QSFP I/O module (N7K-F312FQ-25) for Cisco Nexus 7000 Series switches
24-port Cisco Nexus 7700 F3 Series 40-Gigabit Ethernet QSFP I/O module (N77-F324FQ-25)
48-port Cisco Nexus 7700 F3 Series 1/10-Gigabit Ethernet SFP+ I/O module (N77-F348XP-23)
32-port 10-Gigabit Ethernet SFP+ I/O module (N7K-M132XP-12)
32-port 10-Gigabit Ethernet SFP+ I/O module (N7K-M132XP-12L)
24-port 10-Gigabit Ethernet M2 Series I/O module XL (N7K-M224XP-23L)
48-port 1/10-Gigabit Ethernet SFP+ F2 Series I/O module (N7K-F248XP-25)
Enhanced 48-port 1/10-Gigabit Ethernet SFP+ I/O module (F2e Series) (N7K-F248XP-25E)

48-port 1/10-Gigabit Ethernet SFP+ I/O module (F2e Series) (N77-F248XP-23E)

Figure 3-14 shows the connectivity between the Cisco Nexus 2000 fabric extender and the N7K-M132XP-12 line card.

Figure 3-14 Cisco Nexus 2000 Connectivity to N7K-M132XP-12

There are two ways to connect the Nexus 2000 to the parent switch: either use static pinning, which makes the FEX use individual links connected to the parent switch, or use a port channel when connecting the fabric interfaces on the FEX to the parent switch.

In static pinning, when the FEX is powered up and connected to the parent switch, its host interfaces are distributed equally among the available fabric interfaces. As a result, the bandwidth that is dedicated to each end host toward the parent switch is never changed by the switch but is instead always specified by you. You must use the pinning max-links command to create several pinned fabric interface connections so that the parent switch can determine a distribution of host interfaces; the host interfaces are divided by the number of max-links and distributed accordingly. The default value is max-links 1. The drawback here is that if a fabric link goes down, all the associated host interfaces go down and will remain down as long as the fabric port is down.

To provide load balancing between the host interfaces and the parent switch, you can configure the FEX to use a port channel fabric interface connection. This connection bundles 10-Gigabit Ethernet fabric interfaces into a single logical channel. A fabric interface that fails in the port channel does not trigger a change to the host interfaces; traffic is automatically redistributed across the remaining links in the port channel fabric interface. If all links in the fabric port channel go down, all host interfaces on the FEX are set to the down state.
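One operational note on static pinning: recovered fabric links are not automatically repopulated with host interfaces. On parent switches and releases that support it, the redistribution is triggered manually with an exec command along the following lines (the FEX ID is illustrative), and it is disruptive to the repinned host ports.

N5K# fex pinning redistribute 101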

When you have VDCs created on the Nexus 7000, you must follow certain rules: all FEX fabric links belong to the same VDC, all FEX host ports belong to the same VDC, and FEX IDs are unique across VDCs (the same FEX ID cannot be used across different VDCs). Figure 3-15 shows the FEX connectivity to the Nexus 7000 when you have multiple VDCs.

Figure 3-15 Cisco Nexus 2000 and VDCs

Cisco Nexus 2000 FEX Configuration on the Nexus 7000 Series

How you configure the Cisco Nexus 2000 Series on the Cisco Nexus 7000 Series is different from the configuration on the Cisco Nexus 5000, 6000, and 5600 Series switches. The difference comes from the VDC-based architecture of the Cisco Nexus 7000 Series switches. Use the following steps to connect the Cisco Nexus 2000 to the Cisco Nexus 7000 Series parent switch when connected to one of the supported line cards.

Example 3-9 Installing the Fabric Extender Feature Set in the Default VDC

N7K-Agg(config)# install feature-set fex

Example 3-10 Enabling the Fabric Extender Feature Set

N7K-Agg(config)# feature-set fex

By default, when you install the fabric extender feature set, it is allowed in all VDCs. You can disallow the installed fabric extender feature set in a specific VDC on the device.

Example 3-11 Entering the Chassis Mode for the Fabric Extender

N7K-Agg(config)# fex 101

N7K-Agg(config-fex)# description Rack8-N2k
N7K-Agg(config-fex)# type N2232P

Example 3-12 Defining the Number of Uplinks

N7K-Agg(config-fex)# pinning max-links 1

You can use this command if the fabric extender is connected to its parent switch using one or more statically pinned fabric interfaces. There can be only one port channel connection.

Example 3-13 Disallowing the Fabric Extender Feature Set in a VDC

N7K-Agg# configure terminal
N7K-Agg(config)# vdc 1
N7K-Agg(config-vdc)# no allow feature-set fex
N7K-Agg(config-vdc)# end
N7K-Agg#

Note
In Example 3-13, vdc 1 is the VDC ID, so it might be different depending on which VDC you want to configure.

The next step is to associate the fabric extender with a port channel interface on the parent device. In Example 3-14, we create a port channel with four member ports.

Example 3-14 Associating the Fabric Extender with a Port Channel Interface

N7K-Agg# configure terminal
N7K-Agg(config)# interface ethernet 1/28
N7K-Agg(config-if)# channel-group 4
N7K-Agg(config-if)# no shutdown
N7K-Agg(config-if)# exit
N7K-Agg(config)# interface ethernet 1/29
N7K-Agg(config-if)# channel-group 4
N7K-Agg(config-if)# no shutdown
N7K-Agg(config-if)# exit
N7K-Agg(config)# interface ethernet 1/30
N7K-Agg(config-if)# channel-group 4
N7K-Agg(config-if)# no shutdown
N7K-Agg(config-if)# exit
N7K-Agg(config)# interface ethernet 1/31
N7K-Agg(config-if)# channel-group 4
N7K-Agg(config-if)# no shutdown
N7K-Agg(config-if)# exit
N7K-Agg(config)# interface port-channel 4

N7K-Agg(config-if)# switchport
N7K-Agg(config-if)# switchport mode fex-fabric
N7K-Agg(config-if)# fex associate 101

After the FEX is associated successfully, you must do some verification to make sure everything is configured properly.

Example 3-15 Verifying the FEX Association

N7K-Agg# show fex
FEX     FEX          FEX     FEX              FEX
Number  Description  State   Model            Serial
------------------------------------------------------------------
101     FEX0101      Online  N2K-C2248TP-1GE  JAF1418AARL

Example 3-16 Displaying Detailed Information About a Specific FEX

N7K-Agg# show fex 101 detail
FEX: 101 Description: FEX0101 state: Online
  FEX version: 5.1(1) [Switch version: 5.1(1)]
  FEX Interim version: 5.1(0.159.6)
  Switch Interim version: 5.1(1)
  Extender Model: N2K-C2248TP-1GE, Extender Serial: JAF1418AARL
  Part No: 73-12748-05
  Card Id: 99, Mac Addr: 54:75:d0:a9:49:42, Num Macs: 64
  Module Sw Gen: 21 [Switch Sw Gen: 21]
  pinning-mode: static Max-links: 1
  Fabric port for control traffic: Po101
  Fabric interface state:
    Po101 - Interface Up. State: Active
    Eth2/1 - Interface Up. State: Active
    Eth2/2 - Interface Up. State: Active
    Eth4/1 - Interface Up. State: Active
    Eth4/2 - Interface Up. State: Active
  Fex Port      State  Fabric Port  Primary Fabric
    Eth101/1/1     Up        Po101           Po101
    Eth101/1/2     Up        Po101           Po101
    Eth101/1/3   Down        Po101           Po101
    Eth101/1/4   Down        Po101           Po101

Example 3-17 Displaying Which FEX Interfaces Are Pinned to Which Fabric Interfaces

N7K-Agg# show interface port-channel 101 fex-intf
Fabric        FEX
Interface     Interfaces
---------------------------------------------------
Po101         Eth101/1/2    Eth101/1/1

Example 3-18 Displaying the Host Interfaces That Are Pinned to a Port Channel Fabric Interface

N7K-Agg# show interface port-channel 4 fex-intf
Fabric        FEX
Interface     Interfaces
---------------------------------------------------
Po4           Eth101/1/48   Eth101/1/47   Eth101/1/46   Eth101/1/45
              Eth101/1/44   Eth101/1/43   Eth101/1/42   Eth101/1/41
              Eth101/1/40   Eth101/1/39   Eth101/1/38   Eth101/1/37
              Eth101/1/36   Eth101/1/35   Eth101/1/34   Eth101/1/33
              Eth101/1/32   Eth101/1/31   Eth101/1/30   Eth101/1/29
              Eth101/1/28   Eth101/1/27   Eth101/1/26   Eth101/1/25
              Eth101/1/24   Eth101/1/23   Eth101/1/22   Eth101/1/21
              Eth101/1/20   Eth101/1/19   Eth101/1/18   Eth101/1/17
              Eth101/1/16   Eth101/1/15   Eth101/1/14   Eth101/1/13
              Eth101/1/12   Eth101/1/11   Eth101/1/10   Eth101/1/9
              Eth101/1/8    Eth101/1/7    Eth101/1/6    Eth101/1/5
              Eth101/1/4    Eth101/1/3    Eth101/1/2    Eth101/1/1

Summary

VDCs on the Nexus 7000 switches enable true device virtualization; each VDC has its own management domain, allowing the management plane also to be virtualized. To send traffic between VDCs, you must use external physical cables, which leads to greater security of user data.

The Cisco Nexus 2000 cannot operate in standalone mode; it needs a parent switch to connect to. The FEX technology can be extended all the way to the server hypervisor. This technology enables operational simplicity by having a single point of management and policy enforcement on the access parent switch.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 3-2 lists a reference for these key topics and the page numbers on which each is found.

Table 3-2 Key Topics for Chapter 3

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

Virtual Device Context (VDC)
supervisor engine (SUP)
demilitarized zone (DMZ)
erasable programmable logic device (EPLD)
Fabric Extender (FEX)
role-based access control (RBAC)
network interface (NIF)
host interface (HIF)
virtual interface (VIF)
logical interface (LIF)
End-of-Row (EoR)
Top-of-Rack (ToR)
VN-Tag

Chapter 4. Data Center Interconnect

This chapter covers the following exam topics:

Identify Key Differentiators Between DCI and Network Interconnectivity

Cisco Data Center Interconnect (DCI) solutions enable IT organizations to meet two main objectives: business continuity in case of disaster events, and corporate and regulatory compliance to improve data security. DCI solutions in general extend LAN and SAN connectivity between geographically dispersed data centers, which enables you to have data replication, server clustering, and workload mobility. In this chapter, you learn about the latest Cisco innovations in Data Center Extension solutions, and in LAN extension in particular, which is called Overlay Transport Virtualization (OTV).

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 4-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 4-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following Cisco products supports OTV?
a. Nexus 7000 Series
b. ASR 9000 Series
c. ASR 1000 Series
d. Catalyst 6500 Series

2. True or false? For silent hosts, we can set up static MAC-to-IP mappings in OTV.
a. True
b. False

3. Which version of IGMP is used on the OTV join interface?
a. IGMPv1
b. IGMPv2
c. IGMPv3
d. No need to enable IGMP on the OTV join interface

4. How many join interfaces can you create per logical overlay?
a. 1
b. 2
c. 4
d. 8

5. True or false? You need to do no shutdown for the overlay interface after adding it.
a. True
b. False

Foundation Topics

Introduction to Overlay Transport Virtualization

Data Center Interconnect (DCI) solutions have been available for quite some time; they are mainly used to enable the benefit of geographically separated data centers. Layer 2 extensions might be required at different layers in the data center to enable the resiliency and clustering mechanisms offered by the different applications at the web, application, and database layers. Different types of technologies provide Layer 2 extension solutions:

Ethernet over Multiprotocol Label Switching (EoMPLS): EoMPLS can be used to provision point-to-point Layer 2 Ethernet connections between two sites using MPLS as the transport.

Virtual Private LAN Services (VPLS): Similar to EoMPLS, VPLS uses an MPLS network as the underlying transport network. However, instead of point-to-point Ethernet pseudowires, VPLS delivers a virtual multiaccess Ethernet network.

Dark fiber: In some cases, dark fiber might be available to build private optical connections between data centers. Dense wavelength-division multiplexing (DWDM) or coarse wavelength-division multiplexing (CWDM) can increase the number of Layer 2 connections that can be run through the fibers. These technologies increase the total bandwidth over the same number of fibers.

Using these technologies to extend the LAN across multiple data centers creates a series of challenges:

Failure isolation: The extension of Layer 2 domains between multiple data center sites can cause the data centers to share protocols and failures that would normally have been isolated when interconnecting data centers over an IP network. A solution that provides Layer 2 connectivity, yet restricts the reach of the flood domain, is necessary to contain failures and preserve the resiliency achieved by the use of multiple data centers.

Transport infrastructure nature: The nature of the transport between data centers varies depending on the location of the data centers and the availability and cost of services in the

OTV doesn’t require additional configuration to offer multihoming. Finally. There is no need to extend Spanning Tree Protocol (STP) across the transport infrastructure. OTV provides Layer 2 extension between remote data centers by using a concept called MAC address routing. meaning a control plane protocol is used to exchange MAC address reachability information between network devices providing the LAN extension functionality. The loop prevention mechanisms prevent loops on the overlay and prevent loops from the sites that are multihomed to the overlay. following are the minimum requirements to run OTV on each platform: Cisco Nexus 7000 Series and Cisco Nexus 7700 platform: . scale. this results in failure containment within a data center. and failure doesn’t propagate into other data centers and stops at the edge device. An IP-capable transport is the most generalized offering. the use of available bandwidth between data centers must be optimized to obtain the best connectivity at the lowest cost. Complex operations: Layer 2 VPNs can provide extended Layer 2 connectivity across data centers. Therefore. Mechanisms must be provided to prevent loops that might be induced when connecting bridged networks that are multihomed. load balancing. Balancing the load across all available paths while providing resilient connectivity between the data center and the transport network requires added intelligence above and beyond that available in traditional Ethernet switching and Layer 2 VPNs. Bandwidth utilization with replication. Each Ethernet frame is encapsulated into an IP packet and delivered across the transport infrastructure. and there is no need to establish virtual circuits between the data center sites. multihoming of the Layer 2 sites onto the VPN is required. which is independent from the other site even though all sites have a common Layer 2 domain. therefore. Multicast and broadcast traffic should also be replicated optimally to reduce bandwidth consumption. it is built in natively in OTV using the same control plane protocol along with loop prevention mechanisms. and it provides flexibility and enables long-reach connectivity. which typically rely on data plane learning and the use of flooding across the transport infrastructure to learn reachability. This is built-in functionality. Although the features. distributed provisioning. and path diversity: When extending Layer 2 domains across data centers. addition or removal of an edge device does not affect other edge devices on the overlay. Cisco has two main products that currently support OTV: Cisco Nexus 7000 Series switches and Cisco ASR 1000 Series aggregation services routers. This has a huge benefit over traditional data center interconnect solutions. A simple overlay protocol with built-in capability and point-to-cloud provisioning is crucial to reducing the cost of providing this connectivity. Multihoming with end-to-end loop prevention: LAN extension techniques should provide a high degree of resiliency. OTV must be deployed only at specific edge devices. but will usually involve a mix of complex protocols. A cost-effective solution for the interconnection of data centers must be transport agnostic and give the network designer the flexibility to choose any transport between data centers based on business and operational preferences. and convergence times continue to improve. and an operationally intensive hierarchical scaling model. A solution capable of using an IP transport is expected to provide the most flexibility. 
and no extra configuration is needed to make it work.different areas. Bridge Protocol Data Units (BPDU) are not forwarded on the overlay. Each site will have its own spanning-tree domain.

OTV is supported on the following platforms and software releases:

Any M-Series (Cisco Nexus 7000 Series) or F3 (Cisco Nexus 7000 Series or 7700 platform) line card for encapsulation
Cisco NX-OS Release 5.0(3) or later (Cisco Nexus 7000 Series) or Cisco NX-OS Release 6.2(2) or later (Cisco Nexus 7700 platform)
Transport Services license
Cisco ASR 1000 Series: Cisco IOS XE Software Release 3.5 or later, with the Advanced IP Services or Advanced Enterprise Services license

Note
This chapter focuses on OTV implementation using the Nexus 7000 Series.

OTV Terminology
Before you learn how the OTV control plane and data plane work, you must know the OTV terminology, which is shown in Figure 4-1. These terminologies are used throughout the chapter.

Figure 4-1 OTV Terminology

OTV edge device: The edge device performs the OTV functions: it receives the Layer 2 traffic for all VLANs that need to be extended to remote sites and dynamically encapsulates the Ethernet frames into IP packets and sends them across through the transport infrastructure. Each site can have one or more edge devices. OTV configuration is done on the edge device and is completely transparent to the rest of the infrastructure. The OTV edge device can be deployed in many places; however, because the edge device needs to see the Layer 2 traffic for the VLANs that need to be extended, it is recommended to have it at the Layer 3 boundary for the site so it can see all the VLANs. At the time of writing this book, one of the recommended designs is to have a separate VDC on the Nexus 7000 acting as the OTV edge device, with one such VDC created per Nexus 7000 to have redundancy.

OTV internal interface: The Layer 2 interface on the edge device that connects to the VLANs to be extended is called the internal interface. The internal interface is usually configured as an access or trunk interface and receives Layer 2 traffic. The internal interface doesn't need any specific OTV configuration; typical Layer 2 functions, such as local switching, spanning tree, and flooding, are performed on the internal interfaces.

OTV join interface: The OTV join interface is a Layer 3 interface, and it must be defined by the user. It can be a physical interface or subinterface, or it can be a logical interface, such as a Layer 3 port channel or a Layer 3 port-channel subinterface. You can have only one join interface associated with a given overlay, and you can have one join interface shared across multiple overlays. The join interface "joins" the overlay network and discovers the other overlay remote edge devices. The IP address of the join interface is used to advertise reachability of MAC addresses present in the site.

OTV overlay interface: The OTV overlay interface is a logical multiaccess and multicast-capable interface. All the OTV configuration is applied to the overlay interface. When the OTV edge device receives a Layer 2 frame that needs to be delivered to a remote data center site, the frame is logically forwarded to the overlay interface, where it gets encapsulated in an IP unicast or multicast packet and sent to the remote data center.

Transport network: This is the network that connects the OTV sites. It can be owned by the customer or by the service provider, or both; the only requirement is to allow IP reachability between the edge devices.

Overlay network: This is a logical network that interconnects OTV edge devices.

Site: An OTV site is the same as a data center, which has VLANs that must be extended to another site, meaning another data center.

Site VLAN: OTV edge devices send hellos on the site VLAN to detect other OTV edge devices in the same site. It is recommended to use a dedicated VLAN as a site VLAN that is not extended across the overlay.

Authoritative edge device: To provide loop-free multihoming, OTV elects a designated forwarding edge device per site for each VLAN. The forwarder is known as the Authoritative Edge Device (AED). The edge devices talk to each other on the internal interfaces to elect the AED, using the site VLAN to elect the authoritative edge device for the VLANs. The AED election is part of the OTV control plane and doesn't need additional configuration.

Prior to NX-OS release 5.2(1), OTV used the site VLAN within a site to detect and establish adjacencies with other OTV edge devices. However, the mechanism of electing AEDs that is based only on the communication established on the site VLAN can create situations (resulting from connectivity issues or misconfiguration) where OTV edge devices belonging to the same site fail to detect one another, thereby ending up in an active/active mode (for the same data VLAN). This could ultimately result in the creation of a loop scenario. To address this concern, starting with the 5.2(1) NX-OS release, each OTV device maintains dual adjacencies with other OTV edge devices belonging to the same DC site. OTV edge devices continue to use the site VLAN to discover and establish adjacency with other OTV edge devices in a site.
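To see how these building blocks fit together, the following minimal sketch shows the typical shape of an OTV edge device configuration on the Nexus 7000. It is assembled from the complete examples that appear later in this chapter (Examples 4-1 through 4-5); the interface numbers, VLAN ranges, and group address are just the sample values used there:

feature otv
otv site-identifier 0x1
otv site-vlan 15
interface ethernet 2/2
  description OTV join interface (Layer 3)
  ip address 172.26.255.98/30
  ip igmp version 3
  no shutdown
interface ethernet 2/10
  description OTV internal interface (Layer 2 trunk)
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 5-10, 15
  no shutdown
interface overlay 1
  otv join-interface ethernet 2/2
  otv control-group 239.1.1.1
  otv extend-vlan 5-10
  no shutdown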

This adjacency is called site adjacency. In addition to the site adjacency, starting from NX-OS 5.2(1), OTV devices also maintain a second adjacency, named overlay adjacency, established via the join interfaces across the Layer 3 network domain. To enable this new functionality, it is now mandatory to configure each OTV device also with a site-identifier value. All edge devices that are in the same site must be configured with the same site identifier. This site identifier is advertised in IS-IS hello packets sent over both the overlay and on the site VLAN. The combination of the site identifier and the IS-IS system ID is used to identify a neighbor edge device in the same site.

OTV Control Plane
One of the key differentiators of OTV is the use of the control plane protocol, which runs between OTV edge devices. The control protocol function is to advertise MAC address reachability information instead of using data plane learning, which relies on flooding of Layer 2 traffic across the transport infrastructure. To be able to exchange MAC reachability information, OTV edge devices must become adjacent to each other. This can happen in two ways, depending on the nature of the transport infrastructure:

Multicast Enabled Transport: If the transport infrastructure is enabled for multicast, we use a multicast group to exchange control protocol messages between the OTV edge devices.

Unicast Enabled Transport: If the transport infrastructure is not enabled for multicast, you can use one or more OTV edge devices as an adjacency server. All OTV edge devices belonging to a specific overlay register with the available list of the adjacency servers.

Multicast Enabled Transport Infrastructure
When the transport infrastructure is enabled for multicast, all OTV edge devices should be configured to join a specific Any Source Multicast (ASM) group, which is used to carry control protocol exchanges. You need only to specify the ASM group to be used and associate it with a given overlay interface. If the transport infrastructure is owned by the service provider, the enterprise must negotiate the use of this ASM group with the service provider. The following steps explain the sequence, which leads to the discovery of all OTV edge devices belonging to a specific overlay:

1. Each OTV edge device sends an IGMP report to join the specific ASM group. The edge devices join the group as hosts, leveraging the join interface. This happens without enabling PIM on this interface; only IGMPv3 is enabled on the interface using the ip igmp version 3 command. The entire edge device will be a source and a receiver on that ASM group.

2. The OTV control protocol running on each OTV edge device generates hello packets that need to be sent to all other OTV edge devices. This is required to communicate its existence and to trigger the establishment of control plane adjacencies within the same overlay.

3. The OTV hello messages are sent across the logical overlay to reach all OTV remote edge devices. For this to happen, the original frames must be OTV encapsulated, adding an external IP header. The source IP address in the external header is set to the IP address of the join interface of the edge device, whereas the destination is the multicast address of the ASM group dedicated to carry the control protocol. The resulting multicast frame is then sent to the join interface toward the Layer 3 network domain.

4. The multicast frames are carried across the transport and optimally replicated to reach all the OTV edge devices that joined that multicast group.

5. The receiving OTV edge devices decapsulate the packets.

6. The hellos are passed to the control protocol process.

The use of the ASM group to transport the hello messages allows the edge devices to discover each other as if they were deployed on a shared LAN segment. The same process occurs in the opposite direction, and the end result is the creation of OTV control protocol adjacencies between all edge devices.

Note
To enable OTV functionality, you must use the feature otv command; the device doesn't display any OTV commands until you enable the feature.

After enabling the OTV overlay interface, the OTV control plane protocol is enabled; it doesn't require a specific configuration command to do it. The routing protocol used to implement the OTV control plane is IS-IS. It was selected because it is a standard-based protocol, originally designed with the capability of carrying MAC address information in the TLV. For security, it is possible to leverage the IS-IS HMAC-MD5 authentication feature to add an HMAC-MD5 digest to each OTV control protocol message. This prevents unauthorized routing messages from being injected into the network routing domain.

After the OTV devices discover each other, the next step is to exchange MAC address reachability information:

1. The OTV edge device learns new MAC addresses on its internal interface using traditional data plane learning.

2. An OTV update message is created containing information for the MAC address learned through the internal interface. The message is OTV encapsulated and sent into the OTV transport infrastructure. The source IP address in the external header is set to the IP address of the join interface of the OTV edge device, and the IP destination address of the packet in the outer header is the multicast group used for OTV control protocol exchanges.

3. The OTV update is replicated in the transport infrastructure and delivered to all the remote OTV edge devices on the same overlay. The OTV edge devices receiving the update decapsulate the message and deliver it to the OTV control process.

4. The MAC address reachability information is imported to the MAC address tables of the edge devices. The only difference with the traditional MAC address table entry is that for the MAC address in the remote data center, instead of pointing to a physical interface, it will point to the join interface of the remote OTV edge device.
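For the IS-IS HMAC-MD5 authentication mentioned earlier, the configuration is applied to the overlay interface and references a key chain. The following is a minimal sketch; the key-chain name and key string are placeholders, and the exact command syntax should be verified against the NX-OS release in use:

key chain OTV-KEYS
  key 0
    key-string MySharedSecret
interface overlay 1
  otv isis authentication-type md5
  otv isis authentication key-chain OTV-KEYS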

Unicast-Enabled Transport Infrastructure
OTV can be deployed with a unicast-only transport infrastructure. The unicast deployment of OTV is used when a multicast-enabled transport infrastructure is not an option. The OTV control plane over a unicast transport infrastructure works in the same way the multicast transport works. The only difference is that each OTV edge device must create a copy of the control plane packet and send it to each remote OTV edge device. To be able to send control protocol messages to all the OTV edge devices, each OTV edge device needs to know the list of the other devices that are part of the same overlay. This is achieved by having one or more OTV edge devices as an adjacency server. Each OTV edge device that wants to join a specific overlay must register with the adjacency server. When the adjacency server receives hello messages from the remote OTV edge devices, it builds the list of all the OTV edge devices that are part of the same overlay. The list is called a unicast-replication list, and it is periodically sent to all the listed OTV edge devices.

Data Plane for Unicast Traffic
After all OTV edge devices that are part of an overlay establish adjacency and MAC address reachability information is exchanged, traffic can start flowing across the overlay. There are two types of Layer 2 communication: intrasite (traffic within the same site) and intersite (traffic between two different sites on the same overlay). The following process details the communication:

Intrasite unicast communication: Intrasite means that both servers that need to talk to each other reside in the same data center. As shown in Figure 4-2, MAC 1 needs to talk to MAC 2. They both belong to Site A, and they are in the same VLAN. When a Layer 2 frame is received at the aggregation device, which is the OTV edge device in this case, a normal Layer 2 lookup determines how to reach MAC 2. The information in the MAC table points to Eth1, and the frame is delivered to the destination.

Figure 4-2 Intrasite Layer 2 Unicast Communication

Intersite unicast communication: Intersite means that the servers that need to talk to each other reside in different data centers but belong to the same overlay. As shown in Figure 4-3, MAC 1 needs to talk to MAC 3. MAC 1 belongs to Site A, and MAC 3 belongs to Site B. The following procedure details the process for MAC 1 to establish communication with MAC 3:

1. The Layer 2 frame is received at the aggregation device, which is the OTV edge device at Site A in this case, and a normal Layer 2 lookup happens. This time, MAC 3 in the MAC table in Site A points to the IP address of the remote OTV edge device in Site B.

2. The OTV edge device in Site A encapsulates the Layer 2 frame: the source IP of the outer header is the IP address of the join interface of the OTV edge device in Site A, and the destination IP address in the outer header is the IP address of the join interface of the remote OTV edge device in Site B. The OTV edge device removes the CRC and the 802.1Q fields from the original Layer 2 frame and adds an OTV shim (containing the VLAN and the overlay information) and an external IP header. After that, a normal IP unicast packet is sent across the transport infrastructure.

3. When the remote OTV edge device receives the IP packet, it decapsulates it and exposes the original Layer 2 packet.

4. The OTV edge device at Site B performs another Layer 2 lookup on the original Ethernet frame. This time, MAC 3 will point to a local physical interface, and the frame is delivered to the MAC 3 destination.

Figure 4-3 Intersite Layer 2 Unicast Communication

Figure 4-4 shows the OTV data plane encapsulation on the original Ethernet frame. The OTV encapsulation increases the overall MTU size by 42 bytes.

Figure 4-4 OTV Data Plane Encapsulation

All OTV control and data plane packets originate from an OTV edge device with the Don't Fragment (DF) bit set. In a Layer 2 domain, the assumption is that all intermediate LAN segments support at least the configured interface MTU size of the host. This means that mechanisms like Path MTU Discovery (PMTUD) are not an option in this case. That means you need to accommodate the additional 42 bytes in the MTU size along the path between the source and destination OTV edge devices. There are two ways to overcome the issue of the increased MTU size:

Configure a larger MTU on all interfaces where traffic will be encapsulated, including the join interface and any links between the data centers that are in the OTV transport.

Lower the MTU on all servers so that the total packet size does not exceed the MTU of the interfaces where traffic is encapsulated.

Note
Nexus 7000 doesn't support fragmentation and reassembly capabilities.
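As a sketch of the first option, the interface MTU can be raised on the join interface and on every Layer 3 link in the OTV transport path. The 9216-byte value below is a common jumbo-frame choice rather than a requirement, and the interface number is just the example used elsewhere in this chapter:

OTV-VDC-A(config)# interface ethernet 2/2
OTV-VDC-A(config-if)# mtu 9216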

Data Plane for Multicast Traffic
You might be faced with situations in which there is a need to establish multicast communication between remote sites. This can happen when the multicast sources are deployed in one site and the multicast receivers are in another remote site. In the OTV control plane section, we distinguished between two scenarios, where the transport infrastructure is multicast enabled or not. When we have a multicast-enabled transport, we use a set of Source Specific Multicast (SSM) groups in the transport to carry the Layer 2 multicast streams. These groups are different from the ASM groups used for the OTV control plane. The mapping happens between the site multicast groups and the SSM group range in the transport infrastructure.

Step 1 starts when a multicast source is activated in a given data center site, which is explained in the following steps and shown in Figure 4-5:

1. A multicast source is activated on the west side and starts streaming traffic to the multicast group Gs. In this case, the multicast traffic needs to flow across the OTV overlay.

2. The local OTV edge device receives the first multicast frame and creates a mapping between the group Gs and a specific SSM group Gd available in the transport infrastructure. The range of SSM groups to be used to carry Layer 2 multicast data streams is specified during the configuration of the overlay interface.

3. The OTV control protocol is used to communicate the Gs-to-Gd mapping to all remote OTV edge devices. The mapping information specifies the VLAN (VLAN A) to which the multicast source belongs and the IP address of the OTV edge device that created the mapping.

Figure 4-5 Mapping of the Site Multicast Group to the SSM Range

Step 2 starts when a receiver in the remote site, deployed in the same VLAN as the multicast source, decides to join the multicast group Gs, which is explained in the following steps and shown in Figure 4-6:

1. The client sends an IGMP report inside the east site to join the Gs group.

2. The OTV edge device snoops the IGMP message and realizes there is an active receiver in the site interested in group Gs, belonging to VLAN A.

3. The OTV device sends an OTV control protocol message (a GM-Update) to all the remote edge devices to communicate this information.

4. The edge device in the east side finds the mapping information previously received from the OTV edge device in the west side, identified by the IP address IP A. The east edge device, in turn, sends an IGMPv3 report to the transport to join the (IP A, Gd) SSM group. This allows building an SSM tree (group Gd) across the transport infrastructure that can be used to deliver the multicast stream Gs.

5. The remote edge device in the west side receives the GM-Update and updates its Outgoing Interface List (OIL) with the information that group Gs needs to be delivered across the OTV overlay.
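The SSM range mentioned in step 2 of the first sequence is defined on the overlay interface with the otv data-group command, shown here with the same range used later in this chapter's multicast example; the transport must also permit that SSM range, as the ip pim ssm range line in Example 4-4 does:

OTV-VDC-A(config)# interface overlay 1
OTV-VDC-A(config-if)# otv data-group 232.1.1.0/26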

Figure 4-6 Receiver Joining Multicast Group Gs

When the OTV transport infrastructure is not multicast enabled, Layer 2 multicast traffic can be sent across the overlay using head-end replication performed by the OTV edge device deployed in the DC where the multicast source is enabled. IGMP snooping is used to ensure that Layer 2 multicast traffic is sent to the remote DC only when an active receiver exists. This helps to reduce the amount of head-end replication.

Broadcast traffic will be handled the same way as the OTV hello messages. When we have a multicast-enabled transport, we leverage the same ASM multicast groups in the transport already used for the OTV control protocol. For a unicast-only transport infrastructure, head-end replication performed on the OTV device in the site originating the broadcast would ensure traffic delivery to all the remote OTV edge devices that are part of the unicast-only list.

Note
Cisco NX-OS 6.2(2) introduces a new optional feature called dedicated broadcast group. This feature enables the administrator to configure a dedicated broadcast group to be used by OTV in the multicast core network. By separating the two multicast addresses, you can now enforce different QoS treatments for data traffic and control traffic.

Failure Isolation
Data center interconnect solutions should provide a Layer 2 extension between multiple data center sites without giving up resilience and stability. OTV achieves this goal by providing four main functions: Spanning Tree Protocol (STP) isolation, unknown unicast traffic suppression, ARP suppression, and broadcast policy control. This section explains each of these functions.

STP Isolation
OTV doesn't forward STP BPDUs across the overlay. This is native in OTV; you don't need to add any configuration to achieve it. This control plane feature makes each site's STP domain independent. The STP root placement configuration and parameters are decided for each site separately.
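For the dedicated broadcast group described in the Note above, the configuration is a single additional command on the overlay interface. The command name and group address below are assumptions based on the feature description; verify them against the NX-OS 6.2(2) configuration guide for your release:

OTV-VDC-A(config)# interface overlay 1
OTV-VDC-A(config-if)# otv control-group 239.1.1.1
OTV-VDC-A(config-if)# otv broadcast-group 239.1.1.2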

Unknown Unicast Handling
The OTV control protocol advertises MAC address reachability information between OTV edge devices. If the MAC address is in a remote data center, the OTV control protocol will map the MAC address to the destination IP address of the join interface of the remote data center. When the OTV edge device receives a frame destined to a MAC address that is not in the MAC address table, the Layer 2 traffic is flooded out the internal interfaces. This is the normal behavior of a classical Ethernet interface; however, that doesn't happen via the overlay. By default on the Nexus 7000, all unknown unicast frames will not be forwarded across the logical overlay. It is assumed that no silent hosts are available, and eventually the OTV edge device will learn the MAC address of the host and will advertise it to the other OTV edge devices on the overlay.

This feature is important, for example, in the case of Microsoft Network Load Balancing Services (NLBS), which requires the flooding of Layer 2 traffic to function. As a workaround, you can statically add the MAC address of the silent host in the OTV edge device MAC address table. Starting with NX-OS 6.2(2), selective unknown unicast flooding based on the MAC address is supported; the command otv flood mac mac-address vlan vlan-id must be added.

ARP Optimization
ARP optimization is one of the native features in OTV that helps to reduce the amount of traffic sent across the transport infrastructure. When a device in one of the data centers, such as Site A, sends out an ARP request, which is a broadcast message, this request is sent across the OTV overlay to Site B. The ARP request eventually reaches the destination machine, which in turn creates an ARP reply message and sends it back to the originating host. The OTV edge device in Site A can snoop the ARP reply and cache the contained mapping information. When a subsequent ARP request to the same IP address originates in Site A, it will not be forwarded across the OTV overlay but will be locally answered by the OTV edge device. This helps to reduce the amount of traffic sent across the transport infrastructure.

Note
For the ARP caching, consider the interaction between ARP and CAM table aging timers, because incorrect settings can lead to traffic black holing. The ARP aging timer on the OTV edge devices should always be set lower than the CAM table aging timer; otherwise, traffic black holing can occur. By default on the Nexus 7000, the OTV ARP aging timer is 480 seconds and the MAC aging timer is 1800 seconds. If the host's default gateway is placed on a device different from the Nexus 7000, you should set the ARP aging timer of that device to a value lower than its MAC aging timer.
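Returning to the silent-host workaround described under Unknown Unicast Handling, the static entry uses the standard NX-OS MAC table command. The MAC address, VLAN, and interface below are placeholders:

OTV-VDC-A(config)# mac address-table static 0000.1111.2222 vlan 10 interface ethernet 2/10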

OTV Multihoming
Multihoming is a native function in OTV where two or more OTV edge devices provide a LAN extension for a given site. Having more than one OTV edge device at a given site can lead to an end-to-end loop, because OTV doesn't send spanning tree BPDUs across the overlay. To be able to support multihoming, and to avoid creating an end-to-end loop, an Authoritative Edge Device (AED) concept is introduced. The AED role is elected on a per-VLAN basis at a given site. OTV uses a VLAN called the site VLAN within a site to detect adjacency with other OTV edge devices. In addition to the site adjacency, OTV maintains a second adjacency, named overlay adjacency, which happens via the OTV join interface across the Layer 3 network domain. To enable this functionality, the OTV device must be configured with a site-identifier value. All OTV edge devices in the same site are configured with the same site-identifier value. The site identifier is advertised through IS-IS hello packets. The combination of the system ID and the site identifier is used to identify a neighbor edge device belonging to the same site.

First Hop Redundancy Protocol Isolation
Another capability introduced by OTV is to filter First Hop Redundancy Protocol (FHRP) messages, such as HSRP and VRRP, across the overlay. Because the same VLANs exist in multiple sites, the same default gateway is available in each data center, and it is characterized by the same virtual IP and virtual MAC addresses. If we don't filter the hello messages between the sites, it will lead to the election of a single default gateway for a given VLAN, which will lead to a suboptimal path for the outbound traffic. It is important that you enable the filtering of FHRP messages across the overlay. This is necessary to localize the gateway at a given site and optimize the outbound traffic from that site. Because this allows the use of the same FHRP configuration in different sites, the outbound traffic will be able to follow the optimal and shortest path, always using the local default gateway.

OTV Configuration Example with Multicast Transport
This configuration example addresses OTV deployment at the aggregation layer with a multicast transport infrastructure. Examples 4-1 through 4-5 use a dedicated VDC to do the OTV function. It is assumed the VDC is already created, so you will not see steps to create the Nexus 7000 VDC. You will be using VLANs 5 to 10 as the VLANs to be extended and VLAN 15 as the site VLAN. Figure 4-7 shows the topology used.

Figure 4-7 OTV Topology for Multicast Transport

Example 4-1 Sample Configuration for How to Enable OTV VDC VLANs

OTV-VDC-A(config)# vlan 5-10, 15

Example 4-2 Sample Configuration of the Join Interface

OTV-VDC-A(config)# interface ethernet 2/2
OTV-VDC-A(config-if)# description [ OTV Join-Interface ]
OTV-VDC-A(config-if)# ip address 172.26.255.98/30
OTV-VDC-A(config-if)# ip igmp version 3
OTV-VDC-A(config-if)# no shutdown

Example 4-3 Sample Configuration of the Point-to-Point Interface on the Aggregation Layer Facing the Join Interface

AGG-VDC-A(config)# interface ethernet 2/1
AGG-VDC-A(config-if)# description [ Connected to OTV Join-Interface ]
AGG-VDC-A(config-if)# ip address 172.26.255.99/30
AGG-VDC-A(config-if)# ip pim sparse-mode
AGG-VDC-A(config-if)# ip igmp version 3
AGG-VDC-A(config-if)# no shutdown

Example 4-4 Sample PIM Configuration on the Aggregation VDC

AGG-VDC-A# show running pim
feature pim
interface ethernet 2/1
  ip pim sparse-mode
interface ethernet 1/10
  ip pim sparse-mode
interface ethernet 1/20
  ip pim sparse-mode
ip pim rp-address 172.26.255.101
ip pim ssm range 232.0.0.0/8

Example 4-5 Sample Configuration of the Internal Interfaces

interface ethernet 2/10
  description [ OTV Internal Interface ]
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 5-10, 15
  no shutdown

Before creating and configuring the overlay interface, certain configurations must be done. The following shows a sample configuration for the overlay interface for a multicast-enabled transport infrastructure:

1. Enable the OTV feature; otherwise, you will not see any OTV commands:

OTV-VDC-A(config)# feature otv

2. Configure the OTV site identifier. (It should be the same for all the OTV edge devices belonging to the same site.)

OTV-VDC-A(config)# otv site-identifier 0x1

3. Configure the OTV site VLAN:

OTV-VDC-A(config)# otv site-vlan 15

4. Create and configure the overlay interface:

OTV-VDC-A(config)# interface overlay 1
OTV-VDC-A(config-if)# otv join-interface ethernet 2/2

5. Configure the OTV control group:

OTV-VDC-A(config-if)# otv control-group 239.1.1.1

6. Configure the OTV data group:

OTV-VDC-A(config-if)# otv data-group 232.1.1.0/26

7. Configure the VLAN range to be extended across the overlay:

OTV-VDC-A(config-if)# otv extend-vlan 5-10

8. Create a static default route to point out to the join interface. This is required to allow communication from the OTV VDC to the Layer 3 network domain:

OTV-VDC-A(config)# ip route 0.0.0.0/0 172.26.255.99

9. You must bring up the overlay interface; it is shut down by default:

OTV-VDC-A(config-if)# no shutdown

Example 4-6 Verify That the OTV Edge Device Joined the Overlay

OTV-VDC-A# sh otv overlay 1

OTV Overlay Information
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Control group       : 239.1.1.1
 Data group range(s) : 232.1.1.0/26
 Join interface(s)   : Eth2/2 (172.26.255.98)
 Site vlan           : 15 (up)

OTV-VDC-A# show otv adjacency
Overlay Adjacency database
Overlay-Interface Overlay1 :
Hostname    System-ID       Dest Addr        Up Time   Adj-State
OTV-VDC-C   0022.5579.7c42  172.26.255.90    1w0d      UP
OTV-VDC-D   0022.5579.36c2  172.26.255.94    1w0d      UP

Example 4-7 Check the MAC Addresses Learned Through the Overlay

OTV-VDC-A# show otv route

OTV Unicast MAC Routing Table For Overlay1

VLAN  MAC-Address     Metric  Uptime    Owner    Next-hop(s)
----  --------------  ------  --------  -------  -------------
10    0000.0c07.ac64  1       00:00:55  site     ethernet 2/10
10    001b.54c2.3dc1  1       00:00:55  overlay  OTV-VDC-C
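In addition to the two verification commands just shown, the show otv vlan command can be used to check which extended VLANs are active and which edge device is the AED for each of them. The output format varies by release, so only the command itself is shown here:

OTV-VDC-A# show otv vlan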

OTV Configuration Example with Unicast Transport
Examples 4-8 through 4-13 show sample configurations for an OTV deployment in a unicast-only transport. The first step is to define the adjacency server role on one of the OTV edge devices; it is always recommended to configure a pair, with a secondary adjacency server in another site, for redundancy. The second step is to do the required configuration on the OTV edge device not acting as an adjacency server, meaning acting as a client. The client devices are configured with the address of the adjacency server. When a new site is added, only the OTV edge devices for the new site need to be configured with adjacency server addresses. Figure 4-8 shows the topology that will be used in Examples 4-8 through 4-13.

Figure 4-8 OTV Topology for Unicast Transport

Example 4-8 Primary Adjacency Server Configuration

feature otv
otv site-identifier 0x1
otv site-vlan 15
interface Overlay1
  otv join-interface e2/2
  otv adjacency-server unicast-only
  otv extend-vlan 5-10

Example 4-9 Secondary Adjacency Server Configuration

feature otv
otv site-identifier 0x2
otv site-vlan 15
interface Overlay1
  otv join-interface e1/2
  otv adjacency-server unicast-only
  otv use-adjacency-server 10.1.1.1 unicast-only
  otv extend-vlan 5-10

Example 4-10 Client OTV Edge Device Configuration

feature otv
otv site-identifier 0x3
otv site-vlan 15
interface Overlay1
  otv join-interface e1/1
  otv use-adjacency-server 10.1.1.1 10.2.2.2 unicast-only
  otv extend-vlan 5-10

Example 4-11 Identifying the Role of the Primary Adjacency Server

Primary_AS# show otv overlay 1

OTV Overlay Information
Site Identifier 0000.0000.0001

Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E2/2 (10.1.1.1)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : Yes
 Adjacency Server(s) : [None] / [None]

Example 4-12 Identifying the Role of the Secondary Adjacency Server

Secondary_AS# show otv overlay 1

OTV Overlay Information
Site Identifier 0000.0000.0002

Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E1/2 (10.2.2.2)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : Yes
 Adjacency Server(s) : 10.1.1.1 / [None]

Example 4-13 Identifying the Role of the Client OTV Edge Device

Generic_OTV# show otv overlay 1

OTV Overlay Information
Site Identifier 0000.0000.0003

Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E1/1 (10.3.3.3)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : No
 Adjacency Server(s) : 10.1.1.1 / 10.2.2.2
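To complete the configuration picture, the FHRP isolation discussed earlier in this chapter is not shown in the examples above. One commonly documented approach is a VLAN access map on the OTV VDC that drops HSRP hellos (UDP port 1985 to the HSRP group addresses) on the extended VLANs. The ACL name, map name, addresses, and VLAN list below are illustrative assumptions; verify the exact filtering method, including any MAC-route filtering, for your NX-OS release:

ip access-list HSRP_HELLO
  permit udp any 224.0.0.2/32 eq 1985
  permit udp any 224.0.0.102/32 eq 1985
vlan access-map FHRP_LOCAL 10
  match ip address HSRP_HELLO
  action drop
vlan access-map FHRP_LOCAL 20
  action forward
vlan filter FHRP_LOCAL vlan-list 5-10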

Summary
In this chapter, we went through the fundamentals of OTV and how it can be used as a data center interconnect solution. When it comes to Layer 2 extension solutions, OTV is the latest innovation. OTV uses a concept called MAC address routing. It provides native multihoming and end-to-end loop prevention, and is simple to configure; with a few command lines you can extend Layer 2 between multiple sites. You can run OTV over any transport infrastructure; the only requirement is IP connectivity between the sites.

Exam Preparation Tasks

Review All Key Topics
Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 4-2 lists a reference of these key topics and the page numbers on which each is found.

Table 4-2 Key Topics for Chapter 4

Define Key Terms
Define the following key terms from this chapter, and check your answers in the glossary:

overlay transport virtualization (OTV)
Data Center Interconnect (DCI)
Ethernet over Multiprotocol Label Switching (EoMPLS)
Multiprotocol Label Switching (MPLS)
Virtual Private LAN Services (VPLS)
coarse wavelength-division multiplexing (CWDM)
Spanning Tree Protocol (STP)
Authoritative Edge Device (AED)
virtual LAN (VLAN)
bridge protocol data unit (BPDU)
Media Access Control (MAC)
Address Resolution Protocol (ARP)
Virtual Router Redundancy Protocol (VRRP)
First Hop Redundancy Protocol (FHRP)
Hot Standby Router Protocol (HSRP)
Internet Group Management Protocol (IGMP)
Protocol Independent Multicast (PIM)

Chapter 5. Management and Monitoring of Cisco Nexus Devices

This chapter covers the following exam topics:

Control Plane, Data Plane, and Management Plane
Initial Setup of Nexus Switches
Nexus Device Configuration and Verification

Management and monitoring of network devices is an important element of data center operations. This includes initial configuration of devices and ongoing maintenance of all the components. The operational efficiency is an important business priority because it helps in reducing the operational expenditure (OPEX) of the data center. An efficient and rich management and monitoring interface enables a network operations team to effectively operate and maintain the data center network.

The Cisco Nexus platform provides a rich set of management and monitoring mechanisms. This chapter provides an overview of the operational planes of the Nexus platform. It explains the functions performed by each plane and the key features of the data plane, control plane, and management plane. This chapter also identifies the mechanisms available in the Nexus platform to protect the control plane of the switch. It also provides an overview of out-of-band and in-band management interfaces. The Nexus platform provides several methods for device configuration and management; these methods are discussed with some important commands for initial setup, configuration, and verification.

"Do I Know This Already?" Quiz
The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 5-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 5-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following operational planes is responsible for maintaining routing and switching tables of a multilayer switch?
a. Control plane
b. Data plane
c. Management plane
d. User plane

2. Which of the following operational planes of a switch is responsible for configuring and monitoring the switch?
a. Control plane
b. Data plane
c. Management plane
d. User plane

3. What is the function of the data plane in a switch?
a. It forwards user traffic.
b. It maintains the routing table.
c. It maintains the mac-address-table.
d. It stores user data on the switch.

4. Select the protocols that belong to the control plane of the switch.
a. SNMP
b. NETCONF
c. STP
d. LACP

5. Select the protocols that belong to the management plane of the switch.
a. SNMP
b. NETCONF
c. STP
d. LACP

6. When you execute the write erase command, it deletes the startup configuration of the switch. Which of the following parameters are not deleted by this command?
a. Switch name
b. mgmt0 IP address and subnet mask
c. Admin user password
d. Boot variable

7. The initial setup script of a Nexus switch performs the startup configuration of a system. Which of the following parameters are configurable by the initial setup script?
a. Switch name
b. IPv4 configuration on the mgmt0 interface
c. Admin user password
d. IP address of loopback 0
e. Default route in the global routing table

8. To upgrade Nexus switch software, which of the following methods can be used?
a. From the CLI using the install all command
b. From a web browser using the HTTP protocol
c. Using DCNM software
d. From an NMS server using SNMP

9. When using ISSU to upgrade the Nexus 7000 switch software, which of the following components can be upgraded? (Choose all that apply.)
a. Kickstart image
b. Supervisor BIOS
c. System image
d. FEX image
e. I/O module BIOS and image

10. Identify the conditions in which In Service Software Upgrade (ISSU) cannot be performed on a Nexus 5000 switch.
a. Link Aggregation Control Protocol (LACP) fast timers are configured.
b. An application on the switch has Cisco Fabric Services (CFS) lock enabled.
c. vPC is configured on the switch.
d. FEX is connected to the switch.

11. You are configuring a Cisco Nexus switch and would like to create a VLAN interface for VLAN 10. However, when you type the interface vlan 10 command, you get an error message. What could be the possible reason of this error?
a. The command syntax is incorrect.
b. First, you must create VLAN 10 on the switch.
c. The appropriate software license is missing.
d. First, enable the feature interface-vlan, and then create the SVI.

12. Which of the following are examples of logical interfaces on a Nexus switch?
a. mgmt0
b. E1/1
c. Loopback
d. Port channel
e. SVI

13. You want to check the current configuration of a Nexus switch, with all the default configuration parameters. Which of the following commands is used to perform this task?
a. show running-config default
b. show startup-config
c. show running-config all
d. show running-config

14. You want to check the last 10 entries of the log file. Which command do you use to perform this task?
a. show logging
b. show logging last 10
c. show last 10 logging
d. show logging 10

15. You want to see the list of interfaces that are currently in up state. Which command will show you a brief list of interfaces in the up state?
a. show interface up brief
b. show interface brief | include up
c. show interface status
d. show interface connected

Foundation Topics

Operational Planes of a Nexus Switch
The data center network is built using multiple network nodes or devices. These network nodes are switches and routers, connected to each other and configured in different topologies. Typically, these network devices talk to each other using different control protocols to find the best network path. The purpose of building such network controls is to provide a reliable data transport for network endpoints; these network endpoints are servers and workstations. When a network endpoint sends a packet (data), it is switched by these network devices based on the best path they learned using the control protocols. In a large network with hundreds of these devices, it is important to build a management framework to monitor and manage these devices. This simple explanation of a network node leads to the three operational components of a network node: the data plane, the control plane, and the management plane. Figure 5-1 shows the operational planes of a Nexus switch in a data center.

Figure 5-1 Operational Plane of Nexus

Data Plane
The data plane is also called the forwarding plane. This component of a network device receives endpoint traffic from a network interface and decides what to do with this traffic. It can process incoming traffic by looking at the destination address in its forwarding table and deciding on an appropriate action. This action could be to forward traffic to an outgoing interface, drop the traffic, or send it to the control plane for further processing. In a typical Layer 2 switch, the forwarding decisions are based on the destination MAC address of data packets.

The earlier method requires the packet to be stored in the memory before a forwarding decision can be made. The advancements in integrated circuit technologies allowed network device vendors to move the Layer 2 forwarding decision from Complex Instruction Set Computing (CISC) and Reduced Instruction Set Computing (RISC) processors to Application-Specific Integrated Circuits (ASIC) and Field-Programmable Gate Arrays (FPGA); with ASICs and FPGAs, packets can be switched without storing them in the memory. This approach is helpful in reducing the packet-handling time within the network device, thereby improving the data plane latency of network switches to tens of microseconds.

Store-and-Forward Switching
In store-and-forward switches, data packets received from an incoming interface must be completely stored in switch memory or ASIC before being sent to an outgoing interface. The switch then performs the forwarding process. The store-and-forward switch architecture is simple because it is easier to forward a stored packet. Following are a few characteristics of store-and-forward switching:

Error Checking: Store-and-forward switches receive a frame completely and analyze it using the frame-check-sequence (FCS) calculation to make sure the frame is free from data-link errors.

Buffering: The process of storing and then forwarding enables the switch to handle various networking conditions.

Access Control List: Because a store-and-forward switch stores the entire packet, the switch can examine the necessary portions to permit or deny that frame.

Cut-Through Switching
In the cut-through switching mode, switches start forwarding a frame before the frame has been completely received. The forwarding decision is made as soon as the switch sees the portion of the frame with the destination address; the switch does not have to wait for the rest of the packet to make its forwarding decision. This technique reduces the network latency. Following are a few characteristics of cut-through switching:

Low Latency Switching: A cut-through switch can make a forwarding decision as soon as it has looked up the DMAC address of the data packet.

Invalid Packets: Because a cut-through switch starts forwarding the packet before receiving it completely and making sure it is valid, packets with errors are forwarded to other segments of the network. In this switching mode, cut-through switching only flags but does not drop invalid packets.

Egress Port Congestion: If a cut-through forwarding decision is made, but the egress port is busy transmitting a frame coming in from another interface, the switch needs to buffer this packet. In this case, the frame is not forwarded in a cut-through fashion.

Nexus 5500 Data Plane Architecture
The data plane architecture of Nexus switches varies based on the hardware platform; however, the basic principles of data plane operations are the same. The Cisco Nexus 5500 switch data plane architecture is primarily based on two custom-built ASICs developed by Cisco:

Unified Port Controller (UPC): It provides data plane packet processing on ingress and egress interfaces. Each UPC manages eight ports, and each port has a dedicated data path. Each data path connects to the Unified Crossbar Fabric (UCF) via a dedicated fabric interface at 12 Gbps; for a 10 Gbps port this provides a 20 percent over-speed rate, which helps ensure line-rate throughput regardless of the internal packet overhead imposed by the ASICs. The ingress buffering process provides the flexibility to support any mix of Ethernet speeds.

Unified Crossbar Fabric (UCF): It provides a switching fabric by cross-connecting the UPCs. It is a single-stage, high-performance, nonblocking crossbar with an integrated scheduler. The UCF is always used to schedule and switch packets between the ports of the UPCs. The scheduler coordinates the use of the crossbar for a contention-free match between input and output pairs. In Nexus 5500, an enhanced iSLIP algorithm is used for scheduling, which ensures high throughput, low latency, and weighted fairness across all inputs.

Figure 5-2 shows Nexus 5500 data plane architecture.

Figure 5-2 Nexus 5500 Data Plane

Nexus 5500 utilizes both cut-through and store-and-forward switching. Cut-through switching can be performed only when the ingress data rate is equivalent to or faster than the egress data rate. The switch fabric is designed to forward 10 G packets in cut-through mode; 1 G to 1 G switching is performed in store-and-forward mode. Table 5-2 shows the interface types and modes of switching.

Table 5-2 Nexus 5500 Switching Modes

Control Plane
The control plane maintains the information necessary for the data plane to operate. This information is collected and computed using complex protocols and algorithms. The control plane of a network node can speak to the control plane of its neighbor to share routing and switching information. This information is then processed to create tables that help the data plane to operate.

Routing protocols such as OSPF, EIGRP, or BGP are examples of control plane protocols. However, the control plane is not limited to only routing protocols. The control plane performs several other functions, including:

Cisco Discovery Protocol (CDP)
Bidirectional Forwarding Detection (BFD)
Unidirectional Link Detection (UDLD)
Link Aggregation Control Protocol (LACP)
Address Resolution Protocol (ARP)
Spanning Tree Protocol (STP)
FabricPath

Figure 5-3 shows the Layer 2 and Layer 3 control plane protocols of a Nexus switch.

Figure 5-3 Control Plane Protocols

Nexus 5500 Control Plane Architecture
The control plane architecture has similar components for almost all Nexus switches; however, the configuration of these components is different based on the platform and hardware version. To understand the control plane of a Nexus switch, let's look at the Cisco Nexus 5548P switch. The control plane of this switch runs Cisco NX-OS software on a dual-core 1.7-GHz Intel Xeon Processor C5500/C3500 Series with 8 GB of DRAM. The supervisor complex is connected to the data plane in-band through two internal ports running 1-Gbps Ethernet, and the system is managed in-band, or through the out-of-band 10/100/1000-Mbps management port. Table 5-3 summarizes the control plane specifications of a Nexus 5500 switch.

Table 5-3 Nexus 5500 Control Plane Specifications

Figure 5-4 shows the control plane of a Nexus 5500 switch.

Figure 5-4 Control Plane of Nexus 5500

Cisco NX-OS software provides isolation between the control and data forwarding planes within the device. This isolation means that a disruption within one plane does not disrupt the other.

Control Plane Policing
The Control Plane Policing (CoPP) feature prevents unnecessary traffic from overwhelming the control plane resources. The purpose of CoPP is to protect the control plane of the switch from anomalies within the data plane, such as a broadcast storm, or from a denial of service (DoS) attack. The supervisor module of the Nexus switch performs both the management plane and the control plane functions; therefore, it is a critical element for overall network stability and availability. Any disruption or attacks to the supervisor module can bring down the control and management plane of the switch, resulting in serious network outages. For example, a high volume of traffic from the data plane to the supervisor module could overload and slow down the performance of the entire switch. Another example is a denial of service (DoS) attack on the supervisor module: the attacker can generate IP traffic streams to the control plane or management plane at a very high rate. In this case, the supervisor module spends a large amount of time handling these malicious packets and preventing the control plane from processing genuine traffic. Some examples of DoS attacks are outlined next:

Internet Control Message Protocol (ICMP) echo requests
IP fragments
TCP SYN flooding

If the control plane of the switch is impacted by excessive traffic, you will observe the following symptoms on the network:

High CPU utilization
Route flaps due to loss of the routing protocol neighbor relationship, updates, or keepalives
Unstable Layer 2 topology
Reduced service quality, such as poor voice, video, or a slow response for applications
Slow or unresponsive interactive sessions with the CLI
Processor resource exhaustion, such as the memory and buffers
Indiscriminate drops of incoming packets

Different types of packets can reach the control plane:

Receive packets: Packets that have the destination address of a router. These packets include router updates and keepalive messages.

Exception packets: Packets that need special handling by the supervisor module. This includes an ICMP unreachable packet and a packet with IP options set.

Redirected packets: Packets that are redirected to the supervisor module. Features such as Dynamic Host Configuration Protocol (DHCP) snooping or dynamic Address Resolution Protocol (ARP) inspection redirect some packets to the supervisor module.

Glean packets: If a Layer 2 MAC address for a destination IP address is not present in the FIB, the supervisor module receives the packet and sends an ARP request to the host.

The CoPP feature enables a policy map to be applied to the supervisor-bound traffic. This policy map is similar to a normal QoS policy and is applied to all traffic entering the switch from a nonmanagement port. CoPP classifies the packets to different classes and provides a mechanism to individually control the rate at which the supervisor module receives these packets. Packet classification can be done using the following parameters:

Source and destination IP address
Source and destination MAC address
VLAN
Source and destination port
Exception cause

After the packets are classified, the rate at which packets arrive at the supervisor module can be controlled by two mechanisms. One is called policing and the other is called rate limiting. Policing is the monitoring of data rates and burst sizes for a particular class of traffic. Single-rate policers monitor the specified committed information rate (CIR) of traffic; dual-rate policers monitor both the CIR and the peak information rate (PIR) of traffic. You can configure the following parameters for policing:

Committed information rate (CIR): This is the desired bandwidth.
Peak information rate (PIR): The rate above which data traffic is negatively affected.
Committed burst (BC): The size of a traffic burst that can exceed the CIR within a given unit of time.
Extended burst (BE): The size that a traffic burst can reach before all traffic exceeds the PIR.

A policer determines three colors, or conditions, for traffic. These conditions or colors are as follows: conform (green), exceed (yellow), and violate (red). You can define only one action for each condition. The actions can transmit the packet, mark down the packet, or drop the packet.

The setup utility allows building an initial configuration file using the system configuration dialog. After the initial setup, the Cisco NX-OS software installs the default CoPP system policy to protect the supervisor module from DoS attacks. Choosing one of the following CoPP policy options from the initial setup can set the level of protection for the control plane of a switch:

Strict: This policy is 1 rate and 2 color and has a BC value of 250 ms (except for the important class, which has a value of 1000 ms).

Moderate: This policy is 1 rate and 2 color and has a BC value of 310 ms (except for the important class, which has a value of 1250 ms). These values are 25 percent greater than the strict policy.

Lenient: This policy is 1 rate and 2 color and has a BC value of 375 ms (except for the important class, which has a value of 1500 ms). These values are 50 percent greater than the strict policy.

Dense: This policy is 1 rate and 2 color. The classes critical, normal, redirect, exception, undesirable, l2-default, and default have a BC value of 250 ms. The classes important, management, normal-dhcp, normal-dhcp-relay-response, and monitoring have a BC value of 1000 ms. The class l2-unpoliced has a BC value of 5 MB.

Skip: No control plane policy is applied. (In Cisco NX-OS releases prior to 5.2, this option is named none.)

If you do not select a policy, NX-OS software applies the strict policy. The Cisco Nexus device hardware performs CoPP on a per-forwarding-engine basis. CoPP does not support distributed policing; therefore, you should choose rates so that the aggregate traffic does not overwhelm the supervisor module.

Control Plane Analyzer
Ethanalyzer is a Cisco NX-OS built-in protocol analyzer, based on the command-line version of Wireshark. You can use Ethanalyzer to troubleshoot your network by capturing and decoding the control-plane traffic. The packets captured by Ethanalyzer are only control plane traffic destined to the supervisor CPU; it does not capture hardware-switched traffic between data ports of the switch. Ethanalyzer enables you to perform the following functions:

Capture packets sent to and received by the switch supervisor CPU
Set the number of packets to be captured
Set the length of the packets to be captured
Display packets with either very detailed protocol information or a one-line summary
Open and save the packet data captured
Filter packet captures based on many criteria
Filter packet displays based on many criteria
Decode the internal header of the control packet

Following are the key benefits of Ethanalyzer:

It is part of the NX-OS integrated management tool set. This tool is useful for troubleshooting problems related to the switch itself.
Network administrators can learn about the amount and nature of the control-plane traffic within the switch.
It improves troubleshooting and reduces time to resolution. It preserves and improves the operational continuity of the network infrastructure.

Management Plane
The management plane of the network node deals with the configuration and monitoring of the control plane. The management plane is used in the day-to-day administration of the network node. It provides an interface to the network administrators (see Figure 5-5) and NMS tools to perform actions such as the following:

Configure the network node using the command-line interface (CLI).
Configure the network node using network management protocols such as Simple Network Management Protocol (SNMP) and Network Configuration Protocol (NETCONF).
Monitor network statistics by collecting health and utilization data from the network node using network management protocols such as SNMP.
Manage the network node using an element manager such as Data Center Network Manager (DCNM). DCNM leverages the XML APIs of the Nexus platform.
Program the network node using One Platform Kit (onePK), Application Program Interfaces (API), Representational State Transfer (REST), JavaScript Object Notation (JSON), and the guest shell with support for Python.
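To tie the CoPP and Ethanalyzer discussions above to the CLI, the following commands illustrate how the protection level can be changed after setup and how control-plane traffic can be inspected. The capture shown is just one possibility; option availability varies by platform and NX-OS release:

switch(config)# copp profile strict
switch# show copp status
switch# show policy-map interface control-plane
switch# ethanalyzer local interface inband limit-captured-frames 50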

. It preserves and improves the operational continuity of the network infrastructure. This tool is useful for troubleshooting problems related to the switch itself. It does not capture hardware-switched traffic between data ports of the switch. The packets captured by Ethanalyzer are only control plane traffic destined to the supervisor CPU. Monitor network statistics by collecting health and utilization data from the network node using network management protocols such as simple network management protocol (SNMP). Configure the network node using network management protocols such as simple network management protocol (SNMP) and network configuration protocol (NETCONF). Control Plane Analyzer Ethanalyzer is a Cisco NX-OS built-in protocol analyzer. The management plane is used in day-to-day administration of the network node. You can use Ethanalyzer to troubleshoot your network by capturing and decoding the control-plane traffic. DCNM leverages XML APIs of the Nexus platform. Manage the network node using an element manager such as Data Center Network Manager (DCNM). Representational State Transfer (REST). JavaScript Object Notation (JSON). and guest shell with support for Python. Program the network node using One Platform Kit (onePK). Network administrators can learn about the amount and nature of the control-plane traffic within the switch. Ethanalyzer enables you to perform the following functions: Capture packets sent to and received by the switch supervisor CPU Set the number of packets to be captured Set the length of the packets to be captured Display packets with either very detailed protocol information or a one-line summary Open and save the packet data captured Filter packet captures based on many criteria Filter packet displays based on many criteria Decode the internal header of the control packet Management Plane The management plane of the network node deals with the configuration and monitoring of the control plane.supervisor module. Application Program Interface (API). It improves troubleshooting and reduces time to resolution. based on the command-line version of Wireshark. Following are the key benefits of Ethanalyzer: It is part of the NX-OS integrated management tool set. It provides an interface to the network administrators (see Figure 5-5) and NMS tools to perform actions such as the following: Configure the network node using command-line interface (CLI).

Some of these tools are discussed in the following section. Figure 5-6 shows an overview of the management and monitoring tools.Figure 5-5 Management Plane of Nexus Nexus Management and Monitoring Features The Nexus platform provides a wide range of management and monitoring features. The purpose of creating an out-of-band network is to increase . out-of-band (OOB) management means that management traffic is traversing though a dedicated path in the network. Figure 5-6 Nexus Management and Monitoring Tools Out-of-Band Management In a data center network.

The remote administrator connects to the terminal server router and performs a reverse telnet to connect to the console port of the device.the security and availability of management capabilities within the data center. It is recommended to connect all console ports of the devices to a terminal or comm server router such as Cisco 2900 for remotely connecting to the console port of devices. This network provides connectivity to management tools. it is important to secure it so only administrators can access it. Figure 5-7 Out-of-Band Management Cisco Nexus switches can be connected to the OOB management network using the following methods: Console port Connectivity Management Processor (CMP) Management port Console Port The console port is an asynchronous serial port used for initial switch configuration. It provides an RS-232 connection with an RJ-45 interface. the management port of network devices. . and ILO ports of servers. dedicated switches. Because this network is built for management of data center devices. In Figure 5-7. In-band management is discussed later in this section. and network administrator 2 is using in-band network management. network operators can reach the devices using this OOB management network. Figure 5-8 shows remote access to the console port via OOB network. network administrator 1 is using OOB network management. In a data center. and troubleshooting without any dependency on the network build for user traffic. monitoring. and firewalls are deployed to create an OOB network. OOB management enables a network administrator to access the devices in the data center for routine changes. The advantage of this approach is that during a network outage. routers. This network is dedicated for management traffic only and is completely isolated from the user traffic.

its supervisor module. The CMP provides OOB management and monitoring capability independent from the primary operating system. Figure 5-9 shows how to connect to CMP from the active supervisor module.Figure 5-8 Console Port Connectivity Management Processor For management high availability. The CMP enables lightsout remote monitoring and management of the Cisco Nexus 7000 Series system. Key features of the CMP include the following: CMP provides a dedicated operating environment independent of the switch processor. You can access the CMP from the active supervisor module. Note that CMP is available only in the Nexus 7000 Supervisor engine 1. Before you begin. . It provides complete visibility of the system during the entire boot process. ensure that you are in the default virtual device context (VDC). the Cisco Nexus 7000 series switches have a connectivity management processor that is known as the connectivity management processor (CMP). It provides the capability to take complete console control of the supervisor. It provides monitoring of supervisor status and initiation of resets. It provides access to critical log information on the supervisor. It provides the capability to initiate a complete system restart. and all other modules without the need for separate terminal servers.

The administrator uses Secure Shell (SSH) and Telnet to the vty lines to create a virtual terminal session. In this method. If the network is down. or Telnet are examples). . you must globally disable the specific protocol. Cisco NX-OS devices have a limited number of vty lines. This port is part of management VRF on the switch and supports speeds of 10/100/1000 Ethernet. up to 16 concurrent vty sessions are allowed. switches are configured with an in-band IP address on a routed interface. A vty line is used for all remote network connections supported by the device. to prevent a Telnet session to the vty line. in-band management will also not work. OOB connectivity via mgmt0 port is shown earlier in Figure 5-7. In-band is done using vty of a Cisco NX-OS device. SCP. In-band Management In-band management utilizes the same network as user data to manage network devices. For example. SVI. The network administrator manages the switch using this in-band IP address. There is no need to build a separate OOB management network. and other management information. you must disable Telnet globally using the no feature telnet command. The OOB management interface (mgmt0) can be used to manage all VDCs. It enables you to manage the device using an IPv4 or IPv6 address. SNMP. regardless of protocol (SSH. To disable a specific protocol from accessing vty sessions. Each VDC has its own representation for mgmt0 with a unique IP address from a common management subnet for the physical switch that can be used to send syslog. vty lines automatically accept connections using any configured transport protocols. Management port is also known as mgmt0 interface. When all vty lines are in use. By default. To help ensure that a device can be accessed through a local or remote management session. You can configure an inactive session timeout and a maximum sessions limit for virtual terminals.Figure 5-9 Connectivity Management Processor Management Port (mgmt0) The management port of the Nexus switch is an Ethernet interface to provide OOB management connectivity. new management sessions cannot be established. In Cisco NX-OS. proper controls must be enforced on vty lines. the number of configured lines available can be determined by using the show run vshd command. or loopback interface that is reachable via the same path as user data.

Role-Based Access Control

To manage a Nexus switch, users log in to the switch and perform tasks as per their role in the organization. Role-Based Access Control (RBAC) provides a mechanism to define the amount of access that each user has when the user logs in to the switch. For example, a network operations center (NOC) engineer might want to check the health of the network, but he is not permitted to make changes to the switch configuration. The roles are defined to restrict user authorization to perform switch management operations, and these roles are assigned to users so they have limited access to switch operations as per their role. With RBAC, you first define the required user roles and then specify the rules for each switch management operation that the user role is allowed to perform. NX-OS enables you to create local users on the switch and assign a role to these users. When you create a user account on the switch, you associate the appropriate role with the user account, which then determines what the individual user is allowed to do on the switch.

User Roles

User roles contain rules that define the operations allowed on the switch. Each role has a list of rules that define the permission to perform a task on the switch. Each user can have multiple roles, and each role can contain multiple rules. If a user belongs to multiple roles, the user can execute a combination of all the commands permitted by these roles. When roles are combined, a permit rule takes priority over a deny rule. For example, a user has Role-A, which denies access to a configuration command; however, this user also has Role-B, which allows access to the configuration command. In this case, the user is allowed access to the configuration commands. Therefore, if Role-A allows access to only configuration operations and Role-B allows access to only debug operations, users who belong to both Role-A and Role-B can access configuration and debug operations. NX-OS has some default user roles; with RBAC, you can also define new roles and assign them to users, and you can define roles that limit access to specific VSANs, VLANs, and interfaces of a Nexus switch. Table 5-4 shows the default roles on a Nexus 5000 switch.

Table 5-4 Default Roles on Nexus 5000

A Nexus 7000 switch supports features such as VDC, which is a virtual switch within the Nexus chassis; therefore, a Nexus 7000 supports additional roles to administer and operate the VDCs. Table 5-5 shows the default roles on a Nexus 7000 switch.
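As a brief sketch of defining a custom role and assigning it to a local user (the role name, username, and password are hypothetical):

switch(config)# role name noc-engineer
switch(config-role)# description Health checks only
switch(config-role)# rule 1 permit command show *     ! read-only visibility
switch(config-role)# exit
switch(config)# username jsmith password Str0ngPass1 role noc-engineer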

Table 5-5 Default Roles on Nexus 7000

Rules

A rule is the basic element of a role that defines the switch operations that are permitted or denied by a user role. You can apply rules for the following parameters:

Command: An NX-OS command or group of commands defined in a regular expression. The command is the most granular control parameter of a rule.
Feature: All the commands associated with a switch feature. The next level of control parameter is the feature, which represents all commands associated with that feature. You can check the available feature names using the show role feature command.
Feature group: A default or user-defined group of features. The last level of control parameter is the feature group, which combines related features for ease of management. Enter the show role feature-group command to display the default feature groups available for this parameter.

User Role Policies

The user role policies are used to limit user access to the switch resources; they allow limiting access to the interfaces, VLANs, and VSANs. For example, you can define a user role policy to limit access to a specific interface; the user does not have access to the interface unless you also configure a command rule to permit the interface command. Rules defined for a user role override the user role policies.
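To illustrate a feature rule combined with a user role policy, a hypothetical role could be limited to a range of VLANs (the feature name here is an assumption; use show role feature to confirm valid names on your system):

switch(config)# role name vlan-ops
switch(config-role)# rule 1 permit read-write feature vlan   ! assumed feature name
switch(config-role)# vlan policy deny
switch(config-role-vlan)# permit vlan 100-110                ! user may act only on these VLANs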

RBAC Characteristics and Guidelines

Some characteristics and guidelines for RBAC are outlined next:

Roles are associated to user accounts in the local database or to the users in a remote Authentication, Authorization, and Accounting (AAA) database.
A user account can belong to 64 roles.
256 rules can be assigned to a single role.
64 user-defined roles can be configured per VDC.
Rules are executed in descending order (for example, rule 2 is checked before rule 1).
If a user account belongs to multiple roles, that user can execute the most privileged union of all the rules in all assigned roles.
Users see only the CLI commands they are permitted to execute when using context-sensitive help.
Role changes do not take effect until the user logs out and logs in again.
Role features and feature groups should be used to simplify RBAC configuration when applicable.
Role features and feature groups can be referenced only in the local user account database; they cannot be referenced in an AAA server command authorization policy (for example, AAA/TACACS+).
RBAC roles can be distributed to multiple Nexus 7000 devices using Cisco Fabric Services (CFS).

Privilege Levels

Privilege levels are introduced in NX-OS to provide IOS-compatible authentication and authorization functionality. This feature is useful in a network environment that uses a common AAA policy for managing Cisco IOS and NX-OS devices. By default, the privilege level feature is disabled on the Nexus switch. To enable privilege level support on the Nexus switch, use the feature privilege configuration command. When this feature is enabled, the predefined privilege levels are created as roles; these predefined privilege levels cannot be deleted. Similar to the RBAC roles, the privilege levels can also be modified to meet different security requirements. A user account can be associated to a privilege level or to RBAC roles; on Nexus switches, the privilege level and RBAC roles can be configured simultaneously. Table 5-6 shows the IOS privilege levels and their corresponding NX-OS system-defined roles.

Table 5-6 NX-OS Privilege Levels
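A short sketch of turning the feature on; the username syntax for pinning a privilege level is shown as an assumption, so confirm it on your release:

switch(config)# feature privilege
switch(config)# username noc-user password Str0ngPass1 priv-lvl 5   ! assumed syntax
switch# show privilege                                              ! verify the current level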

Simple Network Management Protocol

Cisco NX-OS supports Simple Network Management Protocol (SNMP) to remotely manage the Nexus devices. SNMP works at the application layer and facilitates the exchange of management information between network management tools and the network devices, such as switches and routers. Network administrators use SNMP to remotely manage network performance, troubleshoot faults, and plan for network growth. To enable SNMP, you must define the relationship between manager and agent. There are three components of the SNMP framework:

SNMP manager: A network management system (NMS) that uses the SNMP protocol to manage the network devices.
SNMP agent: A software component that resides on the device that is managed.
Management Information Base (MIB): The collection of managed objects on the SNMP agent.

Figure 5-10 shows the components of SNMP.

Figure 5-10 Nexus SNMP

The Cisco Nexus devices support the agent and MIB. The NMS uses the Cisco MIB variables to perform set operations to configure a device or get operations to poll devices for specific information. The information collected from polling is analyzed to help troubleshoot network problems, create network performance reports, verify configuration, and monitor traffic loads.

SNMP Notifications

In case of a network fault or other important event, the SNMP agent can generate notifications without any polling request from the SNMP manager. You can configure Cisco NX-OS to send notifications to multiple host receivers. NX-OS can generate the following two types of SNMP notifications:

Traps: An unacknowledged message sent from the agent to the SNMP managers listed in the host receiver table. These messages are less reliable because there is no way for the agent to discover whether the SNMP manager has received the message.
Informs: A message sent from the SNMP agent to the SNMP manager, for which the manager must acknowledge the receipt of the message. These messages are more reliable because of the acknowledgement of the message from the SNMP manager. If the Cisco Nexus device never receives a response for a message, it can send the inform request again.

SNMPv3

Security was always a concern for SNMP. The community string was sent in clear text and used for authentication between the SNMP manager and agent. SNMPv3 solves this problem by providing security features, such as authentication and encryption, for communication between the SNMP manager and agent. The security features provided in SNMPv3 are as follows:

Message integrity: Ensures that a packet has not been tampered with in transit.
Authentication: Determines that the message is from a valid source.
Encryption: Scrambles the packet contents to prevent it from being seen by unauthorized sources.
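A minimal SNMPv3 sketch with authentication and encryption plus trap delivery (the user, passwords, and receiver address are hypothetical):

switch(config)# snmp-server user nms-admin network-operator auth sha Auth_Pass1 priv aes-128 Priv_Pass1
switch(config)# snmp-server host 192.0.2.50 traps version 3 priv nms-admin
switch(config)# snmp-server enable traps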

A security model is an authentication strategy that is set up for a user and the role in which the user resides. A security level is the permitted level of security within a security model. A combination of a security model and a security level determines which security mechanism is employed when handling an SNMP packet. The Cisco Nexus device supports SNMPv1, SNMPv2c, and SNMPv3, and each SNMP version has different security models and levels. Table 5-7 shows the SNMP security models and levels supported by NX-OS.

Table 5-7 NX-OS SNMP Security Models and Levels

Remote Monitoring

Remote Monitoring (RMON) is an industry-standard remote network monitoring specification, developed by the Internet Engineering Task Force (IETF). RMON allows various network console systems and agents to exchange network-monitoring data. The Cisco NX-OS supports RMON alarms, events, and logs to monitor a Cisco Nexus device. You can enable RMON and configure your alarms and events by using the CLI or an SNMP-based network management station. RMON is disabled by default, and no events or alarms are configured.

RMON Alarms

An RMON alarm monitors a specific management information base (MIB) object for a specified interval, triggers an alarm at a specified threshold value, and resets the alarm at another threshold value. You can use alarms with RMON events to generate a log entry or an SNMP notification when the RMON alarm triggers. You can set an alarm on any MIB object that resolves into an SNMP INTEGER type. The specified object must be an existing SNMP MIB object in standard dot notation (for example, 1.3.6.1.2.1.2.2.1.17 represents ifOutOctets). When you create an alarm, you specify the following parameters:

MIB: The object to monitor.
Sampling interval: The interval to collect a sample value of the MIB object.

Sample type: Absolute sample is the value of the MIB object; delta sample is the difference between two consecutive sample values.
Rising threshold: The device triggers a rising alarm or resets a falling alarm on this threshold.
Falling threshold: The device triggers a falling alarm or resets a rising alarm on this threshold.
Events: The action taken when an alarm (rising or falling) is triggered.

RMON Events

RMON events are generated by RMON alarms. You can associate an event to an alarm, and different events can be generated for a falling alarm and a rising alarm. RMON supports the following types of events:

SNMP notification: Sends an SNMP notification when a rising alarm or a falling alarm is triggered
Log: Adds an entry in the RMON log table when the associated alarm is triggered
Both: Sends an SNMP notification and adds an entry in the RMON log table when the associated alarm is triggered

Syslog

Syslog is a standard protocol defined in IETF RFC 5424 for logging system messages to a remote server. Nexus switches support logging of system messages. This log can be sent to the terminal sessions, a log file, and syslog servers on remote systems. By default, Cisco Nexus switches log system messages only to a log file and output them to the terminal sessions. You can configure the Nexus switch to send syslog messages to a maximum of eight syslog servers; when the switch first initializes, messages are sent to syslog servers only after the network is initialized. To support the same configuration of syslog servers on all switches in a fabric, you can use Cisco Fabric Services (CFS) to distribute the syslog server configuration. When you configure the severity level, the system outputs messages at that level and lower. Table 5-8 describes the severity levels used in system messages.

Table 5-8 System Message Severity Levels
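Pulling the RMON and syslog pieces together, a hypothetical configuration might alarm on the ifOutOctets example above and forward messages of severity 6 (informational) and lower to a remote server; the thresholds, event number, and addresses are illustrative only:

switch(config)# rmon event 1 log trap public description RISING-ALARM
switch(config)# rmon alarm 10 1.3.6.1.2.1.2.2.1.17 300 delta rising-threshold 150000 1 falling-threshold 10000 1 owner admin
switch(config)# logging server 192.0.2.60 6 use-vrf management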

Embedded Event Manager

Cisco NX-OS Embedded Event Manager (EEM) provides real-time network event detection, programmability, and automation within the device. The EEM monitors events that occur on the device and takes action to recover or troubleshoot these events. In many cases, these events are related to faults in the device, such as when an interface or a fan malfunctions. EEM defines event filters so only critical events or multiple occurrences of an event within a specified time period trigger an associated action. Figure 5-11 shows a high-level overview of NX-OS Embedded Event Manager. There are three major components of EEM:

Event statement
Action statement
Policies

Event Statements

An event is any device activity for which some action, such as a workaround or a notification, should be taken. Event statements specify the event that triggers a policy to run; the event statement defines the event to look for as well as the filtering characteristics for the event. In Cisco NX-OS releases prior to 5.2, you can configure only one event statement per policy; however, beginning in Cisco NX-OS Release 5.2, you can configure multiple event triggers.

Action Statements

Action statements describe the action triggered by a policy; the action statement defines the action EEM takes when the event occurs. Each policy can have multiple action statements. These actions could be sending an e-mail or disabling an interface to recover from an event. NX-OS EEM supports the following actions in action statements:

Execute any CLI commands.
Update a counter.
Log an exception.
Force the shutdown of any module.
Reload the device.
Shut down specified modules because the power is over budget.
Generate a syslog message.
Generate a Call Home event.
Generate an SNMP notification.
Use the default action for the system policy.

Policies

An NX-OS EEM policy consists of an event statement and one or more action statements. If no action is associated with a policy, EEM still observes events but takes no actions.
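As a sketch of an EEM applet pairing one event statement with action statements (the applet name and match string are hypothetical):

switch(config)# event manager applet TRACK-SHUTDOWN
switch(config-applet)# event cli match "conf t ; interface * ; shutdown"
switch(config-applet)# action 1.0 syslog priority warnings msg "Interface shutdown entered at CLI"
switch(config-applet)# action 2.0 event-default   ! still execute the matched command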

. GOLD provides the following tests as part of the feature set: Boot diagnostics Continuous health monitoring On-demand tests Scheduled tests Figure 5-12 shows an overview of Nexus GOLD. It helps in detecting hardware faults and to make sure that internal data paths are operating as designed. If a fault is detected.Figure 5-11 Nexus Embedded Event Manager Generic Online Diagnostics (GOLD) Cisco generic online diagnostics (GOLD) is a suite of diagnostic functions that provide hardware testing and verification. the switch takes corrective action to mitigate the fault and to reduce potential network outages.

Figure 5-12 Nexus GOLD

During the boot diagnostics, GOLD tests are executed when the chassis is powered up or during an online insertion and removal (OIR) event. Boot diagnostics include disruptive tests and nondisruptive tests that run during system boot and system reset. To enable boot diagnostics, use the diagnostic bootup level complete command. To display the diagnostic level that is currently in place, use show diagnostic boot level.

Runtime diagnostics (also known as health monitoring diagnostics) include nondisruptive tests that run in the background during the normal operation of the switch. Continuous health checks occur in the background, as per schedule, and on demand from the CLI. If a failure condition is observed during a test, corrective actions are taken through Embedded Event Manager (EEM) policies. You can also start or stop an on-demand diagnostic test. For example, to start an on-demand test on module 6, you can use the following command: diagnostic start module 6 test all. It is recommended to start a disruptive diagnostic test manually during a scheduled network maintenance window.

Smart Call Home

The Call Home feature provides e-mail-based notification of critical system events. You can use this feature to notify any external entity when an important event occurs on your device. This feature can be used to page a network support engineer, e-mail a network operations center, or use Cisco Smart Call Home services to automatically generate a case with the technical assistance center (TAC). Call Home includes a fixed set of predefined alerts on your switch. These alerts are grouped into alert groups, and CLI commands are assigned to execute when an alert in an alert group occurs. The switch includes the command output in the transmitted Call Home message. Call Home can deliver alerts to multiple recipients that you configure in destination profiles, and you can configure up to 50 e-mail destination addresses for each destination profile. Following are advantages of using the Call Home feature:

Automatic execution and attachment of relevant CLI command output.
Multiple concurrent message destinations.
Multiple message format options, such as the following:
Short Text: Suitable for pagers or printed reports.
Full Text: Fully formatted message information suitable for human reading.
XML: Machine-readable format that uses the Extensible Markup Language (XML) and the Adaptive Messaging Language (AML) XML schema definition (XSD). The XML format enables communication with the Cisco Systems Technical Assistance Center (Cisco-TAC).
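A skeletal Call Home e-mail setup is sketched next; the contact, SMTP server, and destination address are placeholders, and the destination-profile keywords can differ by platform and release, so verify them before use:

switch(config)# snmp-server contact admin@example.com
switch(config)# callhome
switch(config-callhome)# email-contact admin@example.com
switch(config-callhome)# destination-profile full-txt-destination email-addr noc@example.com
switch(config-callhome)# transport email smtp-server 192.0.2.25 use-vrf management
switch(config-callhome)# enable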

Nexus Device Configuration and Verification

This section reviews the process of initial setup of Nexus switches, basic switch configuration, and some important CLI commands to configure and verify network connectivity.

Perform Initial Setup

When Nexus switches are shipped from the factory, there is no default configuration on these switches. Therefore, when you power on the switch for the first time and there is no configuration file found in the NVRAM of the switch, it goes into an initial setup mode; this initial setup mode is also called startup configuration mode. This initial setup mode indicates that the switch is unable to find the configuration file at startup. To start a new switch configuration, you first perform the startup configuration using the console port of the switch. The startup configuration enables you to build an initial configuration file with enough information to provide basic system management connectivity. After the initial configuration file is created, you can use the CLI to perform additional configuration of the switch. The initial setup script for the startup configuration is executed automatically the first time the switch is started; however, you can run the initial setup script anytime to perform a basic device configuration by using the setup command from the CLI. This setup script can have slightly different steps based on the Nexus platform. Figure 5-13 shows the initial configuration process of a Nexus switch.

Figure 5-13 Initial Configuration Procedure

The startup configuration mode of the switch is an interactive system configuration dialog that guides you through the basic configuration of the system. During the startup configuration, the switch creates an admin user as the only initial user; the only step you cannot skip is to provide a password for the administrator. To use the default answer for a question, just press Enter. You can press Ctrl+C at any prompt of the script to skip the remaining configuration steps and use what you have configured up to that point. If a default answer is not available, the setup utility uses what was previously configured on the switch and goes to the next question. For example, if you press Enter to skip the IP address configuration of the mgmt0 interface and you have already configured the mgmt0 interface earlier, the setup utility does not change that configuration. However, if a default value is displayed on the prompt and you press Enter for that step, the setup utility changes the configuration to use that default, and not the previously configured value.

The default option is to use strong passwords. When you configure a strong password, you must use a minimum of eight characters with a combination of uppercase letters, lowercase letters, and numerical characters. You can change the password strength check during the initial setup script by answering no to the enforce secure password standard question, or later using the CLI command no password strength-check. After the admin user password is configured, the startup script asks you to proceed with the basic system configuration. During the basic system configuration, the default answers are mentioned in square brackets [xxx]; press Enter at any time to skip a dialog. After the initial parameters are provided to the startup script, the switch shows you the initial configuration based on your input so you can verify it. If the configuration is correct, you do not need to edit it; you can accept this configuration, and it is saved in the switch NVRAM. Figure 5-14 shows the initial configuration dialog of a Nexus switch.

Click here to view code image

Do you want to enforce secure password standard (yes/no): yes
Enter the password for "admin": C1sco123
Confirm the password for "admin": C1sco123

---- Basic System Configuration Dialog VDC: 2 ----

This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.

*Note: setup is mainly used for configuring the system initially, when no configuration is present. So setup always assumes system defaults and not the current system configuration values.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes
Do you want to enforce secure password standard (yes/no) [y]: y
Create another login account (yes/no) [n]: n
Configure read-only SNMP community string (yes/no) [n]: n
Configure read-write SNMP community string (yes/no) [n]: n
Enter the switch name : N7K
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Mgmt0 IPv4 address : 172.16.1.20
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway : 172.16.1.1
Configure advanced IP options? (yes/no) [n]: n
Enable the telnet service? (yes/no) [n]: n
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Number of rsa key bits <1024-2048> [1024]: 1024
Configure default interface layer (L3/L2) [L3]: L3
Configure default switchport interface state (shut/noshut) [shut]: shut

The following configuration will be applied:
  password strength-check
  switchname N7K
  interface mgmt0
    ip address 172.16.1.20 255.255.255.0
    no shutdown
  vrf context management
    ip route 0.0.0.0/0 172.16.1.1
  exit

  no feature telnet
  ssh key rsa 1024 force
  feature ssh
  no system default switchport
  system default switchport shutdown

Would you like to edit the configuration? (yes/no) [n]: <enter>
Use this configuration and save it? (yes/no) [y]: y

Figure 5-14 Initial Configuration Dialogue

After the initial configuration is completed, you can disconnect from the console port and access the switch using the IP address configured on the mgmt0 port. It is important to note that the initial configuration script supports only the configuration of an IPv4 address on the mgmt0 interface; you can configure IPv6 later using the CLI.

You can erase the configuration of a Nexus switch to return to the factory default. To delete the switch startup configuration from NVRAM, use the write erase command. This command will not delete the running configuration of the switch, and the switch will continue to operate normally. The write erase command erases the entire startup configuration, except for the following:

Boot variable definitions
The IPv4 configuration on the mgmt0 interface, including the IP address and subnet mask
The route address in the management VRF

To remove the boot variable definitions and the IPv4 configuration on the mgmt0 interface, use the write erase boot command. When you restart the switch after erasing its configuration from NVRAM, the switch will go into the startup configuration mode.

In-Service Software Upgrade

Each Nexus switch is shipped with the Cisco NX-OS software; the Cisco NX-OS software consists of two images: the kickstart image and the system image. After installing the Nexus switch and putting it into production, a time might come when you have to upgrade the NX-OS software to address software defects or to add new features to the switch. The software upgrade of a switch in the data center should not have any impact on the availability of services. In-Service Software Upgrades (ISSU) enable a network operator to upgrade NX-OS software without disrupting the operation of the switch. ISSU upgrades the software images without disrupting data-plane traffic; there is minimal disruption to control-plane traffic only. In the case where an ISSU would cause disruption to the data-plane traffic, the Cisco NX-OS software warns you before proceeding with the software upgrade. This enables you to reschedule the software upgrade to a time less critical for your services to minimize the impact of the outage. On Nexus switches, the network administrator initiates the ISSU manually using one of the following methods:

From the CLI using a single command
From the administrative interface of Cisco Data Center Network Manager (DCNM)

When an ISSU is initiated, it updates the following components on the system as needed:

Kickstart image
Supervisor BIOS
System image
Fabric extender (FEX) image
I/O module BIOS and image

A well-designed data center considers service continuity by implementing switch-level redundancy at each layer within the data center and by dual-homing servers to redundant switches at the access layer. Cisco NX-OS software supports ISSU from Release 4.2(1) and later. The ISSU process varies based on the Nexus platform; to explain ISSU, examples of the Nexus 5000 and Nexus 7000 switches are described in the following sections.

ISSU on the Cisco Nexus 5000

The Nexus 5000 switch is a single-supervisor system, and the ISSU on this switch causes the supervisor CPU to reset and load the new software image. During this reload, the control plane of the switch is inactive, but the data plane of the switch continues its operation and keeps forwarding packets. This separation of the control and data planes leads to a software upgrade with no service disruption; existing flows for servers connected to the Cisco Nexus device should see no traffic disruption. When the system recovers from the reset, it is running an updated version of NX-OS. At this time, the runtime state of the control plane and data plane are synchronized, control plane operation is resumed, and the supervisor state is restored to the previously known configuration. It takes fewer than 80 seconds to restore control plane operation. When a network administrator initiates ISSU, a multistage ISSU cycle is initiated by the installer service. This multistage ISSU cycle is designed to minimize overall system interruption with no impact on data traffic forwarding. In a topology where Cisco Nexus 2000 Series fabric extenders are used, the ISSU performed on the parent switch automatically downloads and upgrades each of the attached fabric extenders; each fabric extender is upgraded to the Cisco NX-OS software release of the parent switch after the parent switch ISSU completes. Figure 5-15 shows the ISSU process on the Nexus 5000 switch.

such as STP. FSPF. The ISSU process will abort if the system is configured with LACP fast timers.3. download the required software image and copy it to your upgrade location. Use the dir command to verify that required NX-OS files are present on the system. these images are copied to the switch bootflash. .bin system n5000uk9.5.0. and module insertion are not allowed during ISSU operation. use the show install all impact command to determine which components of the system will be affected by the upgrade. After verifying the image on the bootflash. All the configuration change requests from CLI and SNMP are denied. The Nexus 5000 switch undergoing ISSU must not be a root switch in the STP topology. No interface should be in a spanning tree designated forwarding state.2.Figure 5-15 ISSU Process on Nexus 5000 To perform ISSU. Link Aggregation Control Protocol (LACP) fast timers are not supported.N2. Make sure both the kickstart file and the system file have the same version. The syntax of this command is show install all impact kickstart n5000-uk9-kickstart. Network topology changes. FC Fabric changes that affect zoning.N2.0. Typically. ISSU cannot be performed if any application has Cisco Fabric Services (CFS) lock enabled.2.3.5.bin Some of the important considerations for Nexus 5000 ISSU are the following: The Nexus 5000 switch does not allow configuration changes during ISSU. domain manager.

If a zone merge or zone change request is in progress, ISSU will be aborted.

ISSU is supported on Nexus 5000 in a vPC configuration. The vPC switches continue their control plane communication during the entire ISSU process, except when the ISSU process resets the CPU of the switch being upgraded. When one peer switch is undergoing an ISSU, the other peer switch locks the configuration until the ISSU is completed; the peer switch is responsible for holding on to its state until the ISSU process is completed successfully. While upgrading the switches in a vPC topology, you should start with the primary vPC switch; the secondary vPC switch should be upgraded after the ISSU process completes successfully on the primary device. In FEX virtual port channel (vPC) configurations, the primary switch is responsible for upgrading the FEX. Bridge assurance must be disabled for nondisruptive ISSU; however, it is okay to have bridge assurance enabled on the vPC peer-link. The Cisco Nexus 5500 platform switches support Layer 3 functionality, but the nondisruptive software upgrade is not supported when the Layer 3 license is installed.

Note: Always refer to the latest release notes for ISSU guidelines.

ISSU on the Cisco Nexus 7000

The Nexus 7000 switch is a modular system with supervisor and I/O modules installed in a chassis. It is a fully distributed system with nondisruptive Stateful Switch Over (SSO) and ISSU capabilities. Please note that the SSO feature is available only when dual supervisor modules are installed in the system. The Nexus 7000 ISSU leverages the SSO capabilities of the switch to perform a nondisruptive software upgrade; therefore, on the Nexus 7000, both the data plane and the control plane are available during the entire software upgrade process. ISSU is achieved using a rolling update process. In this process, first the standby supervisor is upgraded to the new software, and a failover is done so the standby supervisor with the new image becomes active. After this step, the control plane of the switch is running on new software. The software is then updated on the supervisor that is now in standby mode (previously active). When both supervisors are upgraded successfully, the linecards or I/O modules are upgraded. Figure 5-16 shows the ISSU process for the Nexus 7000 platform.

Figure 5-16 ISSU Process on Nexus 7000

It is important to note that the ISSU process results in a change in the state of the active and standby supervisors. Figure 5-17 shows the state of the supervisor modules before and after the software upgrade.

Figure 5-17 Nexus 7000 Supervisors Before and After ISSU

Before you start ISSU on a Nexus 7000 switch, make sure that the upgrade process will be nondisruptive. It is recommended to use the show install all impact command to check whether the images are compatible for an upgrade. To determine ISSU compatibility, use the show incompatibility system command. An ISSU might be disruptive if you have configured features that are not supported on the new software image. The software upgrade might also be disruptive under the following conditions:

A single-supervisor system with a kickstart or system image change
A dual-supervisor system with incompatible system software images

You can start the ISSU using the install all command. Following are some of the important considerations for a Nexus 7000 ISSU:

When you upgrade Cisco NX-OS software on a Nexus 7000, it is upgraded for the entire system, including all VDCs on the physical device. It is not possible to upgrade the software individually for each VDC.
Configuration mode is blocked during the Cisco ISSU to prevent any changes.
vPC peers can operate dissimilar versions of the Cisco NX-OS software only during the upgrade or downgrade process; operating vPC peers with dissimilar versions after the upgrade or downgrade process is complete is not supported.
Prior to NX-OS Release 5.2(1), ISSU performs the linecard software upgrade serially; that is, one I/O module is upgraded at a time. This method was very time consuming; therefore, support for simultaneous line card upgrade was added in that release to reduce the ISSU time. To start a parallel upgrade of line cards, use the following ISSU command: install all kickstart <image> system <image> parallel.
Starting from NX-OS Release 6.1(1), a parallel upgrade of the FEX is supported by ISSU. To initiate parallel FEX software upgrades, use the parallel keyword in the command. For releases

prior to Cisco NX-OS Release 6.1(1), only a serial upgrade of the FEX is supported; even if a user types the parallel keyword in the command, the upgrade will be a serial upgrade.

Important CLI Commands

This section discusses important NX-OS CLI commands, including entering configuration mode, the use of the where command to find the current context, and command syntax help for the interface command. These commands will help you in configuration and verification of basic switch connectivity. In NX-OS, all CLI commands are organized hierarchically. The commands that perform similar functions are grouped together at the same level. For example, all the commands that display system, configuration, or hardware information are grouped under the show command, and all commands that are used to configure the switch are grouped under the configure terminal command. If you would like to use a command, you first must enter into the group hierarchy and then type the command. For example, to configure a system, you use the configure terminal command and then use the appropriate command to configure the system.

Command-Line Help

To obtain a list of available commands, type a question mark (?) at the system prompt. To list the keywords or arguments of a command, enter a question mark (?) in place of a keyword or argument; in syntax help, you see full parser help and descriptions. This form of help is called command syntax help. Example 5-1 shows how to enter into switch configuration mode and command syntax help for the interface command.

Example 5-1 Command-Line Help

Click here to view code image

N5K-A# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
N5K-A(config)# interface ?
  ethernet          Ethernet IEEE 802.3z
  fc                Fiber Channel interface
  loopback          Loopback interface
  mgmt              Management interface
  port-channel      Port Channel interface
  san-port-channel  SAN Port Channel interface
  vfc               Virtual FC interface
  vlan              Vlan interface
N5K-A(config)# interface <press-tab-key>
ethernet    fc    loopback    mgmt    port-channel    san-port-channel    vfc    vlan
N5K-A(config)# interface

Automatic Command Completion

In any command mode, you can type an incomplete command and then press the Tab key to complete the rest of the command. Pressing the Tab key provides command completion to a unique but partially typed command string; it can also display a brief list of the commands that are available at the current branch, without any detailed description. In the Cisco NX-OS software, command completion may also be used to complete a system-defined name and, in some cases, user-defined names. In Example 5-2, the Tab key is used to complete the vrf context command and then the name of a VRF on the command line.

Example 5-2 Automatic Command Completion

Click here to view code image

N5K-A(config)# vrf con<press-tab-key>
N5K-A(config)# vrf context
N5K-A(config)# vrf context man<press-tab-key>
N5K-A(config)# vrf context management
N5K-A(config-vrf)#

Entering and Exiting Configuration Context

To perform a configuration task, you first must switch to the appropriate configuration context in the group hierarchy. For example, to configure an interface, you enter into the interface context by typing interface ethernet 1/1; all interface-related commands for interface Ethernet 1/1 go under the interface context. When you type a command that does not belong to the current context, NX-OS goes out of the context to find the match; for example, you can run all show commands from the configuration context. You can use the where command to display the current context and login credentials. The exit command takes you back one level in the hierarchy. The Ctrl+Z key combination exits configuration mode but retains the user input on the command line. The commands discussed in this section and their output are shown in Example 5-3.

Example 5-3 Configuration Context

Click here to view code image

N5K-A(config)# interface ethernet 1/1
N5K-A(config-if)# where
  conf; interface Ethernet1/1 admin@N5K-A
N5K-A(config-if)# exit
N5K-A(config)# show <Press CTRL-Z>
N5K-A# show

Display Current Configuration

The show running-config command displays the current running configuration of the switch. If you run this command with the command-line parameter all, it displays all the configuration, including the default configuration of the switch. If you need to view a specific feature's portion of the configuration, you can do so by running the command with the feature specified, as shown in Example 5-4. For example, show running-config interface ethernet 1/1 shows the configuration of interface Ethernet 1/1.

Example 5-4 show running-config

Click here to view code image

N5K-A# sh running-config ipqos

!Command: show running-config ipqos

redirect. The pipe parameter can be placed at the end of all CLI commands to qualify. or filter the output of the command. the next 4 in the command means that 4 lines of output following the match of mstp are also to be displayed. where the qualifiers indicate to display the lines that match the regular expression mstp. The show spanning-tree command’s output is piped to egrep. Many advanced pipe options are available. The egrep parameter enables you to search for a word or expression and print a word count. and ignore-case in the command indicates that this match must not be case sensitive.!Time: Sun Sep 21 12:45:14 2014 version 5. When a match is found. In Example 5-5. It also has other options. These searching and filtering options follow a pipe character (|) at the end of the show command.0(3)N2(2) class-map type qos class-fcoe class-map type queuing class-fcoe match qos-group 1 class-map type queuing class-all-flood match qos-group 2 class-map type queuing class-ip-multicast match qos-group 2 class-map type network-qos class-fcoe match qos-group 1 class-map type network-qos class-all-flood match qos-group 2 class-map type network-qos class-ip-multicast match qos-group 2 system qos service-policy type qos input fcoe-default-in-policy service-policy type queuing input fcoe-default-in-policy service-policy type queuing output fcoe-default-out-policy service-policy type network-qos fcoe-default-nq-policy Command Piping and Parsing To locate information in the output of a show command. the show spanning-tree | egrep ignore-case next 4 mstp command is executed. including the following: Get regular expression (grep) and extended get regular expression (egrep) Less No-more Head and tail The pipe option can also be used multiple times or recursively. the Cisco NX-OS software provides the searching and filtering options. Example 5-5 Command Piping and Parsing Click here to view code image N5K-A# sh spanning-tree | egrep ? WORD Search for the expression count Print a total count of matching lines only ignore-case Ignore case difference when comparing strings invert-match Print only lines that contain no matches for <expr> line-exp Print only lines where the match is a whole line line-number Print each match preceded by its line number .

  next          Print <num> lines of context after every matching line
  prev          Print <num> lines of context before every matching line
  word-exp      Print only lines where the match is a complete word

N5K-A# sh spanning-tree | egrep ignore-case next 4 mstp
Spanning tree enabled protocol mstp
Root ID    Priority    32768
           Address     0005.73cb.6d01
           This bridge is the root
           Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

Feature Command

Cisco NX-OS supports many features, such as OSPF, PIM, vPC, and FCoE. You can enable or disable specific features by using the feature command. In the default configuration, not all features are enabled; NX-OS enables only those features that are required for initial network connectivity. If a feature is not enabled on the switch, the process for that feature is not running, and the configuration and verification commands for that feature are not available from the CLI. Because of the feature command on the Nexus platform, protocols and their related configuration contexts are not visible until the feature is enabled on the switch. After you enable the feature on the switch, the configuration and verification commands become available. For some features, the process will not start until you configure the feature; an example of such a feature is OSPF. This approach optimizes the operating system and increases availability of the switch by running only the required processes. It is recommended that you keep features disabled until they are required, or disable them if they are no longer required.

In Example 5-6, the show running-config | include feature command is used to determine the features that are currently enabled on the switch; in this configuration, feature fcoe, feature telnet, and feature lldp are enabled. You can also use the show feature command to verify the features that are currently enabled on the switch. In the same manner, other features can be enabled before configuring them on the switch. The Nexus switch supports physical interfaces, such as 1 Gbps and 10 Gbps Ethernet, and logical interfaces, such as SVI, vFC, and tunnels. When you use the interface command followed by pressing the Tab key for help, you can see only the Ethernet, vFC, mgmt, and Ethernet port channel interface types in the Cisco NX-OS software command output. After enabling the interface-vlan feature on the Nexus switch, the SVI (vlan) interface type appears as a valid interface option.

Example 5-6 feature Command

Click here to view code image

N5K-A(config)# sh running-config | include feature
feature fcoe
feature telnet
no feature http-server
feature lldp
N5K-A(config)# interface <Press-Tab-Key>
ethernet    fc    loopback    mgmt    port-channel    san-port-channel    vfc

In Nexus 5000 Series switch.000f N5K-A# Serial-Num ---------JAF1505BGTG JAF1504CCAR The show module command on the Nexus 7000 Series switches identifies which modules are inserted and operational on the switch.73cb.6ce7 3 0000. The number and type of expansion modules are dependent on the model of the system.6cc8 to 0005. each FEX is also considered an expansion module of the main system. and a 32-port M1 module in slot 10.0 World-Wide-Name(s) (WWN) -------------------------------------------------20:01:00:05:73:cb:6c:c0 to 20:08:00:05:73:cb:6c:c0 -- Mod MAC-Address(es) --. Example 5-8 Nexus 7000 Modules Click here to view code image N7K-1# show module Mod Ports Module-Type Model Status --.0000. a 32-port F1 module in slot 9. If expansion modules are installed. FEX slot numbers are assigned by the administrator during FEX configuration. Example 5-7 Nexus 5000 Modules Click here to view code image N5K-A# show module Mod Ports Module-Type Model Status --.-----------1 32 O2 32X10GE/Modular Universal Pla N5K-C5548UP-SUP active * 3 0 O2 Daughter Card with L3 ASIC N55-D160L3 ok Mod --1 3 Sw -------------5. they are also visible in this command.0(3)N2(2) Hw -----1. This number can be seen under the Module-Type column of the show module command.----. and their range of MAC addresses for the module are also available via this command. The number of ports associated with module 1 will vary based on the model of the switch. Modules that are inserted into the expansion module slots will be numbered according to the slot into which they are inserted.---------------------.----------------------------------. Example 5-7 shows the output of the show module command for the Nexus 5548 switch.fc N5K-A(config)# N5K-A(config)# ethernet fc N5K-A(config)# mgmt san-port-channel feature interface-vlan interface <Press-Tab-Key> loopback port-channel mgmt san-port-channel vfc vlan Verifying Modules The show module command provides a hardware inventory of the interfaces and module that are installed in the chassis.0(3)N2(2) 5. This switch has a Supervisor 1 module in slot 5.0000. Example 5-8 shows output of the show module command on the Nexus 7000 switch.----.0 1. their serial numbers. If fabric extenders (FEX) are attached.-------------------------------------1 0005.---------- . Software and hardware versions of each module.-------------------------------. the main system is referred to as module 1.-----------------.73cb.0000 to 0000.

5   0      Supervisor module-1X            N7K-SUP1       active *
9   32     1/10 Gbps Ethernet Module       N7K-F132XP-15  ok
10  32     10 Gbps Ethernet Module         N7K-M132XP-12  ok

Mod  Sw       Hw
---  -------  -----
5    5.2(1)   1.8
9    5.2(1)   1.1
10   5.2(1)   2.0
----- output removed -----

Verifying Interfaces

The Cisco Nexus switches running NX-OS software support the following types of interfaces:

Physical: Ethernet (10/100/1000/10G/40G)
Logical: Port channel, SVI (VLAN), loopback, null, tunnels, subinterfaces
Management: Management (mgmt0)

On the Nexus platform, all Ethernet interfaces are named Ethernet. Unlike other switching platforms from Cisco, where the name of the interface also indicates the speed, on Nexus there is no differentiation in the naming convention for the different speeds, such as 1 Gbps and 10 Gbps Ethernet.

The show interface brief command lists the most relevant interface parameters for all interfaces on a Cisco Nexus Series switch. Information about the switch port mode, speed, and rate mode (dedicated or shared) is also displayed. For interfaces that are down, it also lists the reason why the interface is down. Example 5-9 shows the output of the show interface brief command.

Example 5-9 show interface brief Command

Click here to view code image

N5K-A# show interface brief
--------------------------------------------------------------------------------
Port    VRF          Status  IP Address       Speed  MTU
--------------------------------------------------------------------------------
mgmt0   --           up      172.16.1.1       1000   1500
--------------------------------------------------------------------------------
Ethernet     VLAN  Type  Mode    Status  Reason            Speed   Port
Interface                                                          Ch #
--------------------------------------------------------------------------------
Eth1/1       1     eth   trunk   up      none              10G(D)  --
Eth1/2       1     eth   trunk   down    SFP not inserted  10G(D)  60
Eth1/3       1     eth   access  down    SFP not inserted  10G(D)  --
Eth1/4       --    eth   routed  down    SFP not inserted  10G(D)  --
Eth1/5       1     eth   fabric  up      none              10G(D)  105
Eth1/6       1     eth   access  down    SFP not inserted  10G(D)  --
Eth1/7       1     eth   access  down    SFP not inserted  10G(D)  --
Eth1/8       1     eth   access  down    SFP not inserted  10G(D)  --
----- output removed -----

10 Gb/s. media type is 10G Beacon is turned off Input flow-control is off. description. BW 10000000 Kbit. The default configuration parameters for any interface may be displayed using the show running-config interface all command. and traffic statistics of the interface. It also shows details about the port speed.The show interface command displays the operational state of any interface. MTU.6cc8) MTU 1500 bytes. 195760 bytes/sec.73cb. output flow-control is off Rate mode is dedicated Switchport monitor is off EtherType is 0x8100 Last link flapped 3week(s) 6day(s) Last clearing of "show interface" counters never 30 seconds input rate 1449184 bits/sec.6cc8 (bia 0005. MAC address. Example 5-10 Displaying Interface Operational State Click here to view code image N5K-A# show interface ethernet 1/1 Ethernet1/1 is up Hardware: 1000/10000 Ethernet. Example 5-11 Showing Default Configuration Click here to view code image N5K-A# show running-config interface ethernet 1/1 all !Command: show running-config interface Ethernet1/1 all !Time: Sun Sep 21 17:39:33 2014 version 5. Example 5-11 shows the output of the show running-config interface Ethernet 1/1 all command. DLY 10 usec. rxload 1/255 Encapsulation ARPA Port mode is trunk full-duplex. Example 5-10 shows the output of the show interface Ethernet 1/1 command.0(3)N2(2) interface Ethernet1/1 no description lacp port-priority 32768 lacp rate normal priority-flow-control mode auto lldp transmit lldp receive no switchport block unicast no switchport block multicast no hardware multicast hw-hash no cdp enable ----. 237 packets/sec 30 seconds output rate 1566080 bits/sec. including the reason why that interface might be down. address: 0005. 252 packets/sec ----. txload 1/255.output removed ----- .73cb. 181148 bytes/sec.output removed ----- Interfaces on the Cisco Nexus Series switches have many default settings that are not visible in the show running-config command. reliability 255/255.

Viewing Logs

By default, the device outputs messages to terminal sessions and to a log file. You can display or clear messages in the log file. Use the following commands:

The show logging last number-lines command displays the last number of lines in the logging file.
The show logging logfile [start-time yyyy mmm dd hh:mm:ss] [end-time yyyy mmm dd hh:mm:ss] command displays the messages in the log file that have a time stamp within the span entered.
The clear logging logfile command clears the contents of the log file.

Example 5-12 shows the output of the show logging logfile command.

Example 5-12 Viewing Log File

Click here to view code image

N5K-A# sh logging logfile
2014 Aug 24 22:40:45 N5K-A %CDP-6-CDPDUP: CDP Daemon Up.
2014 Aug 24 22:40:45 N5K-A ntpd[4744]: ntp:precision = 1.000 usec
2014 Aug 24 22:40:45 N5K-A %FEX-5-FEX_PORT_STATUS_NOTI: Uplink-ID 0 of Fex 105 that is connected with Ethernet1/5 changed its status from Created to Configured
2014 Aug 24 22:40:45 N5K-A %STP-6-STATE_CREATED: Internal state created stateless
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: pss init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: conditional action being done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: ARP lib init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: ppf lib init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: In global cfg db
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: In vlan cfg db
2014 Aug 24 22:40:46 N5K-A last message repeated 12 times
2014 Aug 24 22:40:47 N5K-A %SYSLOG-3-SYSTEM_MSG: Syslog could not be sent to server(10.14.113.4) : No such file or directory
----- output removed -----

Viewing NVRAM Logs

To view the NVRAM logs, use the show logging nvram [last number-lines] command. The clear logging nvram command clears the logged messages in NVRAM. Example 5-13 shows the output of the show logging nvram command.

Example 5-13 Viewing NVRAM Logs

Click here to view code image

N5K-A# sh logging nvram
2012 Jul 8 08:30:22 N5K-A %$ VDC-1 %$ %PFMA-2-FEX_STATUS: Fex 105 is online
2012 Jul 8 08:30:22 N5K-A %$ VDC-1 %$ %NOHMS-2-NOHMS_ENV_FEX_ONLINE: FEX-105 On-line
2012 Jul 8 08:30:26 N5K-A %$ VDC-1 %$ %PFMA-2-FEX_STATUS: Fex 105 is online
2012 Jul 10 09:21:20 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105

Module 1: Runtime diag detected major event: Voltage failure on power supply: 1
2012 Jul 10 09:21:20 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105 System minor alarm on power supply 1: failed
2012 Jul 10 09:24:26 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105 Recovered: System minor alarm on power supply 1: failed
2013 Jan 13 01:17:35 N5K-A %$ VDC-1 %$ Jan 13 01:17:35 %KERN-2-SYSTEM_MSG: IO Error(set): IO space is not enabled, base_addr=0x0 - kernel
2013 Mar 13 12:21:59 N5K-A %$ VDC-1 %$ Mar 13 12:21:59 %KERN-2-SYSTEM_MSG: IO Error(set): IO space is not enabled, base_addr=0x0 - kernel
2013 Mar 23 06:52:08 N5K-A %$ VDC-1 %$ %NOHMS-2-NOHMS_ENV_ERR_TEMPMINALRM: System minor temperature alarm on module 1, Sensor Outlet crossed minor threshold 50C
----- output removed -----

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the Key Topics icon in the outer margin of the page. Table 5-9 lists a reference of these key topics and the page numbers on which each is found.

Table 5-9 Key Topics for Chapter 5

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

Data Center Network Manager (DCNM)
Application Specific Integrated Circuit (ASIC)
Frame Check Sequence (FCS)
Unified Port Controller (UPC)
Unified Crossbar Fabric (UCF)
Cisco Discovery Protocol (CDP)
Bidirectional Forwarding Detection (BFD)
Unidirectional Link Detection (UDLD)
Link Aggregation Control Protocol (LACP)
Address Resolution Protocol (ARP)
Spanning Tree Protocol (STP)
Dynamic Host Configuration Protocol (DHCP)
Connectivity Management Processor (CMP)
Virtual Device Context (VDC)
Role Based Access Control (RBAC)
Simple Network Management Protocol (SNMP)
Remote Network Monitoring (RMON)
Fabric Extender (FEX)
Management Information Base (MIB)
Generic Online Diagnostics (GOLD)
Extensible Markup Language (XML)
In Service Software Upgrade (ISSU)
Field-Programmable Gate Array (FPGA)
Complex Instruction Set Computing (CISC)
Reduced Instruction Set Computing (RISC)

Part I Review

Keep track of your part review progress with the checklist shown in Table PI-1. Details on each task follow the table.

Table PI-1 Part I Review Checklist

Repeat All DIKTA Questions

For this task, answer the "Do I Know This Already?" questions again for the chapters in this part of the book, using the PCPT software. Refer to "How to View Only DIKTA Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you DIKTA questions for this part only.

Answer Part Review Questions

For this task, answer the Part Review questions for this part of the book, using the PCPT software. Refer to "How to View Only Part Review Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you Part Review questions for this part only.

Review Key Topics

Browse back through the chapters and look for the Key Topic icons. If you do not remember some details, take the time to reread those topics, which will help you remember the concepts and technologies of these key topics as you progress with the book.

Self-Assessment Questionnaire

Each chapter of this book introduces several key learning items. This exercise lists a set of key self-assessment questions that are relevant to the particular chapter. For this self-assessment exercise, you should try to answer these self-assessment questionnaires to the best of your ability, without looking back at the chapters or your notes. There will be no written answers to these questions in this book; if you are not certain, you should go back and refresh on the key topics. This might seem overwhelming to you initially, but as you read through each new chapter you will become more familiar and comfortable with them.

. Note your answer and try to reference key words in the answer from the relevant chapter. Virtual Port Channel (vPC) would apply to Chapter 1. For example. You might or might not find the answer explicitly. For example.Think of each of these open self-assessment questions as being about the chapters you’ve read. number 2 is about Cisco Nexus Product Family. number 1 is about Data Center Network Architecture. and so on. but the chapter is structured to give you the knowledge and understanding to be able to discuss and explain these self-assessment questions.

Part II: Data Center Storage-Area Networking
Exam topics covered in Part II:
Describe the Network Architecture for the Storage Area Network
Describe the Edge/Core Layers of the SAN
Describe Fibre Channel Storage Networking
Describe Initiator Target
Describe the Different Storage Array Connectivity
Describe VSAN
Describe Zoning
Describe Basic SAN Connectivity
Describe the Cisco MDS Product Family
Describe the Cisco MDS Multilayer Directors
Describe the Cisco MDS Fabric Switches
Describe the Cisco MDS Software and Storage Services
Describe the Cisco Prime Data Center Network Management
Describe Storage Virtualization
Describe LUN Storage Virtualization
Describe Redundant Array of Inexpensive Disk (RAID)
Describe Virtualizing SANs (VSAN)
Describe Where the Storage Virtualization Occurs
Perform Initial Setup on Cisco MDS Switch
Verify SAN Switch Operations
Verify Name Server Login
Configure and Verify VSAN
Configure and Verify Zoning
Chapter 6: Data Center Storage Architecture
Chapter 7: Cisco MDS Product Family
Chapter 8: Virtualizing Storage
Chapter 9: Fibre Channel Storage-Area Networking

Chapter 6. Data Center Storage Architecture
This chapter covers the following exam topics:
Describe the Network Architecture for the Storage-Area Network
Describe the Edge/Core Layers of the SAN
Describe Fibre Channel Storage Networking
Describe Initiator Target
Describe the Different Storage Array Connectivity
Describe VSAN
Describe Zoning
Describe Basic SAN Connectivity
Every day, thousands of devices are newly connected to the Internet. Devices previously only plugged into a power outlet are now connected to the Internet and are sending and receiving tons of data at scale. Gartner predicts that worldwide consumer digital storage needs will grow from 329 exabytes in 2011 to 4.1 zettabytes in 2016. Powerful technology trends include the dramatic increase in processing power, storage, and bandwidth at ever-lower costs; the rapid growth of cloud, social media, and mobile computing; the ability to analyze Big Data and turn it into actionable information; and an improved capability to combine technologies (both hardware and software) in more powerful ways. Companies are searching for more ways to efficiently manage expanding volumes of data and to make that data accessible throughout the enterprise data centers. This demand is pushing the move of storage into the network. Storage-area networks (SAN) are the leading storage infrastructure of today. SANs offer simplified storage management, scalability, flexibility, and availability, and improved data access, movement, and backup.
This chapter discusses the function and operation of the data center storage-networking technologies. It goes directly into the edge/core layers of the SAN design and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification. It covers Fibre Channel protocol and operations, and it compares Small Computer System Interface (SCSI), Fibre Channel, and network-attached storage (NAS) connectivity for remote server storage.
"Do I Know This Already?" Quiz
The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 6-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 6-1 "Do I Know This Already?" Section-to-Question Mapping
Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.
1. Which of the following options describe advantages of storage-area network (SAN)? (Choose all the correct answers.)
a. Consolidation
b. Storage virtualization
c. Business continuity
d. Secure access to all hosts
e. None
2. Which of the following options describe advantages of block-level storage systems? (Choose all the correct answers.)
a. Block-level storage systems are very popular with storage area networks.
b. Block-level storage systems are well suited for bulk file storage.
c. Block-level storage systems are generally inexpensive when compared to file-level storage systems.
d. Each block or storage volume can be treated as an independent disk drive and is controlled by an external server OS.
e. They can support external boot of the systems connected to them.
3. Which of the following protocols are file based? (Choose all the correct answers.)
a. CIFS
b. Fibre Channel

c. SCSI
d. NFS
4. Which options describe the characteristics of Tier 1 storage? (Choose all the correct answers.)
a. Integrated large-scale disk array.
b. Centralized controller and cache system.
c. Back-up storage product.
d. It is used for mission-critical applications.
5. Which of the following options should be taken into consideration during SAN design?
a. Port density and topology requirements
b. Device performance and oversubscription ratios
c. Traffic management
d. Low latency
6. Which of the following options are correct for VSANs? (Choose all the correct answers.)
a. An HBA or storage device can belong to multiple VSANs.
b. Membership is typically defined using the VSAN ID to Fx ports.
c. An HBA or a storage device can belong to only a single VSAN—the VSAN associated with the Fx port.
d. On a Cisco MDS switch, one can define 4096 VSANs.
7. Which of the following options are correct for Fibre Channel addressing?
a. Every HBA, switch, gateway, array controller, and Fibre Channel disk drive has a single unique nWWN.
b. A dual-ported HBA has three WWNs: one nWWN, and one pWWN for each port.
c. The arbitrated loop physical address (AL-PA) is a 16-bit address.
d. The domain ID is an 8-bit field, but only 239 domains are available to the fabric.
8. Which type of port is used to create an ISL on a Fibre Channel SAN?
a. E
b. F
c. TN
d. NP
9. Which process enables an N Port to exchange information about ULP support with its target N Port, to ensure that the initiator and target process can communicate?
a. FLOGI
b. PLOGI
c. PRLI
d. PRLO
10. Which mapping table is inside the front-end array controller and determines which LUNs are advertised through which storage array ports?

a. LUN mapping
b. LUN masking
c. Zoning
d. LUN zoning
e. No correct answer exists
Foundation Topics
What Is a Storage Device?
Data is stored on hard disk drives that can be both read and written on. A hard disk drive (HDD) is a data storage device used for storing and retrieving digital information using rapidly rotating disks (platters) coated with magnetic material. An HDD retains its data even when powered off. Data is read in a random-access manner, meaning individual blocks of data can be stored or retrieved in any order rather than sequentially.
IBM introduced the first HDD in 1956, and HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDD units; most current units are manufactured by Seagate, Toshiba, and Western Digital. As of 2014, the primary competing technology for secondary storage is flash memory in the form of solid-state drives (SSD). HDDs are expected to remain the dominant medium for secondary storage due to predicted continuing advantages in recording capacity, price per unit of storage, write latency, and product lifetime. SSDs are replacing HDDs where speed, power consumption, and durability are more important considerations.
The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB, where 1 gigabyte = 1 billion bytes). HDDs are accessed over one of a number of bus types, including (as of 2011) parallel ATA (PATA, also called IDE or EIDE, described before the introduction of SATA as ATA), Serial ATA (SATA), SCSI, Serial Attached SCSI (SAS), and Fibre Channel.
The basic interface for all modern drives is straightforward. The drive consists of a large number of sectors (512-byte blocks), each of which can be read or written. The sectors are numbered from 0 to n – 1 on a disk with n sectors; 0 to n – 1 is thus the address space of the drive. Thus, we can view the disk as an array of sectors. Multisector operations are possible; indeed, many file systems will read or write 4KB at a time (or more). Depending on the methods that are used to run those tasks, the read and write functions can be faster or slower.
A platter is a circular hard surface on which data is stored persistently by inducing magnetic changes to it. A disk can have one or more platters; each platter has two sides, each of which is called a surface. These platters are usually made of some hard material, such as aluminum, and then coated with a thin magnetic layer that enables the drive to persistently store bits even when the drive is powered off. The platters are all bound together around the spindle, which is connected to a motor that spins the platters around (while the drive is powered on) at a constant (fixed) rate.

The rate of rotation is often measured in rotations per minute (RPM), and typical modern values are in the 7,200 RPM to 15,000 RPM range. Note that we will often be interested in the time of a single rotation; for example, a drive that rotates at 10,000 RPM means that a single rotation takes about 6 milliseconds (6 ms).
Data is encoded on each surface in concentric circles of sectors; we call one such concentric circle a track. There are usually 1024 tracks on a single disk, and all corresponding tracks from all disks define a cylinder (see Figure 6-1). Cache is just some small amount of memory (usually around 8 or 16 MB) that the drive can use to hold data read from or written to the disk. For example, when reading a sector from the disk, the drive might decide to read in all of the sectors on that track and cache them in its memory; doing so enables the drive to quickly respond to any subsequent requests to the same track.
Figure 6-1 Components of Hard Disk Drive
A specific sector must be referenced through a three-part address composed of cylinder/head/sector information. Although the internal system of cylinders, tracks, and sectors is interesting, it is also not used much anymore by the systems and subsystems that use disk drives. Cylinder, track, and sector addresses have been replaced by a method called logical block addressing (LBA), which makes disks much easier to work with by presenting a single flat address space. To a large degree, logical block addressing facilitates the flexibility of storage networks by allowing many types of disk drives to be integrated more easily into a large heterogeneous storage environment.
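To make the arithmetic concrete, the following short Python sketch computes the rotation time for a given RPM and performs the classic CHS-to-LBA conversion. It is illustrative only and is not part of the exam topics; the geometry values (16 heads per cylinder, 63 sectors per track) are common legacy defaults used here purely as an example.

# Illustrative numbers only; real drives report their own geometry and capacity.
def rotation_time_ms(rpm):
    """Time for one full platter rotation, in milliseconds."""
    return 60_000.0 / rpm

def chs_to_lba(c, h, s, heads_per_cylinder, sectors_per_track):
    """Classic CHS-to-LBA conversion; CHS sector numbering starts at 1."""
    return (c * heads_per_cylinder + h) * sectors_per_track + (s - 1)

print(rotation_time_ms(10_000))        # 6.0 ms, matching the 10,000 RPM example
print(chs_to_lba(0, 0, 1, 16, 63))     # LBA 0: the first sector of the drive
print(chs_to_lba(2, 5, 9, 16, 63))     # LBA 2339: an arbitrary interior sector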

The most widespread standard for configuring multiple hard drives is Redundant Array of Inexpensive/Independent Disks (RAID), which comes in several standard configurations and nonstandard configurations. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the specific level of redundancy and performance required. The different schemes or architectures are named by the word RAID followed by a number (RAID 0, RAID 1, and so on). Each scheme provides a different balance between the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors, as well as whole disk failure. In Chapter 8, "Virtualizing Storage," all the RAID levels are discussed in great detail.
JBOD (derived from "just a bunch of disks") is an architecture involving multiple hard drives, while making them accessible either as independent hard drives or as a combined (spanned) single logical volume with no actual RAID functionality. Hard drives may be handled independently as separate logical volumes, or they may be combined into a single logical volume using a volume manager.
Generally, a disk array provides increased availability, resiliency, and maintainability by using existing components (controllers, power supplies, fans, disk enclosures, access ports, and so on), often up to the point where all single points of failure (SPOF) are eliminated from the design. Additionally, disk array components are often hot swappable. Disk arrays are divided into categories:
Network attached storage (NAS) arrays are based on file-level storage. In this type of storage, the storage disk is configured with a particular protocol (such as NFS, CIFS, and so on), and files are stored and accessed from it as such, in bulk. It stores files and folders and is visible as such to both the systems storing the files and the systems accessing them. The file-level storage device itself can generally handle operations such as access control, integration with corporate directories, and the like. Advantages of file-level storage systems are the following: A file-level storage system is simple to implement and simple to use. It can be configured with common file-level protocols like NTFS (Windows), NFS (Linux), and so on. File-level storage systems are well suited for bulk file storage, and they are generally inexpensive when compared to block-level storage systems.
Storage-area network (SAN) arrays are based on block-level storage. The raw blocks (storage volumes) are created, and each block can be controlled like an individual hard drive. Typically, these blocks are controlled by the server-based operating systems, and each block or storage volume can be individually formatted with the required file system.
The following are advantages of block-level storage systems:
Block-level storage systems offer better performance and speed than file-level storage systems.
Each block or storage volume can be treated as an independent disk drive and is controlled by an external server OS.
Each block or storage volume can be formatted with the file system required by the application (NFS/NTFS/SMB).
Block-level storage systems are very popular with SANs.
Block-level storage systems are more reliable, and their transport systems are very efficient.
Block-level storage can be used to store files and also provide the storage required for special applications like databases and Virtual Machine File Systems (VMFS).
They can support external boot of the systems connected to them.

Primary vendors of storage systems include EMC Corporation, Hitachi Data Systems, IBM, NetApp, Oracle Corporation, Hewlett-Packard, Dell, Infortrend, and other companies that often act as OEMs for the previously mentioned vendors and do not themselves market the storage components they manufacture.
Servers are connected to the connection port of the disk subsystem using standard I/O techniques such as Small Computer System Interface (SCSI), Fibre Channel, or Internet SCSI (iSCSI) and can thus use the storage capacity that the disk subsystem provides (see Figure 6-2). The internal structure of the disk subsystem is completely hidden from the server, which sees only the hard disks that the disk subsystem provides to the server. The controller of the disk subsystem must ultimately store all data on physical hard disks.
Figure 6-2 Servers Connected to a Disk Subsystem Using Different I/O Techniques
Standard I/O techniques such as SCSI, Fibre Channel, Serial Attached SCSI (SAS), and increasingly Serial ATA (SATA) are being used for internal I/O channels between connection ports and controllers, as well as between controllers and internal hard disks. Sometimes, however, proprietary—that is, manufacturer-specific—I/O techniques are used.
The I/O channels can be designed with built-in redundancy to increase the fault-tolerance of a disk subsystem. There are four main I/O channel designs:
Active: In active cabling, the individual physical hard disks are connected via only one I/O channel. If this access path fails, it is no longer possible to access the data.
Active/passive: In active/passive cabling, the individual hard disks are connected via two I/O channels. In normal operation the controller communicates with the hard disks via the first I/O channel, and the second I/O channel is not used. In the event of the failure of the first I/O channel, the disk subsystem switches from the first to the second I/O channel.
Active/active (no load sharing): In this cabling method, the controller uses both I/O channels in normal operation. The hard disks are divided into two groups: in normal operation the first group is addressed via the first I/O channel and the second via the second I/O channel. If one I/O channel fails, both groups are addressed via the other I/O channel.
Active/active (load sharing): In this approach, all hard disks are addressed via both I/O channels. In normal operation the controller divides the load dynamically between the two I/O channels so that the available hardware can be optimally utilized. If one I/O channel fails, the communication goes through the other channel only.
Direct-attached storage (DAS) is often implemented within a parallel SCSI implementation. To access data with DAS, a user must go through some sort of front-end network. DAS is commonly described as captive storage: devices in a captive storage topology do not have direct access to the storage network and do not support efficient sharing of storage. DAS devices provide little or no mobility to other servers and little scalability; they limit file sharing and can be complex to implement and manage. DAS devices also require resources on the host and spare disk systems that cannot be used on other systems, for example, to support data backups.
A tape drive is a data storage device that reads and writes data on a magnetic tape. Magnetic tape data storage is typically used for offline, archival data storage. A tape drive provides sequential access storage, unlike a disk drive, which provides random access storage. A disk drive can move to any position on the disk in a few milliseconds, but a tape drive must physically wind tape between reels to read any one particular piece of data. As a result, tape drives have very slow average seek times for sequential access after the tape is positioned. However, tape drives can stream data very quickly; for example, as of 2010, Linear Tape-Open (LTO) supported continuous data transfer rates of up to 140 MBps, comparable to hard disk drives.
A tape library, sometimes called a tape silo, tape robot, or tape jukebox, is a storage device that contains one or more tape drives, a number of slots to hold tape cartridges, a bar-code reader to identify tape cartridges, and an automated method for loading tapes (a robot). These devices can store immense amounts of data, currently ranging from 20 terabytes up to 2.1 exabytes of data, or multiple thousand times the capacity of a typical hard drive and well in excess of capacities achievable with network attached storage.
What Is a Storage-Area Network?
The Storage Networking Industry Association (SNIA) defines the storage-area network (SAN) as a network whose primary purpose is the transfer of data between computer systems and storage elements and among storage elements. A SAN consists of a communication infrastructure, which provides physical connections, and a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure and robust. The term SAN is usually (but not necessarily) identified with block I/O services rather than file-access services.
The I/O channels can be designed with built-in redundancy to increase the fault-tolerance of a disk subsystem.1 exabytes of data or multiple thousand times the capacity of a typical hard drive and well in excess of capacities achievable with network attached storage.

and business continuity. It also eliminates any restriction to the amount of data that a server can access. and disaster tolerance. Storage virtualization helps in making complexity nearly transparent and can offer a composite . It eliminates the traditional dedicated connection between a server and storage. Instead. tape. Figure 6-3 portrays a sample SAN topology. A SAN allows an any-to-any connection across the network by using interconnect elements such as switches and directors. and the concept that the server effectively owns and manages the storage devices. In addition. including disk.services. the storage utility might be located far from the servers that it uses. Consolidation is concentrating the systems and resources into locations with fewer. Figure 6-3 Storage Connectivity The key benefits that a SAN might bring to a highly data-dependent business infrastructure can be summarized into three concepts: simplification of the infrastructure. centralized storage management tools can help improve scalability. Additionally. A SAN is a specialized. and optical storage. a SAN introduces the flexibility of networking to enable one server or many heterogeneous servers to share a common storage utility. A network might include many storage devices. high-speed network that attaches servers and storage devices. The simplification of the infrastructure consists of six main areas. servers and storage pools that can help increase IT efficiency and simplify the infrastructure. availability. information life-cycle management (ILS). but more powerful. currently limited by the number of storage devices that are attached to the individual server.

and bridges. Automation is choosing storage components with autonomic capabilities. The committee that is standardizing Fibre Channel is the INCITS Fibre Channel (T11) Technical Committee. gateways. It gives you a solid framework to lean on in times of crisis and provides stability and security. it is about devising plans and strategies that will enable you to continue your business operations and enable you to recover quickly and effectively from any type of disruption. access to mainframe storage (FICON). routers. SANs play a key role in the business continuity. switches. It is the deployment of these different types of interconnect entities that enables Fibre Channel networks of varying scale to be built. electrical interface. and storage . We talk about storage virtualization in detail in Chapter 8. plus all control software. arbitrated loop. How to Access a Storage Device Each new storage technology introduced a new physical interface. and networking (TCP/IP). storage devices. computer systems. Given this definition.view of storage assets. storage administrators might be able to spend more time on critical. including access to open system storage (FCP). a Fibre Channel network might be composed of many types of interconnect entities. By deploying a consistent and safe infrastructure. Business continuity (BC) is about building and improving resilience in your business. The ILM process manages this information in a manner that optimizes storage and maintains a high level of access at the lowest cost. Integrated storage environments simplify system management tasks and improve security. storage devices. then. which can improve availability and responsiveness and help protect data as storage needs grow. which communicates over a network. from conception until intentional disposal. higher-level tasks that are unique to the company’s business mission. Storage subsystems. embedding BC into your business is proven to bring business benefits. As soon as day-today tasks are automated. SANs make it possible to meet any availability requirements. As the name suggests. several components can be used to build a SAN. In fact. hubs. including directors. so any combination of devices that can interoperate is likely to be used. Fibre Channel directors can be introduced to facilitate a more flexible and fault-tolerant configuration. When all servers have secure access to all data. Information life-cycle management (ILM) is a process for managing information through its life cycle. Depending on the implementation. whatever its size or cause. and appliances. your infrastructure might be better able to respond to the information needs of your users. A storage system consists of storage elements. after that analysis is complete. and server systems can be attached to a Fibre Channel SAN. it’s about identifying your key products and services and the most urgent activities that underpin them. and switched topologies with various copper and optical links that are running at speeds from 1 Gbps to 16 Gbps. Fibre Channel supports point-to-point. Fibre Channel is a serial I/O interconnect that is capable of supporting multiple protocols. Each component that composes a Fibre Channel SAN provides an individual management capability and participates in an often-complex end-to-end management environment. As SANs increase in size and complexity. In smaller SAN environments you can deploy hubs for Fibre Channel arbitrated loop topologies. it is a network. 
or switches and directors for Fibre Channel switched fabric topologies. A SAN implementation makes it easier to manage the information life cycle because it integrates applications and data into a single-view system in which the information resides.

the formats can be fixed-length or variable length. In general. are sequences of bytes or bits. SCSI is an American National Standards Institute (ANSI) standard that is one of the leading I/O buses in the computer industry. although the block size in file systems may be a multiple of the physical block size. By separating the data into individual pieces and giving each piece a name. Since then. SCSI has been continuously developed. the details vary depending on the particular system. Most file systems are based on a block device. SCSI is a block-level I/O protocol for writing and reading data blocks to and from a storage device. . others provide file access via a network protocol. a block size. with different physical organizations or padding mechanisms. Data transfer is also governed by standards. In this chapter we will be categorizing them in three major groups: Blocks. Files are granular containers of data created by the system or an application. the information is easily separated and identified. Different methods to access records may be provided. which comes in a number of variants. metadata may be associated with the file records to define the record length. or the data might be part of the record. The SCSI bus is a parallel bus. The structure and logic rule used to manage the groups of information and their names is called a file system. by key. The first version of the SCSI standard was released in 1986. sequential. There are several record formats. Some file systems are used on local data storage devices. This type of structured data is called blocked. The intelligent interface model abstracts device-specific details from the storage protocol and decouples the storage protocol from the physical and electrical interfaces. as shown in Table 6-2. which is a level of abstraction for the hardware responsible for storing and retrieving specified blocks of data. This enables multiple storage technologies to use a common storage protocol. sometimes called physical records. Records are similar to a file system.protocol. usually containing some whole number of records having a maximum length. Block-Level Protocols Block-oriented protocols (also known as block-level protocols) read and write individual fixedlength blocks of data. or by record number. for example.

INCITS is accredited by and operates under rules approved by the American National Standards Institute (ANSI). producers. These rules are designed to ensure that voluntary standards are developed by the consensus of directly and materially affected interests. Drafts are sent to INCITS for approval. and users for the creation and maintenance of formal daily IT standards. The standard process is that a technical committee (T10.Table 6-2 SCSI Standards Comparison The International Committee for Information Technology Standards (INCITS) is the forum of choice for information technology developers. T13) prepares drafts. ANSI promotes American National Standards to ISO as a joint technical committee member (JTC-1). The Information Technology Industry Council (ITI) sponsors INCITS (see Figure 6-4). After the drafts are approved by INCITS. drafts become standards and are published by ANSI. T11. Figure 6-4 Standard Groups: Storage .

Logical units are architecturally independent of target ports and can be accessed through any of the target’s ports.As a medium. Table 6-2 lists the SCSI standards. A so-called daisy chain can connect up to 16 devices together. via a LUN. Figure 6-5 SCSI Commands. a disk drive might use a single LUN. Essentially. SCSI defines a parallel bus for the transmission of data with additional lines for the control of communication. Status. An . LUN 0. a LUN is an access point for exchanging commands and status information between initiators and targets. and the LUN is a way to identify SCSI black boxes. Also note that array-based replication software requires a storage controller in the initiating storage array to act as initiator both externally and internally. Thus. a logical unit is a virtual machine (or virtual controller) that handles SCSI communications on behalf of real or virtual storage devices in a target. The process of provisioning storage in a SAN storage subsystem involves defining a LUN on a particular target port and then assigning that particular target/LUN pair to a specific logical unit. the SCSI protocol is half-duplex by design and is considered a command/response protocol. A SCSI controller in a modern storage array acts as a target externally and acts as an initiator internally. SCSI targets have logical units that provide the processing context for SCSI commands. For instance. One SCSI device (the initiator) initiates communication with another SCSI device (the target) by issuing a command. and might optionally support additional LUNs. numerous cable and plug types have been defined that are not directly compatible with one another. The initiating device is usually a SCSI controller. SCSI storage devices typically are called targets (see Figure 6-5). Over time. The bus can be realized in the form of printed conductors on the circuit board or as a cable. A target must have at least one LUN. but SCSI operates as a master/slave model. A logical unit can be thought of as a “black box” processor. Commands received by targets are directed to the appropriate logical unit by a task router in the target controller. and Block Data Between Initiators and Targets The logical unit number (LUN) identifies a specific logical unit (virtual controller) in a target. All SCSI devices are intelligent. so SCSI controllers typically are called initiators. Although we tend to use the term LUN to refer to a real or virtual storage device. to which a response is expected. whereas a subsystem might allow hundreds of LUNs to be defined.

The SCSI protocol permitted only eight IDs. During the arbitration of the bus. target. the IDs 7 to 0 should retain the highest priority so that the IDs 15 to 8 have a lower priority (see Figure 6-6). and LUN triad is defined from parallel SCSI technology. The bus. this can lead to devices with lower priorities never being . and LUN. Figure 6-6 SCSI Addressing Devices (servers and storage devices) must reserve the SCSI bus (arbitrate) before they can send data through it. The target represents a single disk controller on the string. A server can be equipped with several SCSI controllers. For reasons of compatibility. For instance. a logical unit could be accessed through LUN 1 on Port 0 of a target and also accessed as LUN 8 on port 1 of the same target. the operating system must note three things for the differentiation of devices: controller ID. The bus represents one of several potential SCSI interfaces that are installed in the host. SCSI ID. Therefore. a maximum of 8 or 16 IDs are permitted per SCSI bus. If the bus is heavily loaded. Depending on the version of the SCSI standard. each supporting a separate string of disks. with the ID 7 having the highest priority. More recent versions of the SCSI protocol permit 16 different IDs.individual logical unit can be represented by multiple LUNs on different ports. the device that has the highest priority SCSI ID always wins.

cables. It is often also assumed that SCSI provides in-order delivery to maintain data integrity. The SCSI version 3 (SCSI-3) application client resides in the host and represents the upper-layer application. SCSI-3 is an all-encompassing term that refers to a collection of standards that were written as a result of breaking SCSI-2 into smaller. did not. hierarchical modules that fit within a general framework called SCSI Architecture Model (SAM). and TCP/IP I/O Networking It is important to understand that while SCSI-1 and SCSI-2 represent actual standards that completely define SCSI (connectors. even faster bus types are introduced. In other words. the SCSI protocol assumes that proper ordering is provided by the underlying connection technology. so it has different functional components (see Figure 6-7). These functions assemble raw data blocks into a coherent file that an application can manipulate. In SCSI3. and command Protocol). therefore. Although this information might be stored on disk or tape media in the form of data blocks. and operating system I/O requests. was not needed by the SCSI protocol layer. such as UDP. responding to requests.allowed to send data.” The SCSI protocol layer sits between the operating system and the peripheral resources. In-order delivery was traditionally provided by the SCSI bus and. The SCSI-3 device server sits in the target device. retrieval of the file requires a hierarchy of functions. This is the main reason why TCP was considered essential for the iSCSI protocol that transports SCSI commands and data transfers over IP networking equipment: TCP provided ordering while other upper-layer protocols. The SCSI arbitration procedure is therefore “unfair. Applications typically access data as files or records. file system. signals. along with serial SCSI buses that reduce the cabling overhead . In SCSI-3. the SCSI protocol does not provide its own reordering mechanism and the network is responsible for the reordering of transmission frames that are received out of order. Fibre Channel I/O Channel. Figure 6-7 SCSI I/O Channel.

but distance is limited to 25 m and is half-duplex. By eliminating the need for a second network technology just for storage. Often. where the SAN switch devices allow the mixture of open systems and mainframe traffic. solid state drives (SSD). ISCSI is SCSI over TCP/IP: Internet Small Computer System Interface (iSCSI) is a transport protocol that carries SCSI commands from an initiator to a target.and allow a higher maximum bus length. the mix of sequential and random access patterns. IOPS (Input/output Operations Per Second. Fibre Channel has a lossless delivery mechanism by using bufferto-buffer credits (BB_Credits). iSCSI has the potential to lower the costs of deploying networked storage. Therefore. For HDDs and similar electromechanical storage devices. SCSI is carried in the payload of a Fibre Channel frame between Fibre Channel ports. the traditional IBM Enterprise Systems Connection (ESCON) architecture. the sequential IOPS numbers (especially when using a large block size) typically indicate the maximum sustained bandwidth that the storage device can handle. It is a data storage networking protocol that transports standard SCSI requests over the standard Transmission Control Protocol/Internet Protocol (TCP/IP) networking technology. Latency is low. for both storage and data networks. the random IOPS numbers are primarily dependent upon the storage device’s random seek time. the demands and needs of the market push for new technologies. structured applications such as database applications that require many I/O operations per second (IOPS) to achieve high performance. A SAN is Fibre Channel-based (FC-based). sequential IOPS are reported as a simple MB/s number as follows: IOPS × TransferSizeInBytes = BytesPerSec (and typically this is converted to megabytes per sec) SCSI messages and data can be transported in several ways: Parallel SCSI cable: This transport is mainly used in traditional deployments. In particular. The specific number of IOPS possible in any system configuration will vary greatly. FICON is a prerequisite for IBM z/OS systems to fully participate in a heterogeneous SAN. The SCSI protocol is suitable for block-based. and the data block sizes. rather than a replacement for. whereas for SSDs and similar solid state storage devices. It is at this point where the Fibre Channel model is introduced. pronounced eye-ops) is a common performance measurement used to benchmark computer storage devices like hard disk drives (HDD). Because it uses TCP/IP. and storage-area networks (SAN). Latency is low and bandwidth is high (16 GBps). ISCSI enables the implementation of IP-based SANs. FICON is a protocol that uses Fibre Channel as . there is always a push for faster communications without limitations on distance or on the number of connected devices. iSCSI is also suited to run over almost any physical network. depending on the variables the tester enters into the program. As always. Fibre Channel cable: This transport is the basis for a traditional SAN deployment. including the balance of read and write operations. On both types of storage devices. the random IOPS numbers are primarily dependent upon the storage device’s internal controller and memory interface speeds. enabling clients to use the same networking technologies. Fibre Channel connection (FICON) architecture: An enhancement of. so data can flow in only one direction at a time. the number of worker threads and queue depth. which are explained in detail later in this chapter.

Fibre . at a relatively low cost. FCoE is discussed in detail in Chapters 10 and 11. Because most organizations already have an existing IP infrastructure. TCP/IP services are used to establish connectivity between remote SANs. Figure 6-8 portrays networking stack comparison for all block I/O protocols. Fibre Channel over IP (FCIP): Also known as Fibre Channel tunneling or storage tunneling. is enormous. At the time of writing this book.its physical medium. the attraction of being able to link geographically dispersed SANs. FICON channels can achieve data rates up to 850 MBps and extend the channel distance (up to 100 km).) HBAs are I/O adapters that are designed to maximize performance by performing protocol-processing functions in silicon. it allows deployments of Fibre Channel fabrics by using IP tunneling. HBAs are roughly analogous to network interface cards (NIC). It is a method to allow the transmission of Fibre Channel information to be tunneled through the IP network. Any congestion control and management and data error and data loss recovery is handled by TCP/IP services and does not affect Fibre Channel fabric services. Fibre Channel over Ethernet (FCoE): This transport replaces the Fibre Channel cabling with 10 Gigabit Ethernet cables and provides lossless delivery over unified I/O. The major consideration with FCIP is that it does not replace Fibre Channel with IP. FICON can also increase the number of control unit images per link and the number of device addresses per control unit link. Another possible assumption is that the only need for the IP connection is to facilitate any distance requirement that is beyond the current scope of an FCP SAN. but HBAs are optimized for SANs and provide features that are specific to storage. (See Figure 6-9. The protocol can also retain the topology and switch management characteristics of ESCON. The assumption that this might lead to is that the industry decided that Fibre Channel-based SANs are more than appropriate. FCIP encapsulates Fibre Channel block data and then transports it over a TCP socket. Figure 6-8 Networking Stack Comparison for All Block I/O Protocols Fibre Channel provides high-speed transport for SCSI payload via Host Bus Adapter (HBA).

and combines best attributes of a channel and a network together. support for multiple protocols. including addressing for up to 16 million nodes. The diagram shows the Fibre Channel. and error correction. host speeds of 100 to 1600 MBps (1–16 Gbps). . FC-2. HIPPI data framing. such as SCSI-3. and FC-3) and one upper layer (FC-4). Offloading these functions is necessary to provide the performance that storage networks require. software drivers perform protocol-processing functions such as flow control. and Fibre Channel connection (FICON). Figure 6-9 Comparison Between Fibre Channel HBA and Ethernet NIC Figure 6-9 contrasts HBAs with NICs. and others. Internet Protocol (IP). illustrating that HBAs offload protocol-processing functions into silicon. The HBA offloads these protocol-processing functions onto the HBA hardware with some combination of an ASIC and firmware. HBAs are I/O adapters that are designed to maximize performance by performing protocol-processing functions in silicon. Figure 6-10 shows an overview of the Fibre Channel model. FC-4 is where the upper-level protocols are used. With NICs. HBAs are roughly analogous to NICs. sequencing. FC-1. loop (shared) and fabric (switched) transport. Fibre Channel Protocol (FCP) has an ANSI-based layered architecture that can be considered a general transport vehicle for upper-layer protocols (ULP) such as SCSI command sets. segmentation and reassembly. which is divided into four lower layers (FC-0. but HBAs are optimized for SANs and provide features that are specific to storage. IP.Channel overcomes many shortcomings of Parallel I/O.

Figure 6-10 Fibre Channel Protocol Architecture
These are the main functions of each Fibre Channel layer:
FC-4 upper-layer protocol (ULP) mapping: Provides protocol mapping to identify the ULP that is encapsulated into a protocol data unit (PDU) for delivery to the FC-2 layer. FCP is the FC-4 mapping of SCSI-3 onto Fibre Channel. Note that FCP does not define its own header; instead, fields within the FC-2 header are used by FCP.
FC-3 generic services: Provides the Fibre Channel Generic Services (FC-GS) that are required for fabric management, including protocols for hunt group, alias address definition, multicast management, and stacked connect-requests. Specifications exist here but are rarely implemented.
FC-2 framing and flow control: Provides the framing and flow control that is required to transport the ULP over Fibre Channel. FC-2 functions include several classes of service, frame format definition, exchange management, sequence disassembly and reassembly, address assignment, and error control.
FC-1 encoding: Defines the transmission protocol that includes the serial encoding, decoding, and error control. FC-1 variants up to and including 8 Gbps use the same encoding scheme (8B/10B) as GE fiber optic variants.
FC-0 physical interface: Provides physical connectivity, including cabling, connectors, and the electrical interface.
Fibre Channel is described in greater depth throughout this chapter.
In the initial FC specification, rates of 12.5 MBps, 25 MBps, 50 MBps, and 100 MBps were introduced on several different copper and fiber media. Additional rates were subsequently introduced, including 200 MBps and 400 MBps. Fibre Channel throughput is commonly expressed in bytes per second rather than bits per second: 100 MBps FC is also known as 1-Gbps FC, 200 MBps as 2-Gbps, and 400 MBps as 4-Gbps. Table 6-3 shows the evolution of Fibre Channel speeds.
The FC-PH specification defines baud rate as the encoded bit rate per second. The FC-PI specification redefines baud rate more accurately and states explicitly that FC encodes 1 bit per baud, which means the baud rate and raw bit rate are equal. 1-Gbps FC operates at 1.0625 GBaud, provides a raw bit rate of 1.0625 Gbps, and provides a data bit rate of 850 Mbps. 2-Gbps FC operates at 2.125 GBaud, provides a raw bit rate of 2.125 Gbps, and provides a data bit rate of 1.7 Gbps. 4-Gbps FC operates at 4.25 GBaud, provides a raw bit rate of 4.25 Gbps, and provides a data bit rate of 3.4 Gbps.
To derive ULP throughput, the FC-2 header and interframe spacing overhead must be subtracted. The basic FC-2 header adds 36 bytes of overhead, and inter-frame spacing adds another 24 bytes. Assuming the maximum payload (2112 bytes) and no optional FC-2 headers, the ULP throughput rate is 826.519 Mbps, 1.65304 Gbps, and 3.30608 Gbps for 1 Gbps, 2 Gbps, and 4 Gbps, respectively. These ULP throughput rates are available directly to SCSI.
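As a quick worked example, the following Python sketch (illustrative only, not from the exam blueprint) derives these ULP throughput rates from the baud rate, the 8B/10B encoding overhead, and the FC-2 framing overhead just described:

def fc_ulp_throughput_gbps(baud_rate_gbaud, payload=2112, fc2_header=36, interframe=24):
    """ULP throughput: strip 8B/10B encoding overhead, then frame overhead."""
    data_rate = baud_rate_gbaud * 8 / 10            # 8B/10B: 10 encoded bits carry 8 data bits
    return data_rate * payload / (payload + fc2_header + interframe)

for gbaud in (1.0625, 2.125, 4.25):                 # 1-, 2-, and 4-Gbps FC
    print(f"{gbaud} GBaud -> {fc_ulp_throughput_gbps(gbaud):.6f} Gbps")
# Prints approximately 0.826519, 1.653039, and 3.306077 Gbps, matching the
# 826.519 Mbps, 1.65304 Gbps, and 3.30608 Gbps figures quoted in the text.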

The 1 Gbps FC, 2 Gbps FC, 4 Gbps FC, and 8 Gbps FC designs all use 8B/10B encoding, and the 10G and 16GFC standards use 64B/66B encoding. Unlike the 10 Gbps FC standards, 16 Gbps FC provides backward compatibility with 4 Gbps FC and 8 Gbps FC.
Table 6-3 Evolution of Fibre Channel Speeds
File-Level Protocols
File-oriented protocols (also known as file-level protocols) read and write variable-length files. Files are segmented into blocks before being stored on disk or tape. Common Internet File System (CIFS) and Network File System (NFS) are file-based protocols that are used for reading and writing files across a network. CIFS is found primarily on Microsoft Windows servers (a Samba service implements CIFS on UNIX systems), and NFS is found primarily on UNIX and Linux servers.
The theory of client/server architecture is based on the concept that one computer has the resources that another computer requires. The system with the resources is called the server, and the system that requires the resources is called the client. Examples of resources are e-mail, database, and files; these resources can be made available through NFS. The client and the server communicate with each other through established protocols. A distributed (client/server) network might contain multiple servers and multiple clients, or multiple clients and one server. The configuration of the network depends on the resource requirement of the environment.
The benefits of client/server architecture include cost reduction because of hardware and space requirements. The local workstations do not need as much disk space because commonly used data

can be stored on the server. Other benefits include centralized support (backups and maintenance) that is performed on the server, to allow for easy recovery in the event of server failure.
NFS is a widely used protocol for sharing files across networks. NFS is stateless. The storage system provides services to the client, and the client can be one of many versions of the Unix or Linux operating system. These services include mount daemon (mountd), Status Monitor (sm_1_main), Network Lock Manager (nlm_main), NFS daemon (nfsd), quota daemon (rquot_1_main), and portmap (also known as rpcbind). Each service is required for successful operation of an NFS process. For example, a client cannot mount a resource if mountd is not running on the server. Similarly, if rpcbind is not running on the server, NFS communication cannot be established between the client and the server.
In Figure 6-11, the server in the network is a network appliance storage system. One benefit of NAS storage is that files can be shared easily between users within a workgroup.
Figure 6-11 Block I/O Versus File I/O
These file-based protocols are suitable for file-based applications: Microsoft Office, Microsoft SharePoint, and File and Print services. CIFS and NFS are verbose protocols that send thousands of messages between the servers and NAS devices when reading and writing files, and because CIFS and NFS are transported over TCP/IP, latency is high.
Block I/O and File I/O are complementary solutions for reading and writing data to and from storage devices. Block I/O uses the SCSI protocol to transfer data in 512-byte blocks. SCSI is an efficient protocol and is the accepted standard within the industry; it may be transported over many physical media. SCSI is mostly used in Fibre Channel SANs to transfer blocks of data between servers and storage arrays in a high-bandwidth, low-latency redundant SAN. Block I/O protocols are suitable for well-structured applications, such as databases that transfer small blocks of data when updating fields, records, and tables.
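The contrast between the two access methods can be sketched in a few lines of Python. This is illustrative only; the device and file paths are hypothetical, and opening raw block devices normally requires administrative privileges.

import os

BLOCK_SIZE = 512  # SCSI block I/O traditionally moves fixed 512-byte blocks

def read_block(device_path, lba, count=1):
    """Block I/O: address data by logical block number on a raw device."""
    fd = os.open(device_path, os.O_RDONLY)
    try:
        return os.pread(fd, count * BLOCK_SIZE, lba * BLOCK_SIZE)
    finally:
        os.close(fd)

def read_file(path):
    """File I/O: address data by name; the file system (or NAS appliance)
    maps the file to blocks behind the scenes."""
    with open(path, "rb") as f:
        return f.read()

# data = read_block("/dev/sdb", lba=2048)   # block-level view, as a SAN initiator sees storage
# data = read_file("/mnt/nas/report.docx")  # file-level view, as an NFS/CIFS client sees storage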

Network Attached Storage (NAS) appliances reside on the front-end. FCoE. and block-level data access applies to Fibre Channel (FC) back-end networks. and iSCSI the data exchange between servers and storage devices takes place in a block-based fashion. NAS servers are turnkey file servers. Putting It All Together Fibre Channel SAN. SCSI is limited by distance. loop or . SAN. file-level data access applies to IP front-end networks. NFS. Storage networks are more difficult to configure. resiliency. SMB (server message block—Microsoft NT). Fibre Channel offers a greater sustained data rate (1 GFC–16 GFC at the time of writing this book). and management flexibility. speed. On the other hand. IDE along with file-system formatting standards such as FAT. FCoE SAN. NAS transfers files or file fragments. Figure 6-12 DAS. number of storage devices per chain. CIFS (common Internet file-system) or NCP (Netware Core protocol—Novell) Back-end protocols include SCSI. iSCSI. SANs include the back end along with front-end components. Fibre Channel supplies optimal performance for the data exchange between server and storage device. and NAS Comparison At this stage. Storage networks can be realized with NAS servers by installing an additional LAN between the NAS server and the application servers. NAS servers have only limited suitability as data storage for databases due to lack of performance. and NAS are four techniques with which storage networks can be realized (see Figure 6-12). Therefore. you already figured out why Fibre Channel was introduced. FCoE. Fibre Channel was introduced to overcome these limitations. in Fibre Channel. NTFS. In contrast to NAS. Back-end data communication is hostto-storage or program-to-device. Front-end protocols include FTP. lack of device sharing. iSCSI SAN. and iSCSI.In the scenario portrayed in Figure 6-11. In contrast to Fibre Channel. Front-end data communications are considered host-to-host or client/server.

The Storage Networking Industry Association (SNIA) standards group defines tiered storage as “storage that is physically partitioned into multiple distinct classes based on price. Scale-up and Scale-out Storage Storage systems increase their capacity in one of two ways: by adding storage components to an existing system or by adding storage systems to an existing group of systems. nondisruptive device addition. and products designed with it are generally classified as monolithic storage. Data may be dynamically moved among classes in a tiered storage implementation . performance or other attributes. A scale-up solution is a way to increase only the capacity. Scale-up and scale-out storage architectures have hardware. This is the reason IT organizations tend to spend such a high percentage of their budget on storage every year. if necessary. Storage Architectures Storage systems are designed in a way that customers can purchase some amount of capacity initially with the option of adding more as the existing capacity fills up with data. An architecture that adds capacity by adding storage components to an existing system is called a scale-up architecture. Scale-up storage systems tend to be the least flexible and involve the most planning and longest service interruptions when adding capacity. and centralized management with local or remote access. Increasing storage capacity is an important task for keeping up with data growth. Tiered Storage Many storage environments support a diversity of needs and use disparate technologies that cause storage sprawl. A scaleout solution is like a set of multiple arrays with a single logical view. High availability with scale-up storage is achieved through component redundancy that eliminates single points of failure. or nodes. Products designed with this architecture are often classified as distributed. Scaling architectures determine how much capacity can be added. and by software upgrades processes that take several steps to complete. and how much work and planning is involved. allowing system updates to be rolled back.switched networks for device sharing and low latency. In a large-scale storage infrastructure. The cost of the added capacity depends on the price of the devices. how quickly it can be added. and maintenance costs. the impact on application availability. virtually unlimited devices in a fabric. In addition. which can be relatively high with some storage systems. facilities. distances of hundreds of kilometers or greater with extension devices. this environment yields a suboptimal storage design that can be improved only with a focus on data access characteristics analysis and management to provide optimum performance. to a group of systems is called a scale-out architecture. An architecture that adds capacity by adding individual storage systems. but not the number of the storage controllers. the lifespan of storage products built on these architectures is typically four years or less.

However. the overall management effort increases when managing storage capacity and storage performance needs across multiple storage classes. an optimal design keeps the active operational data in Tier 0 and Tier 1 and uses the benefits associated with a tiered storage approach mostly related to cost. such as the performance needs. and energy costs. Environmental savings. footprint. By introducing solid-state drive (SSD) storage as tier 0. system footprint. you might more efficiently address the highest performance needs while reducing the Enterprise class storage. because lower-tier storage is less expensive. are possible. and importance of data availability. . Table 6-4 lists the different storage tier levels. Figure 6-13 Cost Versus Performance in a Tiered Storage Environment Typically.” Tiered storage is a mix of high performing/high-cost storage with low performing/low-cost storage and placing data based on specific characteristics. you can see the concept and a cost versus performance relationship of a tiered storage environment. In Figure 6-13. such as energy. and cooling reductions. A tiered storage approach can provide the performance you need and save significant costs associated with storage. age.based on access activity or other considerations.

Table 6-4 Comparison of Storage Tier Levels .
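As a rough illustration of the placement decision discussed above, the following Python sketch assigns data to a tier based on access activity and criticality. The thresholds and tier labels are invented for illustration and are not taken from Table 6-4; a real tiering policy would be driven by the data access characteristics analysis described earlier.

# Hypothetical tiering policy; thresholds are illustrative only.
def place_data(accesses_per_day, mission_critical):
    """Pick a storage tier for a data set based on activity and criticality."""
    if mission_critical:
        return "Tier 0/1: SSD or enterprise disk array"
    if accesses_per_day > 100:
        return "Tier 2: midrange array on lower-cost disk"
    return "Tier 3: archival storage, such as tape"

print(place_data(5000, mission_critical=True))   # active operational data stays high
print(place_data(2, mission_critical=False))     # cold data moves to a low-cost class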

Because the traditional FC SAN design doubles the cost of network implementation.SAN Design SAN design doesn’t have to be rocket science. To achieve 99. many companies are actively seeking alternatives. It is also about making sure the network design and topology look as clean and functional one. Some companies take the same approach with their traditional IP/Ethernet networks. This is primarily motivated by a desire to achieve 99. Modern SAN design is about deploying ports and switches in a configuration that provides flexibility and scalability. or five years later as the day they were first deployed. each host and storage device is dual-attached to the network. and each end node (host or storage) is connected to both networks. and others are considering single path FC SANs. Figure 6-14 SAN A and SAN B FC Networks Infrastructures for data access necessarily include options for redundant server systems. but most do not for reasons of cost. Figure 6-14 illustrates typical dual path FC-SAN designs. Some companies are looking to iSCSI and FCoE as the answer. two.999 percent availability. Although .999 percent availability the network should be built with redundant switches that are not interconnected. The FC network is built on two separate networks (commonly called path A and path B). In the traditional FC SAN design.

server systems might not necessarily be thought of as storage elements, they are clearly key pieces of data access infrastructures. From a storage I/O perspective, servers are the starting point for most data access, and server filing systems are clearly in the domain of storage. Redundant server systems and filing systems are created through one of two approaches: clustered systems or server farms. Farms are loosely coupled individual systems that have common access to shared data, and clusters are tightly coupled systems that function as a single, fault-tolerant system.
Multipathing software depends on having redundant I/O paths between systems and storage. Multipathing establishes two or more SCSI communication connections between a host system and the storage it uses. If one of these communication connections fails, another SCSI communication connection is used in its place. This includes switches and network cables that are being used to transfer I/O data between a computer and its storage. In general, changing an I/O path involves changing the initiator used for I/O transmissions and, by extension, all downstream connections.
Figure 6-15 portrays the two types of multipathing: Active/Active, with balanced I/O over both paths (implementation specific), and Active/Passive, with I/O over the primary path that switches to the standby path upon failure.
Figure 6-15 Storage Multipathing Failover
A single multiport HBA can have two or more ports connecting to the SAN that can be used by multipathing software. However, although multiport HBAs provide path redundancy, most current multipathing implementations use dual HBAs to provide redundancy for the HBA adapter itself.
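The following minimal Python sketch models the active/passive failover behavior just described. The classes and path names are invented for illustration and do not represent any particular multipathing product.

class Path:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def send(self, io):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{io} sent via {self.name}"

class ActivePassiveMultipath:
    """I/O goes over the primary path; on failure, switch to the standby path."""
    def __init__(self, primary, standby):
        self.paths = [primary, standby]

    def send(self, io):
        for path in self.paths:               # try primary first, then standby
            try:
                return path.send(io)
            except ConnectionError:
                continue
        raise ConnectionError("all paths to the LUN have failed")

fabric_a = Path("HBA0 -> SAN A -> array port 0")
fabric_b = Path("HBA1 -> SAN B -> array port 1")
mpio = ActivePassiveMultipath(fabric_a, fabric_b)
print(mpio.send("WRITE LBA 2048"))            # uses the SAN A path
fabric_a.healthy = False                      # simulate a link or switch failure
print(mpio.send("WRITE LBA 2049"))            # fails over to the SAN B path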

However, although multiport HBAs provide path redundancy, most current multipathing implementations use dual HBAs to provide redundancy for the HBA adapter.

Multipathing is not the only automated way to recover from a network problem in a SAN. SAN switches use the Fabric Shortest Path First (FSPF) routing protocol to converge new routes through the network following a change to the network configuration. FSPF is the standard routing protocol used in Fibre Channel fabrics. FSPF automatically calculates the best path between any two devices in a fabric through dynamically computing routes, establishing the shortest and quickest path between any two devices. It also selects an alternative path in the event of failure of the primary path. Although FSPF itself provides for optimal routing between nodes, the Dijkstra algorithm on which it is commonly based has a worst-case running time that is the square of the number of nodes in the fabric.

Multipathing software in a host system can use new network routes that have been created in the SAN by switch routing algorithms, including after link or switch failures. As long as the new network route allows the storage path's initiator and LUN to communicate, the storage process uses the route. This depends on switches in the network recognizing a change to the network and completing their route convergence process prior to the I/O operation timing out in the multipathing software. Considering this, it could be advantageous to implement multipathing so that the timeout values for storage paths exceed the times needed to converge new network routes.

SAN Design Considerations

The underlying principles of SAN design are relatively straightforward: plan a network topology that can handle the number of ports necessary now and into the future; design a network topology with a given end-to-end performance and throughput level in mind, taking into account any physical requirements of a design; and provide the necessary connectivity with remote data centers to handle the business requirements of business continuity and disaster recovery. These underlying principles fall into five general categories:

Port density and topology requirements: Number of ports required now and in the future
Device performance and oversubscription ratios: Determination of what is acceptable and what is unavoidable
Traffic management: Preferential routing or resource allocation
Fault isolation: Consolidation while maintaining isolation
Control plane scalability: Reduced routing complexity

1. Port Density and Topology Requirements

The single most important factor in determining the most suitable SAN design is determining the number of end ports for now and over the anticipated lifespan of the design. As an example, the design for a SAN that will handle a network with 100 end ports will be very different from the design for a SAN that has to handle a network with 1500 end ports. From a design standpoint, it is typically better to overestimate the port count requirements than to underestimate them. Designing for a 1500-port SAN does not necessarily imply that 1500 ports need to be purchased initially.

It is about helping ensure that a design remains functional if that number of ports is attained, rather than later finding the design is unworkable, because redesigning and reengineering a network topology become both more time-consuming and more difficult as the number of devices on a SAN expands.

As a minimum, the lifespan for any design should encompass the depreciation schedule for equipment, typically three years or more. Preferably, a design should last longer than this.

For new environments, it is not difficult to plan based on an estimate of the immediate server connectivity requirements, coupled with an estimated growth rate of 30 percent per year. Where existing SAN infrastructure is present, it is more difficult to determine future port-count growth requirements; but once again, determining the approximate port count requirements is not difficult. You can use the current number of end-port devices and the increase in number of devices during the previous 6, 12, and 18 months as rough guidelines for the projected growth in number of end-port devices in the future. Figure 6-16 portrays the SAN major design factors.

Figure 6-16 SAN Major Design Factors

A design should also consider physical space requirements. For example, is the data center all on one floor? Is it all in one building? Is there a desire to use lower-cost connectivity options such as iSCSI for servers with minimal I/O requirements? Do you want to use IP SAN extension for disaster recovery connectivity? Any design should also consider increases in future port speeds, protocols, and densities.
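The growth-rate planning described above reduces to simple compounding. A small sketch, where the 30 percent rate comes from the text but the starting port count and three-year window are hypothetical:

# Project end-port counts at an assumed 30% yearly growth rate.
current_ports = 400          # hypothetical starting point
growth_rate = 0.30           # estimated growth per year (from the text)

for year in range(1, 4):     # a typical three-year depreciation window
    projected = current_ports * (1 + growth_rate) ** year
    print(f"year {year}: plan for ~{round(projected)} end ports")

# year 1: plan for ~520 end ports
# year 2: plan for ~676 end ports
# year 3: plan for ~879 end ports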

Although it is difficult to predict future requirements and capabilities, unused module slots in switches that have a proven investment protection record open the possibility of future expansion.

2. Device Performance and Oversubscription Ratios

Oversubscription, in a SAN switching environment, is the practice of connecting multiple devices to the same switch port to optimize switch use. Oversubscription is a necessity of any networked infrastructure and directly relates to the major benefit of a network—to share common resources among numerous clients. The higher the rate of oversubscription, the lower the cost of the underlying network infrastructure and shared resources. Because storage subsystem I/O resources are not commonly consumed at 100 percent all the time by a single client, multiple slower devices may fan in to a single port to take advantage of unused capacity. SAN switch ports are rarely run at their maximum speed for a prolonged period.

When considering all the performance characteristics of the SAN infrastructure and the servers and storage devices, two oversubscription metrics must be managed: IOPS and the network bandwidth capacity of the SAN. The two metrics are closely related, although they pertain to different elements of the SAN. IOPS performance relates only to the servers and storage devices and their ability to handle high numbers of I/O operations, whereas bandwidth capacity relates to all devices in the SAN, including the SAN infrastructure itself. On the server side, the required bandwidth is strictly derived from the I/O load, which is derived from factors including I/O size, percentage of reads versus writes, application I/O requests, and I/O service time from the target device. On the storage side, the supported bandwidth is again strictly derived from the IOPS capacity of the disk subsystem itself, including the system architecture, cache, disk controllers, and actual disks. Figure 6-17 portrays the major SAN oversubscription factors.

Note
A fan-out ratio is the relationship in quantity between a single port on a storage device and the number of servers that are attached to it. Fan-in is how many storage ports can be served from a single host channel.

Most major disk subsystem vendors provide guidelines as to the recommended fan-out ratio of subsystem client-side ports to server connections. These recommendations are often in the range of 7:1 to 15:1. Key factors in deciding the optimum fan-out ratio are server HBA queue depth, storage device input/output operations per second (IOPS), CPU capacity, and port throughput. It is important to know the fan-out ratio in a SAN design so that each server gets optimal access to storage resources. When the fan-out ratio is high and the storage array becomes overloaded, application performance will be affected negatively. Too low a fan-out ratio, however, results in an uneconomic use of storage. A fan-out ratio of storage subsystem ports can be achieved based on the I/O demands of various applications and server platforms.
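The fan-out check is simple arithmetic. A short sketch with made-up host and port counts, using the 7:1 to 15:1 guideline quoted above:

# Check a storage fan-out ratio against the vendor guideline range.
servers = 96                  # hypothetical host count
storage_ports = 8             # hypothetical array client-side ports

fan_out = servers / storage_ports
print(f"fan-out ratio: {fan_out:.0f}:1")          # 12:1

if 7 <= fan_out <= 15:
    print("within the commonly recommended 7:1 to 15:1 range")
elif fan_out > 15:
    print("too high: array may overload under peak I/O")
else:
    print("too low: uneconomic use of storage ports")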

Figure 6-17 SAN Oversubscription Design Considerations

In most cases, neither application server HBAs nor disk subsystem client-side controllers can handle full wire-rate sustained bandwidth. Although ideal-scenario tests can be contrived using larger I/Os, large CPUs, server-side resources, and sequential I/O operations to show wire-rate performance, this is far from a practical real-world implementation. In more common scenarios, application I/O patterns do not result in sustained full-bandwidth utilization. Because of this fact, oversubscription can be safely factored into SAN design. However, you must account for burst I/O traffic, which might temporarily require high-rate I/O service.

The general principle in optimizing design oversubscription is to group applications or servers that burst high I/O rates at different time slots within the daily production cycle. This grouping can examine either complementary application I/O profiles or careful scheduling of I/O-intensive activities such as backups and batch jobs. In this case, peak-time I/O traffic contention is minimized, and the SAN design oversubscription has little effect on I/O contention.

The best practice would be to build a SAN design using a topology that derives a relatively conservative oversubscription ratio (for example, 8:1), coupled with monitoring of the traffic on the switch ports connected to storage arrays and Inter-Switch Links (ISLs) to see whether bandwidth is a limiting factor. If bandwidth is not the limiting factor, application server performance is acceptable, and application performance can be monitored closely, the oversubscription ratio can be increased gradually to a level that maximizes performance while minimizing cost.
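The same arithmetic applies to the edge-to-core ratio discussed above. A sketch with hypothetical port counts and speeds:

# Edge-to-core oversubscription: host-facing bandwidth vs. ISL bandwidth.
host_ports, host_gbps = 48, 8     # hypothetical edge switch: 48 x 8G hosts
isl_ports, isl_gbps = 6, 8        # hypothetical uplinks: 6 x 8G ISLs

ratio = (host_ports * host_gbps) / (isl_ports * isl_gbps)
print(f"oversubscription ratio: {ratio:.0f}:1")   # 8:1

# 8:1 matches the conservative starting point suggested in the text;
# monitor ISL utilization before raising it further.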

3. Traffic Management

Are there any differing performance requirements for different application servers? Should bandwidth be reserved or preference be given to traffic in the case of congestion? Given two alternate traffic paths between data centers with differing distances, should traffic use one path in preference to the other? For some SAN designs, it makes sense to implement traffic management policies that influence traffic flow and relative traffic priorities.

4. Fault Isolation

Consolidating multiple areas of storage into a single physical fabric both increases storage utilization and reduces the administrative overhead associated with centralized storage management. The major drawback is that faults are no longer isolated within individual storage areas. Many organizations would like to consolidate their storage infrastructure into a single physical fabric, but both technical and business challenges make this difficult. Technology such as virtual SANs (VSANs, see Figure 6-18) enables this consolidation while increasing the security and stability of Fibre Channel fabrics by logically isolating devices that are physically connected to the same set of switches. Faults within one fabric are contained within a single fabric (VSAN) and are not propagated to other fabrics.

Figure 6-18 Fault Isolation with VSANs

5. Control Plane Scalability

A SAN switch can be logically divided into two parts: a data plane, which handles the forwarding of data frames within the SAN, and a control plane, which handles switch management functions. Control plane functions include routing protocols, such as Fabric Shortest Path First (FSPF); routing updates and keepalives; name server and domain-controller queries; Fibre Channel frames destined for the switch itself; and other Fibre Channel fabric services. Because the control plane is critical to network operations, any service disruption to the control plane can result in business-impacting network outages. Control plane scalability is the primary reason storage vendors set limits on the number of switches and devices they have certified and qualified for operation in a single fabric. Control plane service disruptions (perpetrated either inadvertently or maliciously) are possible, typically through a high rate of traffic destined to the switch itself.

These result in excessive CPU utilization and/or deprive the switch of CPU resources for normal processing. Control plane CPU deprivation can also occur when there is insufficient control plane CPU relative to the size of the network topology and a networkwide event (for example, loss of a major switch or significant change in topology) occurs. FSPF is the standard routing protocol used in Fibre Channel fabrics. Although FSPF itself provides for optimal routing between nodes, the Dijkstra algorithm on which it is commonly based has a worst-case running time that is the square of the number of nodes in the fabric. That is, doubling the number of devices in a SAN can result in a quadrupling of the CPU processing required to maintain that routing. A goal of SAN design should be to try to minimize the processing required with a given SAN topology. Attention should be paid to the CPU and memory resources available for control plane functionality and to port aggregation features such as Cisco port channels, which provide all the benefits of multiple parallel ISLs between switches (higher throughput and resiliency) but appear in the topology as a single logical link rather than multiple parallel links.

SAN Topologies

The Fibre Channel standard defines three different topologies: fabric, arbitrated loop, and point-to-point (see Figure 6-19). Fabric defines a network in which several devices can exchange data simultaneously at full bandwidth; a fabric basically requires one or more Fibre Channel switches connected together to form a control center between the end devices. Arbitrated loop defines a unidirectional ring in which only two devices can ever exchange data with one another at any one time. Point-to-point defines a bidirectional connection between two devices. Finally, the standard permits the connection of one or more arbitrated loops to a fabric. The fabric topology is the most frequently used of all topologies, and this is why more emphasis is placed upon the fabric topology than on the two other topologies in this chapter.

Figure 6-19 Fibre Channel Topologies

Common to all topologies is that devices (servers, storage devices, and switches) must be equipped with one or more Fibre Channel ports. In servers, the port is generally realized by means of so-called HBAs. The connection between two ports is called a link. A port always consists of two channels, one input and one output channel. In the arbitrated loop topology, the links are unidirectional: each output channel is connected to the input channel of the next port until the circle is closed. In the point-to-point topology and in the fabric topology, the links are always bidirectional: in this case, the input channel and the output channel of the two ports involved in the link are connected by a cross, so that every output channel is connected to an input channel.

In an arbitrated loop, bandwidth is shared equally among all connected devices; the more active devices connected, the less bandwidth will be available. The cabling of an arbitrated loop can be simplified with the aid of a hub. In this configuration, the end devices are bidirectionally connected to the hub, and the wiring within the hub ensures that the unidirectional data flow within the arbitrated loop is maintained.

A switched fabric topology allows dynamic interconnections between nodes through ports connected to a fabric. It is possible for any port in a node to communicate with any port in another node connected to the same fabric, using either cut-through or store-and-forward switching. Adding a new device increases aggregate bandwidth. Switched fabrics have a theoretical address support for over 16 million connections, compared to arbitrated loop at 126, but that exceeds the practical and physical limitation of the switches that make up a fabric. Also, switched FC fabrics are aware of arbitrated loop devices when attached.

Like Ethernet, FC switches can be interconnected in any manner. Unlike Ethernet, however, there is a limit to the number of FC switches that can be interconnected: address space constraints limit FC SANs to a maximum of 239 switches. Cisco's virtual SAN (VSAN) technology increases the number of switches that can be physically interconnected by reusing the entire FC address space within each VSAN. FC switches employ a routing protocol called Fabric Shortest Path First (FSPF) based on a link-state algorithm. FSPF reduces all physical topologies to a logical tree topology.

The complexity and the size of FC SANs increase when the number of connections increases. Most FC SANs are deployed in one of two designs, commonly known as the two-tier topology (core-edge) and three-tier topology (edge-core-edge) designs. The core-only design is a star topology. The edge switches in the core-edge design may be connected to both core switches. Host-to-storage FC connections are generally redundant. However, single host-to-storage FC connections are common in cluster and grid environments because host-based failover mechanisms are inherent to such environments. In both the core-only and core-edge designs, the redundant paths are usually not interconnected; interconnecting them is possible, but this creates one physical network and compromises resilience against networkwide disruptions (for example, FSPF convergence). Figure 6-20 illustrates the typical FC SAN topologies.

Figure 6-20 Sample FC SAN Topologies

The Top-of-Rack (ToR) and End-of-Row (EoR) designs represent how access switches and servers are connected to each other. Both have a direct impact over a major part of the entire data center cabling system. ToR designs are based on intra-rack cabling between servers and smaller switches, which can be installed in the same racks as the servers. Although these designs reduce the amount of cabling and optimize the space used by network equipment, they offer the storage team the challenge of managing a higher number of devices. EoR designs are based on inter-rack cabling between servers and high-density switches installed in the same row as the server racks. EoR designs reduce the number of network devices and optimize port utilization on the network devices. But EoR flexibility taxes data centers with a great quantity of horizontal cabling running under the raised floor. The best design choice leans on the number of servers per rack, the data rate for the connections, the budget, and the operational complexity.

Typically, in a SAN design, the number of end devices determines the number of fabric logins needed. The increase in blade server deployments and the consolidation of servers through the use of server virtualization technologies affect the design of the network. With the use of features such as N-Port ID Virtualization (NPIV) and Cisco N-Port Virtualization (NPV), the number of fabric logins needed has increased even more. The proliferation of NPIV-capable end devices such as host bus adapters (HBAs) and Cisco NPV-mode switches makes the number of fabric logins needed on a per-port, per-line-card, per-switch, and per-physical-fabric basis a critical consideration. The total number of hosts and NPV switches determines the number of fabric logins required on the core switch. The fabric login limits determine the design of the current SAN as well as its potential for future growth.

SAN designs that can be built using a single physical switch are commonly referred to as collapsed core designs. (See Figure 6-21.)

This terminology refers to the fact that a design is conceptually a core/edge design, making use of core ports (non-oversubscribed) and edge ports (oversubscribed), but that it has been collapsed into a single physical switch. Typically, a collapsed core design on Cisco MDS 9000 family switches would utilize both non-oversubscribed (storage) and oversubscribed (host-optimized) line cards.

Figure 6-21 Sample Collapsed Core FC SAN Design

SAN designs should always use two isolated fabrics for high availability, with both hosts and storage connecting to both fabrics. Multipathing software should be deployed on the hosts to manage connectivity between the host and storage so that I/O uses both paths and there is nondisruptive failover between fabrics in the event of a problem in one fabric. Fabric isolation can be achieved using either VSANs or dual physical switches. Both provide separation of fabric services, although it could be argued that multiple physical fabrics provide increased physical protection (for example, protection against a sprinkler head failing above a switch) and protection against equipment failure.

Fibre Channel

Fibre Channel (FC) is the predominant architecture upon which SAN implementations are built. Fibre Channel is a technology standard that allows data to be transferred at extremely high speeds; current implementations support data transfers at up to 16 Gbps or even more. Unlike Small Computer System Interface (SCSI), which was developed by a vendor and submitted for standardization afterward, Fibre Channel was developed through industry cooperation. Many standards bodies, technical associations, vendors, and industry-wide consortiums accredit the Fibre Channel standard. There are many products on the market that take advantage of the high-speed and high-availability characteristics of the Fibre Channel architecture.

Note
Is it Fibre or Fiber? Fibre Channel was originally designed to support fiber-optic cabling only. When copper support was added, the committee decided to keep the name in principle but to use the UK English spelling (Fibre) when referring to the standard. The U.S. English spelling (Fiber) is retained when referring generically to fiber optics and cabling.

Fibre Channel is an open, technical standard for networking that incorporates the channel transport characteristics of an I/O bus, with the flexible connectivity and distance characteristics of a traditional network. Because of its channel-like qualities, hosts and applications see storage devices that are attached to the SAN as though they are locally attached storage. Because of its network characteristics, it can support multiple protocols and a broad range of devices, and it can be managed as a network. Fibre Channel is structured by a lengthy list of standards; Figure 6-22 portrays just a few of them. Detailed information on all FC standards can be downloaded from T11 at www.t11.org.

Figure 6-22 Fibre Channel Standards

The Fibre Channel protocol stack is subdivided into five layers (see Figure 6-10 in the "Block Level Protocols" section). The lower four layers, FC-0 to FC-3, define the fundamental communication techniques: that is, the physical levels, the transmission, and the addressing. The upper layer, FC-4, defines how application protocols (upper layer protocols, or ULPs) are mapped on the underlying Fibre Channel network. The use of the various ULPs decides, for example, whether a real Fibre Channel network is used as an IP network, as a Fibre Channel SAN (as a storage network), or as both at the same time. The link services and fabric services are located quasi-adjacent to the Fibre Channel protocol stack. These services are required to administer and operate a Fibre Channel network.

FC-0: Physical Interface

FC-0 defines the physical transmission medium (cable, plug) and specifies which physical signals are used to transmit the bits 0 and 1.

In contrast to the SCSI bus, in which each bit has its own data line plus additional control lines, Fibre Channel transmits the bits sequentially via a single line. Fibre Channel can use either optical fiber (for distance) or copper cable links (for short distance at low cost). Copper cables tend to be used for short distances, up to 30 meters (98 feet), and can be identified by their DB-9, 9-pin connector. Mixing fiber-optical and copper components in the same environment is supported, although not all products provide that flexibility. Product flexibility needs to be considered when you plan a SAN.

Normally, fiber-optic cabling is referred to by mode, or the frequencies of light waves that are carried by a particular cable type. Single-mode fiber (SMF) allows only one pathway, or mode, of light to travel within the fiber. The core size is typically 8.3 micron. Single-mode fiber is for longer distances and is typically associated with more expensive components. SMFs are used in applications where low signal loss and high data rates are required, such as on long spans between two system or network devices, where repeater and amplifier spacing needs to be maximized. Multimode fiber (MMF) allows more than one mode of light to travel within the fiber. Common multimode core sizes are 50 micron and 62.5 micron. Multimode fiber is for shorter distances: the 50-micron or 62.5-micron diameter is sufficiently large for injected light waves to be reflected off the core interior, and MMF fiber is better suited for shorter-distance applications. MMF is more economical because it can be used with inexpensive connectors and laser devices, which reduces the total system cost.

Fibre Channel architecture supports both shortwave and longwave optical transmitter technologies in the following ways:

Short wave laser: This technology uses a wavelength of 780 nanometers and is compatible only with MMF. Multimode cabling is used with shortwave laser light and has either a 50-micron or a 62.5-micron core with a cladding of 125 micron.

Long wave laser: This technology uses a wavelength of 1300 nanometers. It is compatible with both SMF and MMF.

To connect one optical device to another, some form of fiber-optic link is required. If the distance is short, a standard fiber cable suffices. Over a slightly longer distance, for example, from one building to the next, a fiber link might need to be laid. This fiber might need to be laid underground or through a conduit. If the two units that need to be connected are in different cities, the problem is much larger. Because most businesses are not in the business of laying cable, they lease fiber-optic cables to meet their needs. In such a case, the fiber-optic cable that they lease is known as dark fiber. Dark fiber generically refers to a long, dedicated fiber-optic link that can be used without the need for any additional equipment. Overall, optical fibers provide a high-performance transmission medium: where costly electronics are heavily concentrated, the primary cost of the system does not lie with the cable, and fiber-optic cables have a major advantage in noise immunity.

FC-1: Encode/Decode

FC-1 defines how data is encoded before it is transmitted via a Fibre Channel cable (the 8b/10b data transmission code scheme patented by IBM).

FC-1 also describes certain transmission words (ordered sets) that are required for the administration of a Fibre Channel connection (link control protocol).

Encoding and Decoding

To transfer data over a high-speed serial interface, the data is encoded before transmission and decoded upon reception. The encoding process ensures that sufficient clock information is present in the serial data stream, which also establishes word boundary alignment. This information allows the receiver to synchronize to the embedded clock information and successfully recover the data at the required error rate. This scheme is called 8b/10b encoding because it refers to the number of data bits input to the encoder and the number of bits output from the encoder: the 8b/10b encoding process converts each 8-bit byte into two possible 10-bit characters. This 8b/10b encoding finds errors that a parity check cannot. A parity check does not find the even-numbered bit errors, only the odd numbers. The 8b/10b encoding logic finds almost all errors, which means that a continuous stream of zeros or ones cannot be valid data; the logic generates an error if such a stream is seen.

The format of the 8b/10b character is D/Kxx.y:
D = Data
K = Special Character
"xx" = Decimal value of the 5 least significant bits
"y" = Decimal value of the 3 most significant bits

Communications at 10 and 16 Gbps use 64b/66b encoding. Sixty-four bits of data are transmitted as a 66-bit entity. The 66-bit entity is made by prefixing one of two possible 2-bit preambles to the 64 bits to be transmitted. If the preamble is 01, the 64 bits are entirely data. If the preamble is 10, an 8-bit type field follows, plus 56 bits of control information and data. The preambles 00 and 11 are not used. The use of the 01 and 10 preambles guarantees a bit transition every 66 bits, which allows easier clock and timer synchronization because a transition must be seen every 66 bits. The overhead of the 64b/66b encoding is considerably less than that of the more common 8b/10b encoding scheme.

Ordered Sets

Fibre Channel uses a command syntax, which is known as an ordered set, to move the data across the network. The information sent on a link consists of transmission words containing four transmission characters each; they are 40 bits long. There are two categories of transmission words: data words and ordered sets. Data words occur between the start-of-frame (SOF) and end-of-frame (EOF) delimiters. Ordered sets are 4-byte transmission words that contain data and special characters, which have a special meaning; they delimit frame boundaries and occur outside of frames. An ordered set always begins with the special transmission character K28.5, which is outside the normal data space, aligned on a word boundary. Ordered sets provide the ability to obtain bit and word synchronization. Three major types of ordered sets are defined by the signaling protocol. The frame delimiters, the start-of-frame (SOF) and end-of-frame (EOF) ordered sets, establish the boundaries of a frame. They immediately precede or follow the contents of a frame.
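The overhead difference between the two encodings can be quantified directly. A tiny sketch (pure arithmetic, no protocol details):

# Compare the raw coding overhead of the two line encodings discussed
# above; this is illustrative arithmetic, not tied to any product.

def coding_efficiency(data_bits: int, line_bits: int) -> float:
    """Fraction of transmitted bits that carry user data."""
    return data_bits / line_bits

for name, data_bits, line_bits in [("8b/10b", 8, 10), ("64b/66b", 64, 66)]:
    eff = coding_efficiency(data_bits, line_bits)
    print(f"{name}: efficiency {eff:.1%}, overhead {1 - eff:.1%}")

# 8b/10b: efficiency 80.0%, overhead 20.0%
# 64b/66b: efficiency 97.0%, overhead 3.0%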

There are 11 types of SOF and eight types of EOF delimiters that are defined for the fabric and N_Port sequence control.

A primitive sequence is an ordered set that is transmitted and repeated continuously to indicate specific conditions within a port, or the set might indicate conditions that are encountered by the receiver logic of a port. When a primitive sequence is received and recognized, a corresponding primitive sequence or idle is transmitted in response. Recognition of a primitive sequence requires consecutive detection of three instances of the same ordered set. The primitive sequences that are supported by the standard are illustrated in Table 6-5.

The two primitive signals, idle and receiver ready (R_RDY), are ordered sets that are designated by the standard to have a special meaning. An idle is a primitive signal that is transmitted on the link to indicate that an operational port facility is ready for frame transmission and reception. The R_RDY primitive signal indicates that the interface buffer is available for receiving further frames.

Table 6-5 Fibre Channel Primitive Sequences

FC-2: Framing Protocol

FC-2 is the most comprehensive layer in the Fibre Channel protocol stack. It determines how larger data units (for example, a file) are transmitted via the Fibre Channel network. It regulates the flow control that ensures that the transmitter sends the data only at a speed that the receiver can process. And it defines various service classes that are tailored to the requirements of various applications. Multiple exchanges are initiated between initiators (hosts) and targets (disks). Each exchange consists of one or more bidirectional sequences (see Figure 6-23).

Figure 6-23 Fibre Channel FC-2 Hierarchy

Each sequence consists of one or more frames. A sequence is a larger data unit that is transferred from a transmitter to a receiver. Larger sequences, therefore, have to be broken down into several frames. FC-2 guarantees that sequences are delivered to the receiver in the same order they were sent from the transmitter, hence the name "sequence." Furthermore, sequences are only delivered to the next protocol layer up when all frames of the sequence have arrived at the receiver. Only one sequence can be transferred after another within an exchange. A sequence could represent an individual database transaction. For the SCSI-3 ULP, each exchange maps to a SCSI command.

A Fibre Channel network transmits control frames and data frames. Control frames contain no useful data; they signal events such as the successful delivery of a data frame. Data frames transmit up to 2,112 bytes of useful data. The frame is bracketed by a start-of-frame (SOF) delimiter and an end-of-frame (EOF) delimiter. Finally, six filling words must be transmitted by means of a link between two frames. In contrast to Ethernet and TCP/IP, Fibre Channel is an integrated whole: the layers of the Fibre Channel protocol stack are so well harmonized with one another that the ratio of payload to protocol overhead is very efficient, at up to 98 percent. A Fibre Channel frame consists of a header, useful data (payload), and a Cyclic Redundancy Checksum (CRC) (see Figure 6-24). The CRC checking procedure is designed to recognize all transmission errors if the underlying medium does not exceed the specified error rate of 10^-12.

Figure 6-24 Fibre Channel Frame Format

Fibre Channel Service Classes

The Fibre Channel standard defines six service classes for the exchange of data between end devices. Three of these defined classes (Class 1, Class 2, and Class 3) are realized in products available on the market, with hardly any products providing the connection-oriented Class 1. In addition, Class F serves for the data exchange between the switches within a fabric.

Class 1 defines a connection-oriented communication connection between two node ports: a Class 1 connection is opened before the transmission of frames. Class 2 and Class 3, on the other hand, are packet-oriented services: no dedicated connection is built up. Instead, the frames are individually routed through the Fibre Channel network. A port can thus maintain several connections at the same time, and several Class 2 and Class 3 connections can share the bandwidth. In Class 2, the receiver acknowledges each received frame. Class 2 uses end-to-end flow control and link flow control. Class 3 achieves less than Class 2: frames are not acknowledged, and the higher protocol layers must notice for themselves whether a frame has been lost. This means that only link flow control takes place, not end-to-end flow control.

Almost all new Fibre Channel products (HBAs, switches, storage devices) support the service classes Class 2 and Class 3, which realize a packet-oriented service (datagram service). In Fibre Channel SAN implementations, the end devices themselves negotiate whether they communicate by Class 2 or Class 3. Table 6-6 lists the different Fibre Channel classes of service.

Table 6-6 Fibre Channel Classes of Service

Fibre Channel Flow Control

Flow control ensures that the transmitter sends data only at a speed that the receiver can receive it. Fibre Channel uses a credit model. Each credit represents the capacity of the receiver to receive a Fibre Channel frame. If the receiver awards the transmitter a credit of 3, the transmitter may only send the receiver three frames. The transmitter may not send further frames until the receiver has acknowledged the receipt of at least some of the transmitted frames. FC-2 defines two mechanisms for flow control: end-to-end flow control and link flow control (see Figure 6-25).

In end-to-end flow control, two end devices negotiate the end-to-end credit before the exchange of data. The end-to-end flow control is realized on the HBA cards of the end devices. The link flow control, by contrast, takes place at each physical connection; this means that the link flow control also takes place at the Fibre Channel switches.

Buffer-to-buffer credits are used in Class 2 and Class 3 services between a node and a switch. Two communicating ports negotiating the buffer-to-buffer credits achieve this. Initial credit levels are established with the fabric at login and then replenished upon receipt of receiver ready (R_RDY) from the target. The sender decrements one BB_Credit for each frame sent and increments one for each R_RDY received. BB_Credits are the more practical application and are very important for switches that are communicating across an extension (E_Port) because of the inherent delays present with transport over LAN/WAN/MAN.
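This per-frame credit accounting is easy to model. The following Python fragment is an illustrative sketch only; the class and its names are invented for this example, not taken from any Cisco or FC library:

# Illustrative sketch of BB_Credit pacing on one hop (not vendor code).

class BBCreditLink:
    def __init__(self, negotiated_credits: int):
        # Credit level agreed between the two ports at login.
        self.credits = negotiated_credits

    def can_send(self) -> bool:
        # A frame may be sent only while at least one credit remains.
        return self.credits > 0

    def send_frame(self) -> None:
        # Sender decrements one BB_Credit per frame transmitted.
        if not self.can_send():
            raise RuntimeError("throttled: no BB_Credits, waiting for R_RDY")
        self.credits -= 1

    def receive_r_rdy(self) -> None:
        # Each R_RDY from the receiver returns one credit.
        self.credits += 1

link = BBCreditLink(negotiated_credits=3)
link.send_frame(); link.send_frame(); link.send_frame()
print(link.can_send())   # False: three frames in flight, sender throttled
link.receive_r_rdy()     # receiver frees a buffer and returns a credit
print(link.can_send())   # True: transmission can resume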

Figure 6-25 Fibre Channel End-to-End and Buffer-to-Buffer Flow Control Mechanism

End-to-end credits are used in Class 1 and Class 2 services between two end nodes. After the initial credit level is set, the sender decrements one EE_Credit for each frame sent and increments one when an acknowledgement (ACK) is received.

FC flow control is critical to the efficiency of the SAN fabric. Flow control management occurs between two connected Fibre Channel ports to prevent a target device from being overwhelmed with frames from the initiator device. Buffer-to-buffer credits (BB_Credit) are negotiated between each device in an FC fabric. Hop-by-hop traffic flow is paced by the return of Receiver Ready (R_RDY) frames, and a port can transmit only up to the number of BB_Credits before traffic is throttled. Insufficient BB_Credits will throttle performance—no data will be transmitted until R_RDY is returned. FC frames are buffered and queued in intermediate switches, regardless of the number of switches in the network; no concept of end-to-end buffering exists. One buffer is used per FC frame, regardless of its frame size: a small FC frame uses the same buffer as a large FC frame. As distance increases or frame size decreases, the number of available BB_Credits required increases as well. Figure 6-26 portrays Fibre Channel frame buffering on an FC switched fabric.
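The effect of distance on required credits can be estimated with back-of-the-envelope arithmetic, as the worked example that follows does by hand. This sketch assumes ~5 µs/km propagation delay and full-size ~2,000-byte frames at the FC data rate of ~100 MBps per 1 Gbps; the function name is hypothetical:

# Back-of-the-envelope BB_Credit sizing (assumptions: ~5 us/km light
# propagation in fiber, full-size ~2 KB frames; real fabrics also weigh
# processing delays, so treat this as an estimate only).
import math

def min_bb_credits(distance_km: float, link_gbps: float,
                   frame_bytes: int = 2000) -> int:
    round_trip_us = distance_km * 5.0 * 2          # out and back
    # A 1-Gbps FC link moves ~100 MBps, i.e. ~0.01 us per byte.
    serialize_us = frame_bytes / (100.0 * link_gbps)
    # Enough frames must be in flight to keep the pipe full.
    return math.ceil(round_trip_us / serialize_us)

print(min_bb_credits(10, 1))   # 5  -> matches the 10 km, 1 Gbps example
print(min_bb_credits(10, 4))   # 20 -> faster links need more credits

The first call reproduces the 10-km, 1-Gbps case worked out next; the second shows why faster links need proportionally more credits.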

Figure 6-26 Fibre Channel Frame Buffering

The optimal buffer credit value can be calculated by determining the transmit time per FC frame and dividing that into the total round-trip time per frame. A Fibre Channel frame size is ~2 KB (2,148 bytes maximum). The speed of light over a fiber-optic link is ~5 µsec per km, so for a 10-km link, the time it takes for a frame to travel from the sender to the receiver and for the response to return to the sender is 100 µsec (50 µsec × 2 = 100 µsec). The transmit time for 2,000 characters (a 2-KB frame) is approximately 20 µsec (2,000 bytes at the 1-Gbps, or 100-MBps, FC data rate). If the sender waits for a response before sending the next frame, the maximum number of frames per second is 10,000 (1 sec / 0.0001 sec = 10,000). This implies that the effective data rate is ~20 MBps (2,000 bytes × 10,000 frames per second). If the round-trip time is 100 µsec, then BB_Credit >= 5 (100 µsec / 20 µsec = 5). This helps keep the fiber-optic link full without running out of BB_Credits. Table 6-7 illustrates the required BB_Credits for specific FC frame sizes at specific link speeds.

Table 6-7 Fibre Channel BB_Credits

FC-3: Common Services

FC-3 has been in its conceptual phase since 1988; in currently available products, FC-3 is empty. The following functions are being discussed for FC-3:

Striping manages several paths between multiport end devices. Multipathing combines several paths between two multiport end devices to form a logical path group.

Striping could distribute the frames of an exchange over several ports and thus increase the throughput between the two devices. With multipathing, failure or overloading of a path can be hidden from the higher protocol layers. Compression and encryption of the data to be transmitted would preferably be realized in the hardware on the HBA.

Fibre Channel Addressing

Fibre Channel uses World Wide Names (WWNs) and Fibre Channel IDs (FCIDs). WWNs are unique identifiers that are hardcoded into Fibre Channel devices. Vendors buy blocks of WWNs from the IEEE and allocate them to devices in the factory. Fibre Channel ports are intelligent interface points on the Fibre Channel SAN; they are found embedded in various devices, such as an I/O adapter, array or tape controller, and switched fabric. Each Fibre Channel port has at least one WWN.

The two types of WWNs are node WWNs (nWWN) and port WWNs (pWWN). The nWWNs uniquely identify devices. Every HBA, switch, gateway, array controller, and Fibre Channel disk drive has a single unique nWWN. The nWWNs are required because the node itself must sometimes be uniquely identified. For example, path failover and multiplexing software can detect redundant paths to a device by observing that the same nWWN is associated with multiple pWWNs. The pWWNs uniquely identify each port in a device. Ports must be uniquely identifiable because each port participates in a unique data path. The nWWNs and pWWNs are both needed because devices can have multiple ports. On a single-port device, the nWWN and pWWN may be the same. On multiport devices, the pWWN is used to uniquely identify each port. A dual-ported HBA has three WWNs: one nWWN and one pWWN for each port.

WWNs are important for enabling Cisco Fabric Services because they have these characteristics—they are guaranteed to be globally unique and are permanently associated with devices. These characteristics ensure that the fabric can reliably identify and locate devices. FCIDs are dynamically acquired addresses that are routable in a switch fabric. This capability is an important consideration for Cisco Fabric Services. When a management service or application needs to quickly locate a specific device, the following occurs:

1. The service or application queries the switch name server service with the WWN of the target device.
2. The name server looks up and returns the current port address that is associated with the target WWN.
3. The service or application communicates with the target device, using the port address.

Figure 6-27 portrays Fibre Channel addressing on an FC fabric.
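The three-step lookup can be pictured as a simple table query. In this toy sketch, the name-server dictionary, the WWNs, and the FC IDs are all fabricated for illustration:

# Toy name-server lookup: WWN -> current port address (FC ID).
# All identifiers below are made up for illustration.

name_server = {
    "21:00:00:e0:8b:05:05:04": 0x010203,   # pWWN -> FC ID
    "50:06:01:60:10:60:14:22": 0x010400,
}

def locate(pwwn: str) -> int:
    # Steps 1-2: query the name server, which returns the port address.
    fcid = name_server[pwwn]
    # Step 3: the caller then communicates using this routable address.
    return fcid

print(hex(locate("21:00:00:e0:8b:05:05:04")))   # 0x10203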

Figure 6-27 Fibre Channel Naming on FC Fabric

The Fibre Channel point-to-point topology uses a 1-bit addressing scheme. One port assigns itself an address of 0x000000 and then assigns the other port an address of 0x000001. Figure 6-28 portrays FC ID addressing.

Figure 6-28 FC ID Addressing

The FC-AL topology uses an 8-bit addressing scheme. The arbitrated loop physical address (AL-PA) is an 8-bit address, which provides 256 potential addresses. However, only a subset of 127 addresses is available because of the 8b/10b encoding requirements. One address is reserved for an FL Port, so 126 addresses are available for nodes. Addresses are cooperatively chosen during loop initialization.

Switched Fabric Address Space

The 24-bit Fibre Channel address consists of three 8-bit elements. The domain ID is used to define a switch; each switch receives a unique domain ID.

The area ID is used to identify groups of ports within a domain. Areas can be used to group ports within a switch, and each fabric-attached loop receives a unique area ID.

Each switch must have a unique domain ID. Although the domain ID is an 8-bit field, only 239 domains are available to the fabric: Domains 01 through EF are available, and Domains 00 and F0 through FF are reserved for use by switch services, so there can be no more than 239 switches in a fabric. In practice, the number of switches in a fabric is far smaller because of storage-vendor qualifications.

Fibre Channel Link Services

Link services provide a number of architected functions that are available to the users of the Fibre Channel port. There are three types of link services defined, depending upon the type of function provided and whether the frame contains a payload. They are basic link services, extended link services, and FC-4 link services (generic services).

Basic link service commands support low-level functions such as aborting a sequence (ABTS) and passing control bit information. Extended link services (ELS) are performed in a single exchange; most of the ELSs are performed as a two-sequence exchange: a request from the originator and a response from the responder. ELS service requests are not permitted prior to a port login, except for the fabric login. The following are ELS command examples: N_Port Login (PLOGI), F_Port Login (FLOGI), Logout (LOGO), Process Login (PRLI), Process Logout (PRLO), State Change Notification (SCN), State Change Registration (SCR), Registered State Change Notification (RSCN), and Loop Initialize (LINIT).

Generic services include the following: Directory Service, Management Services, Alias Services, Time Services, and Key Distribution Services. FC-GS shares a Common Transport (CT) at the FC-4 level, and the CT provides access to the services. The FC-CT (Fibre Channel Common Transport) protocol is used as a transport medium for these services. Port login is required for generic services.

Fabric Login (FLOGI) is used by the N_Port to discover whether a fabric is present. N_Ports perform fabric login by transmitting the FLOGI extended link service command to the well-known address 0xFFFFFE of the fabric F_Port and exchange service parameters. A new N_Port's fabric login source address is 0x000000, and the destination address is 0xFFFFFE. If a fabric is present, it provides the operating characteristics associated with the fabric (service parameters such as BB_Credit, maximum payload size, and class of service supported). The fabric also assigns, or confirms (if the port is trying to reuse an old ID), the N_Port identifier of the port initiating the login. Figure 6-29 portrays the FC fabric and port login/logout and query-to-switch process.

Figure 6-29 Fibre Channel Fabric, Port Login, and State Change Registration

1. FLOGI extended link service (ELS): Initiator/target to login server (0xFFFFFE) for switch fabric login.
2. ACCEPT (ELS reply): Login server to initiator/target for fabric login acceptance. A Fibre Channel ID (Port_Identifier) is assigned by the login server to the initiator/target.
3. PLOGI (ELS): Initiator/target to name server (0xFFFFFC) for port login and to establish a Fibre Channel session.
4. ACCEPT (ELS reply): Name server to initiator/target using the Port_Identifier as D_ID for port login acceptance.
5. SCR (ELS): Initiator/target issues a State Change Registration Command (SCRC) to the fabric controller (0xFFFFFD) for notification when a change in the switch fabric occurs.
6. ACCEPT (ELS reply): Fabric controller to initiator/target acknowledging the registration command.
7. GA_NXT (Name Server Query): Initiator to name server (0xFFFFFC) requesting attributes about a specific port.
8. ACCEPT (Query reply): Name server to initiator with a list of attributes of the requested port.
9. PLOGI (ELS): Initiator to target to establish an end-to-end Fibre Channel session (see Figure 6-30).

Figure 6-30 Fibre Channel Port/Process Login to Target

10. ACCEPT (ELS reply): Target to initiator acknowledging the initiator's port login and Fibre Channel session.
11. PRLI (ELS): Initiator to target process login requesting an FC-4 session.
12. ACCEPT (ELS reply): Target to initiator acknowledging session establishment. The initiator may now issue SCSI commands to the target.

Fibre Channel Fabric Services

In a Fibre Channel topology, the switches manage a range of information that operates the fabric. This information is managed by fabric services. All FC services have in common that they are addressed via FC-2 frames, and they can be reached by well-defined addresses.

The fabric login server processes incoming fabric login requests under the address 0xFFFFFE. The fabric controller manages changes to the fabric under the address 0xFFFFFD. N-Ports can register for state changes in the fabric controller (State Change Registration, or SCR). The fabric controller then informs registered N-Ports of changes to the fabric (Registered State Change Notification, RSCN). Servers can use this service to monitor their storage devices.

The name server administers a database on N-Ports under the address 0xFFFFFC. N-Ports can register their own properties with the name server and request information on other N-Ports. It stores information such as port WWN, node WWN, port address, supported service classes, supported FC-4 protocols, and so on. Like all services, the name server appears as an N-Port to the other ports. N-Ports must log on with the name server by means of port login before they can use its services. Table 6-8 lists the reserved Fibre Channel addresses.
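As an illustrative exercise (plain Python, with invented function names), a 24-bit FC ID can be unpacked into the domain/area/port elements described earlier, and the well-known service addresses above can be recognized the same way:

# Illustrative FC ID decoding; no FC library is assumed.

WELL_KNOWN = {
    0xFFFFFE: "Fabric Login Server",
    0xFFFFFD: "Fabric Controller",
    0xFFFFFC: "Name Server",
}

def decode_fcid(fcid: int) -> str:
    if fcid in WELL_KNOWN:
        return f"{fcid:#08x}: {WELL_KNOWN[fcid]} (well-known address)"
    domain = (fcid >> 16) & 0xFF   # identifies the switch
    area = (fcid >> 8) & 0xFF      # group of ports within the domain
    port = fcid & 0xFF             # individual port (or AL_PA on a loop)
    return f"{fcid:#08x}: domain {domain:#04x}, area {area:#04x}, port {port:#04x}"

print(decode_fcid(0x010203))  # 0x010203: domain 0x01, area 0x02, port 0x03
print(decode_fcid(0xFFFFFC))  # 0xfffffc: Name Server (well-known address)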

Table 6-8 FC-PH Has Defined a Block of Addresses for Special Functions (Reserved Addresses)

FC-4: ULPs—Application Protocols

The layers FC-0 to FC-3 serve to connect end devices together by means of a Fibre Channel network. However, the type of data that end devices exchange via Fibre Channel connections remains open. This is where the application protocols (Upper Layer Protocols, or ULPs) come into play. The task of the FC-4 protocol mappings is to map the application protocols onto the underlying Fibre Channel network. The protocol mappings determine how the mechanisms of Fibre Channel are used in order to realize the application protocol by means of Fibre Channel. For example, they specify which service classes will be used and how the data flow in the application protocol will be projected onto the exchange-sequence-frame mechanism of Fibre Channel. A specific Fibre Channel network can serve as a medium for several application protocols—for example, SCSI and IP. This mapping of existing protocols aims to ease the transition to Fibre Channel networks: ideally, no further modifications are necessary to the operating system except for the installation of a new device driver. This means that the FC-4 protocol mappings support the Application Programming Interface (API) of existing protocols upward in the direction of the operating system and realize these downward in the direction of the medium via the Fibre Channel network.

The application protocol for SCSI is called Fibre Channel Protocol (FCP). FCP maps the SCSI protocol onto the underlying Fibre Channel network. (See Figure 6-31.) For the connection of storage devices to servers, the SCSI cable is therefore replaced by a Fibre Channel network. The idea of the FCP protocol is that the system administrator merely installs a new device driver on the server, and this realizes the FCP protocol. The operating system recognizes storage devices connected via Fibre Channel as SCSI devices, which it addresses like "normal" SCSI devices. This emulation of traditional SCSI devices should make it possible for Fibre Channel SANs to be simply and painlessly integrated into existing hardware and software.

Figure 6-31 FCP Protocol Stack

A further application protocol is IPFC: IPFC uses a Fibre Channel connection between two servers as a medium for IP data traffic. IPFC defines how IP packets will be transferred via a Fibre Channel network.

Fibre Connection (FICON) is a further important application protocol. FICON maps the ESCON protocol (Enterprise System Connection) used in the world of mainframes onto Fibre Channel networks. Using ESCON, it has been possible to realize storage networks in the world of mainframes since the 1990s. Fibre Channel is therefore taking the old familiar storage networks from the world of mainframes into the Open System world (Unix, Windows, OS/400, Novell, MacOS).

Fibre Channel—Standard Port Types

Fibre Channel ports are intelligent interface points on the Fibre Channel SAN. They are found embedded in various devices, such as an I/O adapter, array or tape controller, and switched fabric. These ports understand Fibre Channel. When a server or storage device communicates, the interface point acts as the initiator or target for the connection. The server or storage device issues SCSI commands, which the interface point formats to be sent to the target device. The Fibre Channel switch recognizes which type of device is attaching to the SAN and configures the ports accordingly. Figure 6-32 portrays the various Fibre Channel port types.

Figure 6-32 Fibre Channel Standard Port Types

Expansion port (E Port): In E Port mode, an interface functions as a fabric expansion port. This port connects to another E Port to create an interswitch link (ISL) between two switches. E Ports carry frames between switches for configuration and fabric management. They also serve as a conduit between switches for frames that are destined to remote node ports (N Ports) and node loop ports (NL Ports). E Ports support class 2, class 3, and class F service.

Fabric port (F Port): In F Port mode, an interface functions as a fabric port. This port connects to a peripheral device (such as a host or disk) that operates as an N Port. An F Port can be attached to only one N Port. F Ports support class 2 and class 3 service.

Fabric loop port (FL Port): In FL Port mode, an interface functions as a fabric loop port. This port connects to one or more NL Ports (including FL Ports in other switches) to form a public FC-AL. If more than one FL Port is detected on the FC-AL during initialization, only one FL Port becomes operational; the other FL Ports enter nonparticipating mode. FL Ports support class 2 and class 3 service.

Trunking expansion port (TE Port): In TE Port mode, an interface functions as a trunking expansion port. This port connects to another TE Port to create an extended ISL (EISL) between two switches. TE Ports are specific to Cisco MDS 9000 Series switches and expand the functionality of E Ports to support virtual SAN (VSAN) trunking, transport quality of service (QoS) parameters, and the Fibre Channel Traceroute (fctrace) feature. When an interface is in TE Port mode, all frames that are transmitted are in the EISL frame format, which contains VSAN information. Interconnected switches use the VSAN ID to multiplex traffic from one or more VSANs across the same physical link.

Node-proxy port (NP Port): An NP Port is a port on a device that is in N-Port Virtualization (NPV) mode and connects to the core switch via an F Port. NP Ports function like node ports (N Ports), but in addition to providing N Port operations, they also function as proxies for multiple physical N Ports.

Trunking node-proxy port (TNP Port): In TNP Port mode, an interface functions as a trunking node-proxy port. This interface connects to a TF Port to create a link to a core N-Port ID Virtualization (NPIV) switch from an NPV switch to carry tagged frames.

Trunking fabric port (TF Port): In TF Port mode, an interface functions as a trunking fabric port. This interface connects to another trunking node port (TN Port) or trunking node-proxy port (TNP Port) to create a link between a core switch and an NPV switch or a host bus adapter (HBA) to carry tagged frames. In TF Port mode, all frames are transmitted in the EISL frame format, which contains VSAN information. TF Ports are specific to Cisco MDS 9000 Series switches and expand the functionality of F Ports to support VSAN trunking.

Switched Port Analyzer (SPAN) destination port (SD Port): In SD Port mode, an interface functions as a SPAN destination port. An SD Port monitors network traffic that passes through a Fibre Channel interface. Monitoring is performed using a standard Fibre Channel analyzer (or a similar switch probe) that is attached to the SD Port. SD Ports cannot receive frames and transmit only a copy of the source traffic. This feature is nonintrusive and does not affect switching of network traffic for any SPAN source port. The SPAN feature is specific to Cisco MDS 9000 Series switches.

SPAN tunnel port (ST Port): In ST Port mode, an interface functions as an entry-point port in the source switch for the Remote SPAN (RSPAN) Fibre Channel tunnel. ST Port mode and the RSPAN feature are specific to Cisco MDS 9000 Series switches. When a port is configured as an ST Port, it cannot be attached to any device and therefore cannot be used for normal Fibre Channel traffic.

Bridge port (B Port): Whereas E Ports typically interconnect Fibre Channel switches, some SAN extender devices implement a B Port model to connect geographically dispersed fabrics. This model uses B Ports as described in the T11 Standard Fibre Channel Backbone 2 (FC-BB-2).

G-Port (Generic_Port): Modern Fibre Channel switches configure their ports automatically. Such ports are called G-Ports. If, for example, a Fibre Channel switch is connected to a further Fibre Channel switch via a G-Port, the G-Port configures itself as an E-Port.

Fx Port: An interface that is configured as an Fx Port can operate in either F or FL Port mode, depending on the attached N or NL Port. Fx Port mode is determined during interface initialization.

Auto mode: An interface that is configured in auto mode can operate in one of the following modes: F Port, FL Port, E Port, TE Port, or TF Port, with the port mode being determined during interface initialization.

Virtual Storage-Area Network

A VSAN is a virtual storage-area network (SAN). A SAN is a dedicated network that interconnects hosts and storage devices primarily to exchange SCSI traffic. In SANs, you use the physical links to make these interconnections. A set of protocols runs over the SAN to handle routing, naming, and zoning. You can design multiple SANs with different topologies.

Virtual Storage-Area Network

A VSAN is a virtual storage-area network (SAN). A SAN is a dedicated network that interconnects hosts and storage devices primarily to exchange SCSI traffic. In SANs, you use the physical links to make these interconnections. A set of protocols runs over the SAN to handle routing, naming, and zoning. You can design multiple SANs with different topologies.

A SAN island refers to a completely physically isolated switch or group of switches that are used to connect hosts to storage devices. The reasons for building SAN islands include isolating different applications into their own fabric and raising availability by minimizing the impact of fabric-wide disruptive events. Physically separate SAN islands offer a higher degree of security because each physical infrastructure contains a separate set of Cisco Fabric Services and management access. Unfortunately, in practice this situation can become costly and wasteful in terms of fabric ports and resources.

With the introduction of VSANs, the network administrator can build a single topology containing switches, links, and one or more VSANs. Each VSAN in this topology has the same behavior and properties of a SAN. VSANs increase the efficiency of a SAN fabric by alleviating the need to build multiple physically isolated fabrics to meet organizational or application needs. Instead, fewer less-costly redundant fabrics can be built, each housing multiple applications and still providing island-like isolation. Spare ports within the fabric can be quickly and nondisruptively assigned to existing VSANs, providing a clean method of virtually growing application-specific SAN islands. VSANs provide not only hardware-based isolation, but also a complete replicated set of Fibre Channel services for each VSAN. Therefore, when a VSAN is created, a completely separate set of Cisco Fabric Services, configuration management capability, and policies are created within the new VSAN.

A VSAN has the following additional features:

Multiple VSANs can share the same physical topology.

The same Fibre Channel IDs (FC IDs) can be assigned to a host in another VSAN, thus increasing VSAN scalability.

Figure 6-33 portrays a VSAN topology and a representation of VSANs on an MDS 9700 Director switch by per-port allocation.

Figure 6-33 VSANs
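Creating a VSAN and assigning interfaces to it is a short operation in NX-OS. A minimal sketch follows; the VSAN ID, name, and interface are hypothetical:

switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name Engineering
switch(config-vsan-db)# vsan 10 interface fc1/1
switch(config-vsan-db)# exit
switch(config)# exit
switch# show vsan membership

The show vsan membership command confirms which ports belong to which VSAN after the assignment.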

Every instance of a VSAN runs all required protocols, such as FSPF, domain manager, and zoning. Using a hardware-based frame tagging mechanism on VSAN member ports and EISL links isolates each separate virtual fabric. The EISL link type has been created and includes added tagging information for each frame within the fabric; the EISL link is supported between Cisco MDS and Nexus switch products. Fabric-related configurations in one VSAN do not affect the associated traffic in another VSAN, and events causing traffic disruptions in one VSAN are contained within that VSAN and are not propagated to other VSANs.

Up to 256 VSANs can be configured in a switch. Of these, one is a default VSAN (VSAN 1) and another is an isolated VSAN (VSAN 4094). User-specified VSAN IDs range from 2 to 4093. Membership to a VSAN is based on physical ports, and no physical port may belong to more than one VSAN.

VSANs offer the following advantages:

Traffic isolation: Traffic is contained within VSAN boundaries, and devices reside in only one VSAN, ensuring absolute separation between user groups, if desired.

Scalability: VSANs are overlaid on top of a single physical fabric. The ability to create several logical VSAN layers increases the scalability of the SAN.

Per VSAN fabric services: Replication of fabric services on a per VSAN basis provides increased scalability and availability.

Redundancy: Several VSANs created on the same physical SAN ensure redundancy. If one VSAN fails, redundant protection (to another VSAN in the same physical SAN) is configured using a backup path between the host and the device.

Ease of configuration: Users can be added, moved, or changed between VSANs without changing the physical structure of a SAN. Moving a device from one VSAN to another only requires configuration at the port level, not at a physical level.

Table 6-9 lists the different characteristics of VSANs and zones.

Table 6-9 VSAN and Zone Comparison

Dynamic Port VSAN Membership

Port VSAN membership on the switch is assigned on a port-by-port basis. By default, each port belongs to the default VSAN. You can dynamically assign VSAN membership to ports by assigning VSANs based on the device WWN. This method is referred to as Dynamic Port VSAN Membership (DPVM). DPVM offers flexibility and eliminates the need to reconfigure the port VSAN membership to maintain fabric topology when a host or storage device connection is moved between two Cisco MDS switches or two ports within a switch. It retains the configured VSAN regardless of where a device is connected or moved.

DPVM configurations are based on port world wide name (pWWN) and node world wide name (nWWN) assignments. The pWWN identifies the host or device, and the nWWN identifies a node consisting of multiple devices. You can assign any one of these identifiers or any combination of these identifiers to configure DPVM mapping. If you assign a combination, preference is given to the pWWN. A DPVM database contains mapping information for each device pWWN/nWWN assignment and the corresponding VSAN. The Cisco NX-OS software checks the database during a device FLOGI and obtains the required VSAN details. DPVM uses the Cisco Fabric Services (CFS) infrastructure to allow efficient database management and distribution.
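A minimal NX-OS sketch of a DPVM entry follows; the pWWN and VSAN ID are hypothetical, and CFS distribution is assumed to be enabled in the fabric:

switch# configure terminal
switch(config)# feature dpvm
switch(config)# dpvm database
switch(config-dpvm-db)# pwwn 21:00:00:e0:8b:0a:0a:0a vsan 20
switch(config-dpvm-db)# exit
switch(config)# dpvm activate
switch(config)# dpvm commit

After activation, a host with that pWWN is placed in VSAN 20 at FLOGI time no matter which port it logs in through.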

Inter-VSAN Routing

VSANs improve SAN scalability, availability, and security by allowing multiple Fibre Channel SANs to share a common physical infrastructure of switches and ISLs. These benefits are derived from the separation of Fibre Channel services in each VSAN and the isolation of traffic between VSANs (see Figure 6-34). Fibre Channel traffic does not flow between VSANs, nor can initiators access resources across VSANs other than the designated VSAN. Data traffic isolation between the VSANs also inherently prevents sharing of resources attached to a VSAN, such as robotic tape libraries.

Using IVR, you can access resources across VSANs without compromising other VSAN benefits. IVR transports data traffic between specific initiators and targets on different VSANs without merging VSANs into a single logical fabric. It establishes proper interconnected routes that traverse one or more VSANs across multiple switches. IVR is not limited to VSANs present on a common switch. It provides efficient business continuity or disaster recovery solutions when used with Fibre Channel over IP (FCIP).

Figure 6-34 Inter-VSAN Topology
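A minimal NX-OS sketch of an IVR zone that shares a tape library between two VSANs follows; the names, WWNs, and VSAN IDs are hypothetical, and automatic VSAN topology discovery is assumed:

switch# configure terminal
switch(config)# feature ivr
switch(config)# ivr nat
switch(config)# ivr vsan-topology auto
switch(config)# ivr zone name shared_tape
switch(config-ivr-zone)# member pwwn 21:00:00:e0:8b:01:01:01 vsan 10
switch(config-ivr-zone)# member pwwn 50:06:01:60:20:20:20:20 vsan 20
switch(config-ivr-zone)# exit
switch(config)# ivr zoneset name cross_vsan
switch(config-ivr-zoneset)# member shared_tape
switch(config-ivr-zoneset)# exit
switch(config)# ivr zoneset activate name cross_vsan
switch(config)# ivr commit

Only the initiator and target named in the IVR zone can communicate across the VSAN boundary; all other VSAN isolation is preserved.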

Fibre Channel Zoning and LUN Masking

VSANs are used to segment the physical fabric into multiple logical fabrics. Zoning provides security within a single fabric, whether physical or logical, to restrict access between initiators and targets (see Figure 6-35). LUN masking is used to provide additional security to LUNs after an initiator has reached the target device.

Figure 6-35 VSANs Versus Zoning

The zoning service within a Fibre Channel fabric is designed to provide security between devices that share the same fabric. The primary goal is to prevent certain devices from accessing other devices within the fabric. With many types of servers and storage devices on the network, the need for security is crucial. For example, if a host were to access a disk that another host is using, potentially with a different operating system, the data on the disk could become corrupted. To avoid any compromise of crucial data within the SAN, zoning allows the user to overlay a security map. This process dictates which devices, namely hosts, can see which targets, reducing the risk of data loss.

There are two main methods of zoning, hard and soft. Soft zoning restricts only the fabric name service, to show only an allowed subset of devices. The fabric name service allows each device to query the addresses of all other devices. When a server looks at the content of the fabric, it will see only the devices it is allowed to see. However, any server can still attempt to contact any device on the network by address. In this way, soft zoning is similar to the computing concept of security through obscurity. In contrast, hard zoning restricts actual communication across a fabric. This requires efficient hardware implementation (frame filtering) in the fabric switches, but is much more secure. More recently, the differences between the two have blurred: modern switches employ hard zoning even when you implement soft zoning, and all modern SAN switches enforce soft zoning in hardware.

Advanced zoning capabilities specified in the FC-GS-4 and FC-SW-3 standards are provided. You can use either the existing basic zoning capabilities or the advanced (enhanced), standards-compliant zoning capabilities. Cisco MDS 9000 and Cisco Nexus family switches implement hard zoning.
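A minimal NX-OS sketch of a single-initiator, single-target zone and its zone set follows; the zone names, WWNs, and VSAN ID are hypothetical:

switch# configure terminal
switch(config)# zone name host1_array1 vsan 10
switch(config-zone)# member pwwn 21:00:00:e0:8b:0b:0b:0b
switch(config-zone)# member pwwn 50:06:01:60:30:30:30:30
switch(config-zone)# exit
switch(config)# zoneset name fabric_a vsan 10
switch(config-zoneset)# member host1_array1
switch(config-zoneset)# exit
switch(config)# zoneset activate name fabric_a vsan 10

Activating the zone set distributes the active zone set to every switch in the VSAN, where it is enforced in hardware at ingress.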

Zoning has the following features:

A zone consists of multiple zone members. Members in a zone can access each other; members in different zones cannot access each other. Zones can vary in size, and devices can belong to more than one zone.

A zone set consists of one or more zones. A zone set can be activated or deactivated as a single entity across all switches in the fabric. Only one zone set can be activated at any time. A zone can be a member of more than one zone set. A Cisco MDS switch can have a maximum of 500 zone sets.

Zoning can be administered from any switch in the fabric. When you activate a zone (from any switch), all switches in the fabric receive the active zone set. Additionally, full zone sets are distributed to all switches in the fabric, if this feature is enabled in the source switch. If a new switch is added to an existing fabric, zone sets are acquired by the new switch.

Zone changes can be configured nondisruptively. New zones and zone sets can be activated without interrupting traffic on unaffected ports or devices.

If zoning is not activated, all devices are members of the default zone. If zoning is activated, any device that is not in an active zone (a zone that is part of an active zone set) is a member of the default zone.

Zone membership criteria is based mainly on WWNs or FC IDs, as illustrated in the sketch that follows this list:

Port world wide name (pWWN): Specifies the pWWN of an N port attached to the switch as a member of the zone.

Fabric pWWN: Specifies the WWN of the fabric port (switch port's WWN). This membership is also referred to as port-based zoning.

FC ID: Specifies the FC ID of an N port attached to the switch as a member of the zone.

Interface and switch WWN (sWWN): Specifies the interface of a switch identified by the sWWN. This membership is also referred to as interface-based zoning.

Interface and domain ID: Specifies the interface of a switch identified by the domain ID.

Domain ID and port number: Specifies the domain ID of an MDS domain and additionally specifies a port belonging to a non-Cisco switch.

IPv4 address: Specifies the IPv4 address (and optionally the subnet mask) of an attached device.

IPv6 address: Specifies the IPv6 address of an attached device in 128 bits in colon-separated hexadecimal format.

Symbolic nodename: Specifies the member symbolic node name. The maximum length is 240 characters.
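The membership criteria in the previous list map to zone member subcommands in NX-OS. A brief sketch with hypothetical values (a real zone would normally use only one or two of these member types):

switch(config)# zone name member_examples vsan 10
switch(config-zone)# member pwwn 21:00:00:20:37:9c:48:e5
switch(config-zone)# member fcid 0x7f0001
switch(config-zone)# member fwwn 20:01:00:05:30:00:28:df
switch(config-zone)# member interface fc2/1 swwn 20:00:00:05:30:00:4a:9e
switch(config-zone)# member domain-id 2 portnumber 23
switch(config-zone)# member ip-address 10.15.0.0 255.255.0.0
switch(config-zone)# member symbolic-nodename iqn.test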

Zoning does have its limitations. Zoning was designed to do nothing more than prevent devices from communicating with other unauthorized devices. It is a distributed service that is common throughout the fabric. Any installed changes to a zoning configuration are therefore disruptive to the entire connected fabric. Zoning was also not designed to address availability or scalability of a Fibre Channel infrastructure.

Default zone membership includes all ports or WWNs that do not have a specific membership association. Access between default zone members is controlled by the default zone policy. The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Both standards support the basic zoning functionalities and the enhanced zoning functionalities. Both features are explained in detail in Chapter 9, "Fibre Channel Storage Area Networking."

The following guidelines must be considered when creating zone members:

Configuring only one initiator and one target for a zone provides the most efficient use of the switch resources. All members of a zone can communicate with each other; for a zone with N members, N × (N – 1) access permissions need to be enabled.

Configuring the same initiator to multiple targets is accepted.

Configuring multiple initiators to multiple targets is not recommended. This type of configuration wastes switch resources by provisioning and managing many communicating pairs (initiator-to-initiator or target-to-target) that will never actually communicate with each other. For this reason, a single initiator with a single target is the most efficient approach to zoning. The best practice is to avoid configuring large numbers of targets or large numbers of initiators in a single zone.

Each SCSI host uses a number of primary SCSI commands to identify its target, discover LUNs, and obtain their size. These are the commands:

Identify: Which device are you?

Report LUNs: How many LUNs are behind this storage array port?

Report capacity: What is the capacity of each LUN?

Request sense: Is a LUN online and available?

LUNs can be advertised on many storage ports and discovered by several hosts at the same time. It is important to ensure that only one host accesses each LUN on the storage array at a time, unless the hosts are configured in a cluster. As the host mounts each LUN volume, it writes a signature at the start of the LUN to claim exclusive access. If a second host should discover and try to mount the same LUN volume, it overwrites the previous signature. When there are many hosts, mistakes are more likely to be made and more than one host might access the same LUN by accident.

LUN masking is the most common method of ensuring LUN security. LUN masking is essentially a mapping table inside the front-end array controllers. LUN masking determines which LUNs are to be advertised through which storage array ports, and which host is allowed to own which LUNs. Many LUNs can be visible, but to ensure that only one host at a time can access each LUN, all other hosts are masked out. LUN masking ensures that only one host can access a LUN.

An alternative method of LUN security is LUN mapping. If LUN masking is unavailable in the storage array, LUN mapping can be used, although both methods can be used concurrently. LUN mapping is configured in the HBA of each host, but it is the responsibility of the administrator to configure each HBA so that each host has exclusive access to its LUNs.

Device Aliases Versus Zone-Based (FC) Aliases

When the port WWN (pWWN) of a device must be specified to configure different features (zoning, QoS, port security) in a Cisco MDS 9000 family switch, you must assign the correct device name each time you configure these features. An incorrect device name may cause unexpected results. You can avoid this problem if you define a user-friendly name for a port WWN and use this name in all the configuration commands as required. These user-friendly names are referred to as device aliases. Table 6-10 lists the different characteristics of zone aliases and device aliases.

Table 6-10 Comparison Between Zone Aliases and Device Aliases

Device alias supports two modes: basic and enhanced. When you configure the basic mode using device aliases, the application immediately expands to pWWNs. This operation continues until the mode is changed to enhanced. When device alias runs in the enhanced mode, all applications accept the device-alias configuration in the native format. The applications store the device alias name in the configuration and distribute it in the device alias format instead of expanding to pWWN. The applications track the device alias database changes and take actions to enforce them.
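A minimal NX-OS sketch of defining a device alias follows; the alias name and pWWN are hypothetical:

switch# configure terminal
switch(config)# device-alias mode enhanced
switch(config)# device-alias database
switch(config-device-alias-db)# device-alias name esx01_hba1 pwwn 21:00:00:e0:8b:01:01:01
switch(config-device-alias-db)# exit
switch(config)# device-alias commit

Once committed, the alias can be used wherever a pWWN is expected, for example member device-alias esx01_hba1 inside a zone.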

Reference List

Storage Network Industry Association (SNIA): http://www.snia.org
SNIA dictionary: http://www.snia.org/education/dictionary
Internet Engineering Task Force—IP Storage: http://www.ietf.org/html.charters/ips-charter.html
ANSI T10 Technical Committee: http://www.t10.org
ANSI T11—Fibre Channel: http://www.t11.org/index.htm
IEEE Standards site: http://standards.ieee.org
Farley, Marc. Rethinking Enterprise Storage: A Hybrid Cloud Model. Microsoft Press, 2013.
Kembel, Robert W. Fibre Channel: A Comprehensive Introduction. Northwest Learning Associates, Inc., 2000.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 6-11 lists a reference for these key topics and the page numbers on which each is found.



Table 6-11 Key Topics for Chapter 6

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

virtual storage-area network (VSAN)
zoning
fan-out ratio
fan-in ratio
logical unit number (LUN) masking
LUN mapping
Extended Link Services (ELS)
multipathing
Big Data
IoE
N_Port Login (PLOGI)
F_Port Login (FLOGI)
logout (LOGO)
process login (PRLI)
process logout (PRLO)
state change notification (SCN)
registered state change notification (RSCN)
state change registration (SCR)
Fibre Channel Generic Services (FC-GS)
hard disk drive (HDD)
solid-state drive (SSD)
Small Computer System Interface (SCSI)
initiator
target
American National Standards Institute (ANSI)
International Committee for Information Technology Standards (INCITS)
Fibre Connection (FICON)
T11
Input/Output Operations per Second (IOPS)
Common Internet File System (CIFS)
Network File System (NFS)
Cisco Fabric Services (CFS)
Top-of-Rack (ToR)
End-of-Row (EoR)
Dynamic Port VSAN Membership (DPVM)

inter-VSAN routing (IVR)
Fabric Shortest Path First (FSPF)

Chapter 7. Cisco MDS Product Family

This chapter covers the following exam topics:

Describe the Cisco MDS Product Family
Describe the Cisco MDS Multilayer Directors
Describe the Cisco MDS Fabric Switches
Describe the Cisco MDS Software and Storage Services
Describe the Cisco Prime Data Center Network Management

Evolving business requirements highlight the need for high-density, high-speed, and low-latency networks that improve data center scalability and manageability while controlling IT costs. The Cisco MDS 9000 family provides the leading high-density, high-bandwidth storage networking solution along with integrated fabric applications to support dynamic data center requirements.

One major benefit of the Cisco MDS 9000 family architecture is investment protection: the capability of first-, second-, and third-generation modules to all coexist in both existing customer chassis and new switch configurations. With the addition of the fourth-generation modules, the Cisco MDS 9000 family of storage networking products now supports 1-, 2-, 4-, 8-, 10-, and 16-Gbps Fibre Channel along with being Fibre Channel over Ethernet (FCoE) ready, providing true investment protection. A quick look at the storage-area network (SAN) marketplace and the track record of delivery reveals that Cisco is the leader among SAN providers in delivering compatible architectures and designs that can adapt to future changes in technology.

This chapter goes directly into the key concepts of the Cisco Storage Networking product family and discusses topics relevant to the "Introducing Cisco Data Center Technologies" (DCICT) certification. It discusses the Cisco Multilayer Data Center Switch (MDS) product family in four main areas: Cisco MDS Multilayer Directors, Cisco MDS Multiservice and Multilayer Fabric Switches, Cisco MDS Software and Storage Services, and Data Center Network Management. In this chapter, the focus is on the current generation of modules and storage services solutions.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 7-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 7-1 "Do I Know This Already?" Section-to-Question Mapping

Caution: The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following Cisco SAN switches are director models?
a. MDS 9506
b. MDS 9520
c. MDS 9706
d. MDS 9704

2. Which of the following features are NOT supported by Cisco MDS switches?
a. Multiprotocol support
b. High availability for fabric modules
c. SAN extension
d. onePK API
e. 32 Gbps Fibre Channel

3. What is head-of-line blocking?
a. Congestion on one port causes blocking of traffic to other ports.
b. It is an authentication mechanism for Fibre Channel frames.
c. It is used for a forwarding mechanism in Cisco MDS switches.
d. It is used for high availability for fabric modules.

4. How many crossbar switching fabric modules exist on a Cisco MDS 9706 switch?
a. 8
b. 6
c. 4
d. 2

5. Which of the following MDS switches are qualified for IBM Fiber Connection (FICON)?
a. MDS 9148S
b. MDS 9222i
c. MDS 9709
d. MDS 9513

6. Which of the following Cisco MDS fabric switches support Fibre Channel over IP (FCIP) and I/O Accelerator (IOA)?

a. MDS 9250i
b. MDS 9222i
c. MDS 9148S
d. MDS 9148

7. Which protocol or protocols does the Cisco MDS 9250i fabric switch NOT support?
a. Fibre Channel over Ethernet
b. IP over Fibre Channel
c. Fibre Channel over IP
d. Virtual Extensible LAN
e. IBM Fiber Connection

8. What is N-port identifier virtualization (NPIV) in the Fibre Channel world?
a. It is an industry standard that allows multiple N-port fabric logins concurrently on a single physical Fibre Channel link.
b. It is an industry standard that reduces the number of Fibre Channel domain IDs in core-edge SANs.
c. It is a new Fibre Channel login method that is used for storage arrays.
d. It is a high-availability feature that allows managing the Fibre Channel traffic load.

9. What is Cisco Data Mobility Manager (DMM)?
a. It is an intelligent fabric application that does data migration between heterogeneous disk arrays.
b. It is a mainframe-based replication solution.
c. It is an intelligent fabric application that provides Fibre Channel link encryption.
d. It is an intelligent fabric application that provides SCSI acceleration.

10. Which of the following options are fundamental components of Cisco Prime DCNM SAN?
a. DCNM-SAN Server
b. DCNM-SAN Client
c. Performance Manager
d. DCNM Data Mobility Manager
e. DCNM traffic accelerator

Foundation Topics

Once Upon a Time There Was a Company Called Andiamo

In August 2002 Cisco Systems officially entered the storage networking business with the spin-in acquisition of Andiamo Systems. Andiamo means "Let's go" in Italian, and Cisco entered the storage business by saying "Andiamo." The MDS 9000 Multilayer Director switch family is composed of fabric and director-class Fibre Channel switches. These devices are capable of deploying highly available and scalable storage-area networks (SANs) for the most demanding data center environments.

In December 2002, Cisco set the benchmark for high-performance Fibre Channel Director switches with the release of the Cisco MDS 9509 Multilayer Director, offering a maximum system density of 224 1/2-Gbps Fibre Channel ports. First-generation Cisco MDS 9000 family line cards connected to dual crossbar switch fabrics with an aggregate of 80 Gbps full-duplex bandwidth per slot, resulting in a total system switching capacity of 1.44 Tbps.

In 2006, Cisco again set the benchmark for high-performance Fibre Channel Director switches. With the introduction of second-generation line cards and the flagship Cisco MDS 9513 Multilayer Director, interface bandwidth per slot increased to 96 Gbps full duplex, resulting in a total system switching capacity of 2.2 Tbps, and system density has grown to a maximum of 528 Fibre Channel ports operating at 1/2/4 Gbps or 44 10-Gbps Fibre Channel ports.

At the time of writing this chapter, the available line cards for MDS reached the fourth and fifth generation, and the earlier generation line cards and crossbars (supervisors) already became end of sale (EOS). On the other hand, the earlier generation line cards can still work on MDS 9500 Director switches together with the latest line cards. Because of this, in this chapter the focus will not be on first-, second-, and third-generation line cards. Figure 7-1 portrays the available Cisco MDS 9000 switches (at the time of writing).

Figure 7-1 Cisco MDS 9000 Product Family

The Cisco approach to Fibre Channel storage-area networking reflected its strong heritage in data networking, namely the concept of building a modular chassis that could be upgraded to incorporate future services without ripping and replacing the original chassis. Following a 10-year run with the initial MDS platform, in 2012 Cisco introduced its next generation of Fibre Channel platforms, starting with the MDS 9710 Multilayer Director and MDS 9250i Multiservice Fabric Switch. These new MDS solutions provide the following:

Enhanced availability: Built on the experience from the MDS 9500 and the Nexus product line, the MDS 9710 provides enhanced reliability, including N+1 fabric modules and three fan trays with front-to-back airflow. The N+1 concept allows eight power supplies and up to six fabric modules.

High-density, high-performance storage director: To handle the rapid scale accompanied by data center consolidation, the MDS 9710 crossbar and central arbiter design enable up to 1.5 Tbps of FC throughput per slot. This is coupled with the 384 ports of line-rate 16 Gbps FC.

Multiprotocol support: The MDS 9710 Multilayer Director supports a range of Fibre Channel speeds, 10 Gbps Fibre Channel over IP (FCIP), IBM Fiber Connection (FICON for mainframe connectivity), and 10 Gbps Fibre Channel over Ethernet (FCoE). Being considered for future development are 32 Gbps Fibre Channel and 40 Gbps FCoE. The MDS 9250i Multiservice Fabric Switch accommodates 16 Gbps FC, 10 Gbps FCoE, and Internet Small Computer System Interface (iSCSI) and FICON. The MDS 9250i has been designed to help accelerate SAN extension, tape backup, and disk replication as well as migrate data between arrays or data centers.

Future proofing: The MDS 9710 will support the Cisco Open Network Environment (ONE) Platform Kit (onePK) API to enable future services access and programmability, essentially extending the concept of Software Defined Networks (SDN) into the SAN.

Comprehensive and simplified management of the entire data center network: Organizations looking to simplify management and integrate previously disparate networking teams, storage, and data should find the Cisco Prime Data Center Network Manager (DCNM) appealing. From a single management tool, they can manage Cisco MDS and Nexus platforms. A single resource should help to accelerate troubleshooting with easy-to-understand dynamic topology views and the ability to monitor the health of all the devices and filter on events that occur. Organizations should also be able to leverage and report on the data captured and plan for additional capacity more effectively. DCNM is also integrated with VMware vSphere to provide a more comprehensive VM-to-storage visibility. The latter allows for tighter integration with the server and network admins.

The Secret Sauce of Cisco MDS: Architecture

All Cisco MDS 9500 Series Director switches are based on the same underlying crossbar architecture. There is never any requirement for frames to be forwarded to the supervisor for centralized forwarding, nor is there ever any requirement for forwarding to drop back into software for processing. All frame forwarding is performed in dedicated forwarding hardware and distributed across all line cards in the system. Frame forwarding logic is distributed in application-specific integrated circuits (ASIC) on the line cards themselves, resulting in a distributed forwarding architecture.

Forwarding of frames on the Cisco MDS always follows the same processing sequence:

1. Starting with frame error checking on ingress, the frame is tagged with its ingress port and virtual storage-area networking (VSAN).

2. A lookup is issued to the forwarding and frame routing tables to determine where to switch the frame to and whether to route the frame to a new VSAN and/or rewrite the source/destination of the frame if Inter VSAN Routing (IVR) is being used. IVR is an MDS NX-OS software feature that is similar to Network Address Translation (NAT) in the IP world. It is used for Fibre Channel NAT, enabling the switch to handle duplicate addresses within different fabrics separated by VSANs.

3. If there are multiple equal-cost paths for a given destination device, the switch will choose an egress port based on the load-balancing policy configured for that particular VSAN.

4. Input access control lists (ACL) are used to enforce hard zoning.

5. If the destination port is congested or busy, the frame will be queued on the ingress line card for delivery. To ensure fairness between ports, frame switching uses a technique called virtual output queuing (VOQ). Combined with centralized arbitration, this ensures that when multiple input ports on multiple line cards are contending for a congested output port, there is complete fairness between ports regardless of port location within the switch. It also ensures that congestion on one port does not result in blocking of traffic to other ports (head-of-line blocking). In addition, if priority is desired for a given set of traffic flows, QoS and CoS can be used to give priority to those traffic flows, making use of QoS policy maps to determine the order in which frames are scheduled.

6. When a frame is scheduled for transmission, it is transferred across the crossbar switch fabric to the egress line card, where there is a further output access control list (ACL); the VSAN tag used internal to the switch is stripped from the front of the frame, and the frame is transmitted out the output port.

From a switch architecture perspective, crossbar capacity is provided to each line card slot as a small number of high-bandwidth channels. This provides significant flexibility for line card module design and architecture and makes it possible to build both performance-optimized (nonblocking, non-oversubscribed) line cards and host-optimized (nonblocking, oversubscribed) line cards, as well as multiprotocol (Small Computer System Interface over IP [iSCSI] and Fibre Channel over IP [FCIP]) and intelligent (storage services) line cards.

Depending on the MDS Director switch model, the number of crossbar switch fabrics that are installed might change. One active channel and one standby channel are used on each of the crossbar switch fabrics, resulting in both crossbars being active (1:1 redundancy). In the event of a crossbar failure, the standby channel on the remaining crossbar becomes active, resulting in an identical amount of active crossbar switching capacity regardless of whether the system is operating with one or two crossbar switch fabrics.
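The per-VSAN load-balancing policy mentioned in step 3 is configurable from the CLI. A minimal NX-OS sketch, with a hypothetical VSAN ID:

switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 loadbalancing src-dst-ox-id

The src-dst-id option keeps all traffic between a given source and destination on one path, whereas src-dst-ox-id also hashes on the exchange ID, spreading individual SCSI exchanges across equal-cost paths while still preserving in-order delivery within each exchange.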

Figure 7-2 portrays the central arbitration in MDS 9000 switches.

Figure 7-2 Crossbar Switch with Central Arbitration

A Day in the Life of a Fibre Channel Frame in MDS 9000

All Cisco MDS 9000 family line cards use the same process for frame forwarding. The various frame-processing stages are outlined in detail in this section (see Figure 7-3).

Figure 7-3 Cisco MDS 9000 Series Switches Packet Flow

1. Ingress PHY/MAC Modules: Fibre Channel frames enter the switch front-panel optical Fibre Channel ports through Small Form-Factor Pluggable (SFP) optics for Fibre Channel interfaces or X2 transceiver optical modules for 10 Gbps Fibre Channel interfaces. Each front-panel port has an associated physical layer (PHY) device and a media access controller (MAC). On ingress, the PHY converts optical signals received into electrical signals, sending the electrical stream of bits into the MAC. The primary function of the MAC is to decipher Fibre Channel frames from the incoming bit stream by identifying start-of-frame (SoF) and end-of-frame (EoF) markers in the bit stream, as well as other Fibre Channel signaling primitives.

Along with frames being received, the MAC communicates with the forwarding and queuing modules and issues return buffer-credits (R_RDY primitives) to the sending device to recredit the received frames. Return buffer credits are issued if the ingress-queuing buffers have not exceeded an internal threshold. If ingress rate limiting has been enabled, return buffer credits are paced to match the configured throughput on the port. The MAC also checks that the received frame contains no errors by validating its cyclic redundancy check (CRC). Before forwarding the frame, the MAC prepends an internal switch header onto the frame, providing the forwarding module with details such as ingress port, ingress VSAN, frame QoS markings (if the ingress port is set to trust the sender, blank if not), and a timestamp of when the frame entered the switch. The concept of an internal switch header is an important architectural element of what enables the multiprotocol and multitransport capabilities of Cisco MDS 9000 family switches.

2. Ingress Forwarding Module: The ingress forwarding module determines which output port on the switch to send the ingress frame to, based on the input port, type of port, VSAN, and the destination address within the Fibre Channel frame. When a frame is received by the ingress forwarding module, three simultaneous lookups are initiated, followed by forwarding logic based on the result of those lookups:

The first of these is a per-VSAN forwarding table lookup determined by VSAN and the destination address. The result from this lookup tells the forwarding module where to switch the frame to (egress port). If the lookup fails, the frame is dropped because there is no destination to forward to. This table lookup also indicates whether there is a requirement for any Inter VSAN routing (IVR—in simple terms, Fibre Channel NAT). If the frame has multiple possible egress ports (for example, if there are multiple equal-cost Fabric Shortest Path First [FSPF] routes or the destination is a Cisco port channel bundle), a load-balancing decision is made to choose a single physical egress interface from a set of interfaces. The load-balancing policy (and algorithm) can be configured on a per-VSAN basis to be either a hash of the source and destination addresses (S_ID, D_ID) or a hash also based on the exchange ID (S_ID, D_ID, OX_ID) of the frame. In this manner, all frames within the same flow (either between a single source and a single destination or within a single SCSI I/O operation) will always be forwarded on the same physical path, guaranteeing in-order delivery.

The second lookup is a statistics lookup. The switch uses this to maintain a series of statistics about what devices are communicating between one another. Statistics gathered include frame and byte counters from a given source to a given destination.

The third lookup is a per-VSAN ingress ACL lookup determined by VSAN, ingress port, source address, destination address, and a variety of other fields from the interswitch header and Fibre Channel frame header. The switch uses the result from this lookup to either permit the frame to be forwarded, drop the frame, or perform any additional inspection on the frame. This also forms the basis for the Cisco implementation of hard zoning within Fibre Channel.

If traffic from a given source address to a given destination address is marked for IVR, the final forwarding step is to rewrite the VSAN ID and optionally the source and destination addresses (S_ID, D_ID) of the frame.

3. Queuing Module: The queuing module is responsible for scheduling the flow of frames through the switch (see Figure 7-4).

Point A is causing a head-of-line blocking. head-of-line blocking is generally solved by “dropping” frames. Figure 7-5 shows a congested destination. if there is congestion the ingress port is blocked. In the Fibre Channel world.Figure 7-4 Queuing Module Logic It uses distributed virtual output queuing (VOQ) to prevent head-of-line blocking and is the functional block responsible for implementing QoS and fairness between frames contending for a congested output port. which in this picture is point A. So head-ofline blocking is less of an issue in the Ethernet world. All the packets sent from the source device to all the destination points are held up because the destination point A cannot accept any packet. In the Ethernet world. Note Head-of-line blocking is the congestion of one port resulting in blocking traffic to other ports. Figure 7-5 Head-of-Line Blocking .

Multicast Fibre Channel frames are replicated across multiple virtual output queues for each output port to which a multicast frame is destined to be forwarded.The queuing module is also responsible for providing frame buffering for ingress queuing if an output interface is congested. these can be set to any ratio. however. the DWRR algorithm keeps track of this excess usage and subtracts it from the queue’s next transmission bandwidth allocation. The default weights are 50 for the high queue. Central Arbitration Module: The central arbiters are responsible for scheduling frame transmission from ingress to egress. If a queue uses excess bandwidth. It monitors utilization in the transmit queues. DWRR is a scheduling algorithm that is used for congestion management in the network scheduler. Figure 7-7 portrays the distributed virtual output queuing. corresponding to four QoS levels per port: one priority queue and three queues using deficit-weighted round robin (DWRR) scheduling. Frames enter the queuing module and are queued in a VOQ based on the output port to which the frame is due to be sent (see Figure 7-6). granting requests to transmit from ingress forwarding . Frames queued in the priority queue will be scheduled before traffic in any of the DWRR queues. 30 for the medium queue. Figure 7-6 Virtual Output Queues Four virtual output queues are available for each output port. Unicast Fibre Channel frames (frames with only a single destination) are queued in one virtual output queue. Figure 7-7 Distributed Virtual Output Queuing 4a. and 20 for the low queue. Frames queued in the three DWRR queues are scheduled using whatever relative weightings are configured.

If there is no free buffer space. 4b. it operates internally at a rate three times faster than its endpoints. If the primary arbiter fails. Rather than having to arbitrate each frame independently. in the event of congestion on an output port.modules whenever there is an empty slot in the output port buffers. Because switch-wide arbitration is operating in a centralized manner. and each QoS level for each output port has its own virtual output queue. any congestion on an output port will result in distributed buffering across ingress ports and line cards. This also helps ensure that any blocking does not spread throughout the system. Another feature to maximize crossbar performance under peak frame rate is a feature called superframing. . the originating line card will issue a request to the central arbiters asking for a scheduling slot on the output port of the output line card. the arbiter will grant transmission to the output port in a fair (round-robin) manner. and nonoversubscribed switching capacity between line card modules.) The over speed helps ensure that the crossbar remains nonblocking even while sustaining peak traffic levels. Congestion does not result in head-of-line blocking because queuing at ingress utilizes virtual output queues. that is. the redundant arbiter can resume operation without any loss of frames or performance or suffer any delays caused by switchover. (Figure 7-8 portrays the crossbar switch architecture. multiple frames can be transferred back to back across the crossbar. Two redundant central arbiters are available on each Cisco MDS Director switch. Superframing improves the crossbar efficiency in the situation where there are multiple frames queued originating from one line card with a common destination. high-throughput. with the redundant arbiter snooping all requests issued at and grants offered by the primary active arbiter. the arbiter will wait until there is a free buffer. Anytime a frame is to be transferred from ingress to egress. nonblocking. Crossbar Switch Fabric Module: Dual redundant crossbar switch fabrics are used on all Director switches to provide low-latency. They are capable of scheduling more than a billion frames per second. fairness is also guaranteed. If there is an empty slot in the output port buffer. In this manner. The crossbar itself operates three times over speed. the active central arbiter will respond to the request with a grant.

5.Figure 7-8 Centralized Crossbar Architecture Crossbar switch fabric modules depend on clock modules for crossbar channel synchronization. and enforce some additional ACL checks (logical unit number [LUN] zoning and read-only zoning if configured). If the frame is valid. There is also additional Inter VSAN routing IVR frame processing. the egress-forwarding module has signaled the central arbiter that there is output buffer space available for receiving frames. Figure 7-9 portrays it in a flow diagram. Figure 7-9 Egress Frame Forwarding Logic Before the frame arrives. make sure that the Fibre Channel frame is valid. if the frame requires IVR. the first processing step is to validate that the frame is error free and has a valid CRC. Egress Forwarding Module: The primary role of the egress forwarding module is to perform some additional frame checks. and outbound QoS queuing. When a frame arrives at the egressforwarding module from the crossbar switch fabric. the egress-forwarding module will issue an ACL table lookup to see if any .

the frame time stamp is checked. ACLs applied on egress include LUN zoning and read-only zoning ACLs. The next processing step is to finalize any Fibre Channel frame header rewrites associated with IVR. it is very unlikely for any frames to be expired. Dropping of expired frames is in accordance with Fibre Channel standards and sets an upper limit for how long a frame can be queued within a single switch. Egress PHY/MAC Modules: Frames arriving into the egress PHY/MAC module first have their switch internal header removed. the frame is queued for transmission to the destination port MAC with queuing on a per-CoS basis matching the DWRR queuing and configured QoS policy map. Finally. and if the frame has been queued within the switch for too long it is dropped. .additional ACLs need to be applied in permitting or denying the frame to flow to its destination. 6. Next. Note that 500 milliseconds is an extremely long time to buffer a frame and typically occurs only when long. an Enhanced ISL (EISL) header is prepended to the frame. If the output port is a Trunked E_Port (TE_Port). Under normal operating conditions. The default drop timer is 500 milliseconds and can be tuned using the fcdroplatency switch configuration setting. slow WAN links subject to packet loss are combined with a number of Fibre Channel sources. Figure 7-10 portrays how the in-order delivery frames are delivered in order.

a link failure occurs). Unfortunately. and inserting other Fibre Channel primitives over the cable. when a topology change occurs (for example. . inserting SoF and EoF markers. This guarantee extends across an entire multiswitch fabric. frames might be delivered out of order. provided the fabric is stable and no topology changes are occurring. newer frames transmitted down a new non-congested path might pass older frames queued in the previous congested path. In this case. The outbound MAC is responsible for formatting the frame into the correct encoding over the cable.Figure 7-10 In-Order Delivery Finally. the frames are transmitted onto the cable of the output port. shifting traffic from what might have been a congested link to one that is no longer congested. Reordering of frames can occur when routing changes are instantiated. Lossless Networkwide In-Order Delivery Guarantee The Cisco MDS switch architecture guarantees that frames can never be reordered within a switch.

and Fibre Channel over IP (FCIP) in a single platform. Small Computer System Interface over IP (iSCSI). Because of this. manageability. Cisco NX-OS Software is the network software for the Cisco MDS 9000 family and Cisco Nexus family data center switching products. stable. providing a modular and sustainable base for the long term. Cisco MDS 9000 NX-OS supports IBM Fibre Connection (FICON).To protect against this. For mainframe environments. availability. This partitioning of fabric services greatly reduces network instability by containing fabric reconfiguration settings and error conditions within an individual VSAN. and standard Linux core. The strict traffic segregation provided by VSANs helps ensure that the control and data traffic of a given VSAN are confined within the VSAN’s own domain. Users can create SAN administrator roles that are limited in scope . and security. Multiprotocol Support In addition to supporting Fibre Channel Protocol (FCP). FCoE. Cisco MDS Software and Storage Services The foundation of the data center network is the network software that runs the switches in the network. VSAN capabilities enable Cisco MDS 9000 NX-OS to logically divide a large physical fabric into separate. Each VSAN is a logically and functionally separate SAN with its own set of Fibre Channel fabric services. Formerly known as Cisco SAN-OS. Common Software Across All Platforms Cisco MDS 9000 NX-OS runs on all Cisco MDS 9000 family switches and Cisco Nexus family of data center Ethernet switches. Native iSCSI support in the Cisco MDS 9000 family helps customers consolidate storage for a wide range of servers into a common pool on the SAN. VSANs facilitate true hardware-based separation of FICON and open systems. It is based on a secure. all SAN switch vendors default to disable network-IOD except where its use is mandated by channel protocols such as IBM Fiber Connection (FICON). isolated environments to improve Fibre Channel SAN scalability. The general idea here is that it is better to stop all traffic for a period of time than to allow the possibility for some traffic to arrive out of order. Flexibility and Scalability Cisco MDS 9000 NX-OS is a highly flexible and scalable platform for enterprise SANs. all Fibre Channel switch vendors have a fabricwide configuration setting called a networkwide in-order delivery (network-IOD) guarantee. Virtual SANs Virtual SAN (VSAN) technology partitions a single physical SAN into multiple VSANs. This mechanism is a little crude because it guarantees that an ISL failure or topology change will be a disruptive event to the entire fabric.1 it is rebranded as Cisco MDS 9000 NX-OS Software. This stops the forwarding of new frames when there has been an ISL failure or a topology change. starting from release 4. No new forwarding decisions are made during the time it takes for traffic to flush from existing interface queues.

Valuable resources such as tape libraries can be easily shared without compromise. and other roles can be set up to allow configuration and management within specific VSANs only. nor can initiators access any resources aside from the ones designated with IVR. Cisco Data Mobility Manager Cisco Data Mobility Manager (DMM) is a SAN-based. VSANs are supported across FCIP links between SANs. intelligent fabric application offering data migration between heterogeneous disk arrays. Inter-VSAN (IVR) routing is used to transport data traffic between specific initiators and targets on different VSANs without merging VSANs into a single logical fabric. provide flexibility and meet the high-availability requirements of enterprise data centers. IVR also can be used with FCIP to create more efficient business-continuity and disaster-recovery solutions. Intelligent Fabric Applications Cisco MDS 9000 NX-OS forms a solid foundation for delivery of network-based storage applications and services such as virtualization.to certain VSANs. data migration. Cisco DCNM-SAN is used to administer Cisco DMM. Cisco DMM is transparent to host applications and storage devices. a SAN administrator role can be set up to allow configuration of all platform-specific capabilities. I/O Accelerator The Cisco MDS 9000 I/O Accelerator (IOA) feature is a SAN-based intelligent fabric application that provides SCSI acceleration to significantly improve the number of SCSI I/O operations per second (IOPS) over long distances in a Fibre Channel or FCIP SAN by reducing the effect of transport latency on the processing of each operation. The feature also extends the distance for . Fibre Channel control traffic does not flow between VSANs. Easy insertion and provisioning of appliance-based storage applications is achieved by removing the need to insert the appliance into the primary I/O path between servers and storage. and replication on Cisco MDS 9000 family switches. and multipath supports. such as data verification. This approach improves the manageability of large SANs and reduces disruptions resulting from human errors by isolating the effect of a SAN administrator’s action to a specific VSAN whose membership can be isolated based on the switch ports or World Wide Names (WWN) of attached devices. Trunking allows Inter-Switch Links (ISL) to carry traffic for multiple VSANs on the same physical link. unequal-size logical unit number (LUN) migration. Network-Assisted Applications Cisco MDS 9000 family network-assisted storage applications offer deployment flexibility and investment protection by allowing the deployment of appliance-based storage applications for any server or storage device in the SAN without the need to rewire SAN switches or end devices. continuous data protection. F-port trunking allows multiple VSANs on a single uplink in N-port virtualization (NPV) mode. Cisco DMM can be introduced without the need to rewire or reconfigure the SAN. extending VSANs to include devices at remote locations. For example. no additional management software is required. snapshots. Advanced capabilities. Cisco DMM offers rate-adjusted online migration to enable applications to continue uninterrupted operation while data migration is in progress. The Cisco MDS 9000 family also implements trunking for VSANs. data acceleration.

High availability and resiliency: IOA combines port channels and equal-cost multipath (ECMP) routing with disk and tape acceleration for higher availability and resiliency.and FCIP-based business-continuity and disaster-recovery solutions without the need to add or manage a separate device. Write acceleration significantly reduces latency and extends the distance for disk replication. is a mainframe-based software replication solution in widespread use in financial institutions worldwide. Cisco has supported XRC over FCIP at distances of up to 124 miles (200 km). This data is buffered within the Cisco MDS 9000 family module that is local to the SDM. IOA can be deployed with disk data-replication solutions such as EMC Symmetrix Remote Data Facility (SRDF). Network Security Cisco takes a comprehensive approach to network security with Cisco MDS 9000 NX-OS. and HDS TrueCopy to extend the distance between data centers or reduce the effects of latency. Integration of data compression into IOA enables implementation of more efficient Fibre Channel. XRC Acceleration IBM Extended Remote Copy (XRC). Service clustering: IOA delivers redundancy and load balancing for I/O acceleration. known as the System Data Mover (SDM). XRC Acceleration accelerates dynamic updates from the primary to the secondary direct-access storage device (DASD) by reading ahead of the remote replication IBM System z. Speed independence: IOA can accelerate 1/2/4/8/10/16-Gbps links and consolidate traffic over 8/10/16-Gbps ISLs. EMC MirrorView. Compression: Compression in IOA increases the effective MAN and WAN bandwidth without the need for costly infrastructure upgrades. In the past. The new Cisco MDS 9000 XRC Acceleration feature supports essentially unlimited distances. IOA includes the following features: Transport independence: IOA provides a unified solution to accelerate I/O operations over the MAN and WAN. Write acceleration: IOA provides write acceleration for Fibre Channel and FCIP networks. In .disaster-recovery and business-continuity applications over WANs and metropolitan-area networks (MAN). reducing or eliminating the latency effects that can otherwise reduce performance at distances of 124 miles (200 km) or greater. Tape acceleration improves the performance of tape devices and enables remote tape vaulting over extended distances for data backup for disaster-recovery purposes. Intuitive provisioning: IOA can be easily provisioned using Cisco DCNM-SAN. This process is sometimes referred to as XRC emulation or XRC extension. Tape acceleration: IOA provides tape acceleration for Fibre Channel and FCIP networks. IOA as a fabric service: IOA service units (interfaces) can be located anywhere in the fabric and can provide acceleration service to any port. Transparent insertion: IOA requires no fabric reconfiguration or rewiring and can be transparently turned on by enabling the IOA license. which is now officially renamed IBM z/OS Global Mirror. IOA can also be used to enable remote tape backup and restore operations without significant throughput degradation.

The encryption algorithm is 128-bit Advanced Encryption Standard (AES) and enables either AES-Galois/Counter Mode (AESGCM) or AES-Galois Message Authentication Code (AES-GMAC) for an interface. At ingress. Diffie-Hellman Challenge Handshake Authentication Protocol (DH-CHAP) is used to perform authentication locally in the Cisco MDS 9000 family or remotely through RADIUS or TACACS+. because the encryption is at full line rate with no performance penalty. Switch and Host Authentication Fibre Channel Security Protocol (FC-SP) capabilities in Cisco MDS 9000 NX-OS provide switchto-switch and host-to-switch authentication for enterprisewide fabrics. There are two primary use cases for Cisco TrustSec Fibre Channel Link Encryption: Many customers will want to help ensure the privacy and integrity of any data that leaves the secure confines of their data center through a native Fibre Channel link. AES-GCM encrypts and authenticates frames. which provide true isolation of SAN-attached devices.2(1). and supports the industry-standard security protocol for authentication. If authentication fails.addition to VSANs. IP Security for FCIP and iSCSI Traffic flowing outside the data center must be protected. and accounting (AAA). Cisco MDS 9000 NX-OS uses Internet Key Exchange Version 1 (IKEv1) and IKEv2 protocols to dynamically set up security associations for IPsec using preshared keys for remote-side authentication. such as role-based access control (RBAC) and Cisco TrustSec Fibre Channel Link Encryption. A future release of Cisco NX-OS software will enable encryption of Fibre Channel data between Eports of Cisco MDS 9000 16-Gbps Fibre Channel switching modules. authorization. Fibre Channel data between E-ports of Cisco MDS 9000 8-Gbps Fibre Channel switching modules can be encrypted. Encryption is performed at line rate by encapsulating frames at egress with encryption using the GCM mode of AES 128-bit encryption. Cisco MDS 9000 NX-OS offers other significant security features. Role-Based Access Control . coarse wavelength-division multiplexing (CWDM). such as dark fiber. Starting with Cisco MDS 9000 NX-OS 4. data encryption for privacy. such as those in defense and intelligence services. and AES-GMAC authenticates only the frames that are being passed between the two E-ports. frames are decrypted and authenticated with integrity check. a switch or host cannot join the fabric. and data integrity for both FCIP and iSCSI connections on the Cisco MDS 9000 family switches. Cisco TrustSec Fibre Channel Link Encryption is an extension of the FC-SP feature and uses the existing FC-SP architecture. Cisco MDS 9000 family management has been certified for Federal Information Processing Standards (FIPS) 140-2 Level 2 and validated for Common Criteria (CC) Evaluation Assurance Level 3 (EAL 3). Other customers. Cisco TrustSec Fibre Channel Link Encryption Cisco TrustSec Fibre Channel Link Encryption addresses customer needs for data integrity and privacy. or dense wavelength-division multiplexing (DWDM). The proven IETF standard IP Security (IPsec) capabilities in Cisco MDS 9000 NX-OS offer secure authentication. may be even more security focused and choose to encrypt all traffic within their data center as well.

Zoning
Zoning provides access control for devices within a SAN. Cisco MDS 9000 NX-OS supports the following types of zoning:

N-port zoning: Defines zone members based on the end-device (host and storage) port:
WWN
Fibre Channel identifier (FC-ID)

Fx-port zoning: Defines zone members based on the switch port:
WWN
WWN plus interface index, or domain ID plus interface index
Domain ID plus port number (for Brocade interoperability)
Fabric port WWN
Interface plus domain ID, or interface plus switch WWN of the interface

iSCSI zoning: Defines zone members based on the host zone:
iSCSI name
IP address

To provide strict network security, zoning is always enforced per frame using access control lists (ACLs) that are applied at the ingress switch. All zoning policies are enforced in hardware, and none of them cause performance degradation. The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Both standards support the basic and enhanced zoning functionalities. Enhanced zoning session-management capabilities further enhance security by allowing only one user at a time to modify zones.

In addition, the software provides support for smart zoning. Smart zoning enables customers to intuitively create fewer multimember zones at an application level, a physical cluster level, or a virtualized cluster level without sacrificing switch resources, instead of creating multiple two-member zones, which dramatically reduces operational complexity while consuming fewer resources on the switch.
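The following sketch shows the smart zoning workflow described above on a hypothetical VSAN 10 with placeholder pWWNs; because members are tagged as initiator or target, the switch programs ACL entries only between initiator-target pairs:

switch(config)# zone smart-zoning enable vsan 10
switch(config)# zone name APP1-ZONE vsan 10
switch(config-zone)# member pwwn 10:00:00:00:c9:aa:bb:01 initiator
switch(config-zone)# member pwwn 50:06:01:60:cc:dd:ee:01 target
switch(config-zone)# exit
switch(config)# zoneset name FABRIC-A vsan 10
switch(config-zoneset)# member APP1-ZONE
switch(config-zoneset)# exit
switch(config)# zoneset activate name FABRIC-A vsan 10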

Port Security and Fabric Binding
Port security locks the mapping of an entity to a switch port. The entities can be hosts, targets, or switches, identified by their WWNs. This locking mechanism helps ensure that unauthorized devices connecting to the switch port do not disrupt the SAN fabric. Fabric binding extends port security to allow ISLs between only specified switches.

Availability
Cisco MDS 9000 NX-OS provides resilient software architecture for mission-critical hardware deployments.

Nondisruptive Software Upgrades
Cisco MDS 9000 NX-OS provides nondisruptive software upgrades for director-class products with redundant hardware. Minimally disruptive upgrades are provided for the other Cisco MDS 9000 family fabric switches that do not have redundant supervisor engine hardware.

Stateful Process Failover
Cisco MDS 9000 NX-OS automatically restarts failed software processes and provides stateful supervisor engine failover to help ensure that any hardware or software failures on the control plane do not disrupt traffic flow in the fabric.

ISL Resiliency Using Port Channels
Port channels aggregate multiple physical ISLs into one logical link with higher bandwidth and port resiliency for both Fibre Channel and FICON traffic. With this feature, up to 16 expansion ports (E-ports), trunking E-ports (TE-ports), or F-ports connected to NP-ports can be bundled into a port channel. ISL ports can reside on any switching module, and they do not need a designated master port. Thus, if a port or a switching module fails, the port channel continues to function properly without requiring fabric reconfiguration. Cisco MDS 9000 NX-OS uses a protocol to exchange port channel configuration information between adjacent switches to simplify port channel management, including misconfiguration detection and auto creation of port channels among compatible ISLs. In the autoconfigure mode, ISLs with compatible parameters automatically form channel groups.
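A minimal sketch of the port channel behavior described above, assuming hypothetical ISL ports fc1/1 and fc2/1 on different switching modules; channel mode active uses the exchanged port channel protocol to detect misconfiguration:

switch(config)# interface port-channel 10
switch(config-if)# channel mode active
switch(config-if)# exit
! Bundle member ports from two different modules
switch(config)# interface fc1/1
switch(config-if)# channel-group 10 force
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# interface fc2/1
switch(config-if)# channel-group 10 force
switch(config-if)# no shutdown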

iSCSI, FCIP, and Management-Interface High Availability
The Virtual Routing Redundancy Protocol (VRRP) increases the availability of Cisco MDS 9000 family management traffic routed over both Ethernet and Fibre Channel networks. VRRP increases IP network availability for iSCSI and FCIP connections by allowing failover of connections from one port to another, and dynamically manages redundant paths for external Cisco MDS 9000 family management applications, making control-traffic path failures transparent to applications. The autotrespass feature enables high-availability iSCSI connections to RAID subsystems, independent of host software. Trespass commands can be sent automatically when Cisco MDS 9000 NX-OS detects failures on active paths; no manual intervention is required. This feature facilitates the failover of an iSCSI volume from one IP services port to any other IP services port, either locally or on another Cisco MDS 9000 family switch.

Port Tracking for Resilient SAN Extension
The Cisco MDS 9000 NX-OS port-tracking feature enhances SAN extension resiliency. If a Cisco MDS 9000 family switch detects a WAN or MAN link failure, it takes down the associated disk-array link when port tracking is configured, so the array can redirect a failed I/O operation to another link without waiting for an I/O timeout. Otherwise, disk arrays must wait seconds for an I/O timeout to recover from a network link failure.

Manageability
Cisco MDS 9000 NX-OS incorporates many management features that facilitate effective management of growing storage environments with existing resources. Distributed device alias services provide fabricwide alias names for host bus adapters (HBAs), eliminating the need to reenter names when devices are moved. Cisco fabric services simplify SAN provisioning by automatically distributing configuration information to all switches in a storage network.

Management interfaces supported by Cisco MDS 9000 NX-OS include the following:

CLI through a serial port or out-of-band (OOB) Ethernet management port, and in-band IP over Fibre Channel (IPFC)
SNMPv1, v2, and v3 over an OOB management port and in-band IPFC
Cisco FICON Control Unit Port (CUP) for in-band management from IBM S/390 and z/900 processors
IPv6 support for iSCSI, FCIP, and management traffic routed in band and out of band

Open APIs
Cisco MDS 9000 NX-OS provides a truly open API for the Cisco MDS 9000 family based on the industry-standard SNMP, Common Information Model (CIM), and SMI-S standards. The Cisco DCNM-SAN Storage Management Initiative Specification (SMI-S) server provides an XML interface with an embedded agent that complies with the Web-Based Enterprise Management (WBEM) and SMI-S standards. Commands performed on the switches by Cisco DCNM-SAN use this open API extensively. Also, all major storage and network management software vendors use the Cisco MDS 9000 NX-OS management API.

Fabric Device Management Interface (FDMI) capabilities provided by Cisco MDS 9000 NX-OS simplify management of devices such as Fibre Channel HBAs through in-band communications. With FDMI, management applications can gather HBA and host OS information without the need to install proprietary host agents.

Configuration and Software-Image Management
The CiscoWorks solution is a commonly used suite of tools for a wide range of Cisco devices such as IP switches, routers, storage devices, and wireless devices. The Cisco MDS 9000 NX-OS open API allows the CiscoWorks Resource Manager Essentials (RME) application to provide centralized Cisco MDS 9000 family configuration management, software-image management, intelligent system log (syslog) management, and inventory management, including switch, server, and zoning profiles. The open API also helps CiscoWorks Device Fault Manager (DFM) monitor Cisco MDS 9000 family device health, such as supervisor memory and processor utilization. CiscoWorks DFM can also monitor the health of important components such as fans, power supplies, switch ports, and temperature.

N-Port Virtualization
Cisco MDS 9000 NX-OS supports industry-standard N-port identifier virtualization (NPIV), which allows multiple N-port fabric logins concurrently on a single physical Fibre Channel link. HBAs that support NPIV can help improve SAN security by enabling configuration of zoning and port security independently for each virtual machine (OS partition) on a host. In addition to being useful for server connections, NPIV is beneficial for connectivity between core and edge SAN switches. NPV is a complementary feature that reduces the number of Fibre Channel domain IDs in core-edge SANs. NPIV is used by edge switches in the NPV mode to log in to multiple end devices that share a link to the core switch. Cisco MDS 9000 family fabric switches operating in the NPV mode do not join a fabric; they just pass traffic between core switch links and end devices, which eliminates the domain IDs for these switches.
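Enabling these features is a one-line operation on each side, as the following sketch shows; note that switching a fabric switch to NPV mode is disruptive (the switch reboots and its configuration is erased), so the command below is illustrative rather than something to paste into a production switch:

! On the core switch, allow multiple N-port logins per F-port:
core(config)# feature npiv
! On the edge fabric switch, operate in NPV mode (disruptive; triggers a reboot):
edge(config)# feature npv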

FlexAttach
Cisco MDS 9000 NX-OS supports the FlexAttach feature. One of the main problems faced today in SAN environments is the time and effort required to install and replace servers. The process involves both SAN and server administrators, and the interaction and coordination between them can make the process time consuming. To alleviate the need for interaction between SAN and server administrators, the SAN configuration should not have to be changed when a new server is installed or an existing server is replaced. FlexAttach addresses these problems, reducing the number of configuration changes and the time and coordination required by SAN and server administrators when installing and replacing servers. This feature is available only for Cisco MDS 9000 family blade switches and the Cisco MDS 9100 Series Multilayer Fabric Switches. In addition, Cisco MDS 9000 NX-OS 6.2(1) supports the FlexAttach feature on the Cisco MDS 9148 Multilayer Fabric Switch when the NPV mode is enabled.

Autolearn for Network Security Configuration
The autolearn feature allows the Cisco MDS 9000 family to automatically learn about devices and switches that connect to it. The administrator can use this feature to configure and activate network security features such as port security without having to manually configure the security for each port. A minimal configuration sketch appears at the end of this section.

Network Boot for iSCSI Hosts
Cisco MDS 9000 NX-OS simplifies iSCSI-attached host management by providing network-boot capability.

Internet Storage Name Service
The Internet Storage Name Service (iSNS) helps existing TCP/IP networks function more effectively as SANs by automating discovery, management, and configuration of iSCSI devices, either through the highly available, distributed iSNS built in to Cisco MDS 9000 NX-OS or through external iSNS servers. iSCSI targets presented by Cisco MDS 9000 family IP storage services and Fibre Channel device-state-change notifications are registered by Cisco MDS 9000 NX-OS.

Cisco Data Center Network Manager Server Federation
Cisco DCNM server federation improves management availability and scalability by load balancing fabric-discovery, performance-monitoring, and event-handling processes. Up to 10 Cisco DCNM instances can form a federation (or cluster) that can manage more than 75,000 end devices. A storage administrator can discover and move fabrics within a federation for the purposes of load balancing, high availability, and disaster recovery. In addition, users can connect to any Cisco DCNM instance and view all reports, statistics, inventory, and logs from a single web browser. Cisco DCNM provides a single management pane for viewing and managing all fabrics within a single federation.
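The autolearn sketch referenced earlier in this section follows, assuming a hypothetical VSAN 10. Activating port security learns the devices currently logged in; disabling auto-learn afterward locks the learned database:

switch(config)# feature port-security
! Activate port security; auto-learn is enabled by default on activation
switch(config)# port-security activate vsan 10
! Once all legitimate devices have logged in, freeze the learned entries
switch(config)# no port-security auto-learn vsan 10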

Proxy iSCSI Initiator
The proxy iSCSI initiator simplifies configuration procedures when multiple iSCSI initiators (hosts) are assigned to the same iSCSI target ports. Proxy mode reduces the number of separate times that back-end tasks such as Fibre Channel zoning and storage-device configuration must be performed.

iSCSI Server Load Balancing
Cisco MDS 9000 NX-OS helps simplify large-scale deployment and management of iSCSI servers. iSCSI server load balancing (iSLB) is available to automatically redirect servers to the next available Gigabit Ethernet port. In addition to allowing fabric-wide iSCSI configuration from a single switch, iSLB greatly simplifies iSCSI configuration and provides automatic, rapid recovery from IP connectivity problems for high availability.

Traffic Management
In addition to implementing the Fabric Shortest Path First (FSPF) Protocol to calculate the best path between two switches and providing in-order delivery features, Cisco MDS 9000 NX-OS enhances the architecture of the Cisco MDS 9000 family with several advanced traffic-management features that help ensure consistent performance of the SAN under varying load conditions.

Quality of Service
Four distinct quality-of-service (QoS) priority levels are available: three for Fibre Channel data traffic and one for Fibre Channel control traffic. Control traffic is assigned the highest QoS priority automatically, to accelerate convergence of fabric-wide protocols such as FSPF, zone merges, and principal switch selection. Fibre Channel data traffic for latency-sensitive applications can be configured to receive higher priority than throughput-intensive applications using data QoS priority levels. Data traffic can be classified for QoS by the VSAN identifier, zone, N-port WWN, or FC-ID. Zone-based QoS helps simplify configuration and administration by using the familiar zoning concept.

Extended Credits
Full line-rate Fibre Channel ports provide at least 255 buffer credits as standard. Using extended credits, up to 4095 buffer credits from a pool of more than 6000 buffer credits for a module can be allocated to ports as needed to greatly extend the distance of Fibre Channel SANs. Adding credits lengthens distances for Fibre Channel SAN extension.
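As a sketch of the extended credits feature, the following allocates a large buffer-to-buffer credit pool to a hypothetical long-distance ISL port fc1/1; the exact command set and maximum values vary by module generation and require the appropriate license, so treat this as illustrative:

! Enable extended BB_credits globally, then allocate to the long-haul port
switch(config)# fcrxbbcredit extended enable
switch(config)# interface fc1/1
switch(config-if)# switchport fcrxbbcredit extended 4095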

IPv6
Cisco MDS 9000 NX-OS provides IPv6 support for FCIP, iSCSI, and management traffic routed in-band and out-of-band. A complete dual stack has been implemented for IPv4 and IPv6 to remain compatible with the large base of IPv4-compatible hosts. This dual-stack approach allows the Cisco MDS 9000 family switches to easily connect to older IP networks, transitional networks with a mixture of both versions, pure IPv6 data networks, and Cisco MDS 9000 family switches running previous software revisions.

Virtual Output Queuing
Virtual output queuing (VOQ) buffers Fibre Channel traffic at the ingress port to eliminate head-of-line blocking. The switch is designed so that the presence of a slow N-port on the SAN does not affect the performance of any other port on the SAN.

Load Balancing of Port Channel Traffic
Port channels load balance Fibre Channel traffic using a hash of the source FC-ID and destination FC-ID, and optionally the exchange ID. Load balancing using port channels is performed over both Fibre Channel and FCIP links. Cisco MDS 9000 NX-OS also can be configured to load balance across multiple same-cost FSPF routes.

Fibre Channel Port Rate Limiting
The Fibre Channel port rate-limiting feature for the Cisco MDS 9100 Series Multilayer Fabric Switches controls the amount of bandwidth available to individual Fibre Channel ports within groups of four host-optimized ports. Limiting bandwidth on one or more Fibre Channel ports allows the other ports in the group to receive a greater share of the available bandwidth under high-use conditions. Port rate limiting is also beneficial for throttling WAN traffic at the source to help eliminate excessive buffering in Fibre Channel and IP data network devices.

iSCSI and SAN Extension Performance Enhancements
iSCSI and FCIP enhancements address out-of-order delivery problems, optimize transfer sizes for the IP network topology, and reduce latency by eliminating TCP connection setup for most data transfers. FCIP performance is further enhanced for SAN extension by compression and write acceleration. For WAN performance optimization, Cisco MDS 9000 NX-OS includes a SAN extension tuner, which directs SCSI I/O commands to a specific virtual target and reports IOPS and I/O latency results, helping determine the number of concurrent I/O operations needed to increase FCIP throughput.

FCIP Compression
FCIP compression in Cisco MDS 9000 NX-OS increases the effective WAN bandwidth without the need for costly infrastructure upgrades. Gigabit Ethernet ports for the Cisco MDS 9222i Multiservice Modular Switch (MMS), MDS 9000 18/4-Port Multiservice Module (MSM), and MDS 9000 16-Port Storage Services Node (SSN) achieve up to a 43:1 compression ratio, with typical ratios of 4:1 over a variety of data sources. A future release of Cisco NX-OS software will support similar compression ratios on the 10 Gigabit Ethernet FCIP ports on the new Cisco MDS 9250i, the next-generation intelligent services switch in the Cisco MDS 9200 Series Multiservice Switch portfolio. By integrating data compression into the Cisco MDS 9000 family, more efficient FCIP-based business-continuity and disaster-recovery solutions can be implemented without the need to add and manage a separate device.
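A compressed, write-accelerated FCIP tunnel of the kind described above is built from an FCIP profile bound to an IP storage port plus an FCIP interface, as in the following sketch; the IP addresses and numbering are placeholders, and the underlying Gigabit Ethernet interface is assumed to be already configured:

switch(config)# feature fcip
switch(config)# fcip profile 10
switch(config-profile)# ip address 192.0.2.1
switch(config-profile)# exit
switch(config)# interface fcip 1
switch(config-if)# use-profile 10
switch(config-if)# peer-info ipaddr 192.0.2.2
! Enable FCIP write acceleration and compression on the tunnel
switch(config-if)# write-accelerator
switch(config-if)# ip-compression auto
switch(config-if)# no shutdown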

FCIP Tape Acceleration
Centralization of tape backup and archive operations provides significant cost savings by allowing expensive robotic tape libraries and high-speed drives to be shared. This centralization poses a challenge for remote backup media servers that need to transfer data across a WAN. High-performance streaming tape drives require a continuous flow of data to avoid write-data underruns, which dramatically reduce write throughput. Without FCIP tape acceleration, the effective WAN throughput for remote tape operations decreases exponentially as the WAN latency increases. FCIP tape acceleration helps achieve nearly full throughput over WAN links for remote tape-backup and restore operations for both open systems and mainframe environments.

Troubleshooting, Serviceability, and Diagnostics
Cisco MDS 9000 NX-OS is among the first storage network OSs to provide a broad set of serviceability features that simplify the process of building, expanding, and maintaining SANs. These features also increase availability by decreasing SAN disruptions for maintenance and reducing recovery time from problems. Cisco MDS 9000 NX-OS can shut down malfunctioning ports if errors exceed user-defined thresholds. This capability helps isolate problems and reduce risks by preventing errors from spreading to the whole fabric.

Call Home
Cisco MDS 9000 NX-OS offers a Call Home feature for proactive fault management. Cisco Call Home provides a notification system triggered by software and hardware events. The Call Home feature forwards the alarms and events, packaged with other relevant information in a standard format, to external entities. Alert grouping capabilities and customizable destination profiles offer the flexibility needed to notify specific individuals or support organizations only when necessary. These notification messages can be used to automatically open technical-assistance tickets and resolve problems before they become critical. External entities can include, but are not restricted to, an administrator's e-mail account or pager, a server in-house or at a service provider's facility, and the Cisco Technical Assistance Center (TAC).

Cisco Switched Port Analyzer and Cisco Fabric Analyzer
Typically, debugging errors in a Fibre Channel SAN requires the use of a Fibre Channel analyzer, which causes significant disruption of traffic in the SAN. The Cisco Switched Port Analyzer (SPAN) feature allows an administrator to analyze all traffic between ports (called the SPAN source ports) by nonintrusively directing the SPAN-session traffic to a SPAN destination port that has an external analyzer attached to it. SPAN sources can include Fibre Channel ports and FCIP and iSCSI virtual ports for IP services. The SPAN destination port does not have to be on the same switch as the SPAN source ports; any Fibre Channel port in the fabric can be a source. The embedded Cisco Fabric Analyzer allows the Cisco MDS 9000 family to save Fibre Channel control traffic within the switch for text-based analysis, or to send IP-encapsulated Fibre Channel control traffic to a remote PC for decoding and display using the open source Ethereal network-analyzer application. Fibre Channel control traffic therefore can be captured and analyzed without an expensive Fibre Channel analyzer.

Fibre Channel Ping and Fibre Channel Traceroute
Cisco MDS 9000 NX-OS brings to storage networks features such as Fibre Channel ping and Fibre Channel traceroute. With Fibre Channel ping, administrators can check the connectivity of an N-port and determine its round-trip latency. With Fibre Channel traceroute, administrators can check the reachability of a switch by tracing the path followed by frames and determining hop-by-hop latency.
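Both tools are exec-mode commands, as the following sketch shows with a placeholder pWWN and FC-ID on a hypothetical VSAN 10:

! Check N-port connectivity and round-trip latency
switch# fcping pwwn 50:06:01:60:cc:dd:ee:01 vsan 10
! Trace the hop-by-hop path toward a destination FC-ID
switch# fctrace fcid 0xef0010 vsan 10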

SCSI Flow Statistics
LUN-level SCSI flow statistics can be collected for any combination of initiator and target. The scope of these statistics includes read, write, and control commands and error statistics. This feature is available only on the Cisco MDS 9000 family storage service modules.

System Log
The Cisco MDS 9000 family syslog capabilities greatly enhance debugging and management. Syslog severity levels can be set individually for all Cisco MDS 9000 NX-OS functions, facilitating logging and display of messages ranging from brief summaries to very detailed information for debugging. Messages are logged internally. Messages can be selectively routed to a console and to log files, and they can be sent to external syslog servers. Syslog provides a comprehensive supplemental source of information for managing Cisco MDS 9000 family switches; messages ranging from only high-severity events to detailed debugging messages can be logged.

Other Serviceability Features
Additional serviceability features include the following:

Online diagnostics: Cisco MDS 9000 NX-OS provides advanced online diagnostics capabilities. Starting with Cisco MDS 9000 NX-OS 6.2, the powerful Cisco Generic Online Diagnostics (GOLD) framework replaces the Cisco Online Health Management System (OHMS) diagnostic framework on the new Cisco MDS 9700 Series Multilayer Directors. Cisco GOLD is a suite of diagnostic facilities for verifying that hardware and internal data paths are operating as designed. Boot-time diagnostics, continuous monitoring, standby fabric loopback tests, and on-demand and scheduled tests are part of the Cisco GOLD feature set. Periodically, tests are run to verify that supervisor engines, switching modules, optics, and interconnections are functioning properly. These online diagnostics do not adversely affect normal Fibre Channel operations, allowing them to be run in production SAN environments.

Loopback testing: The Cisco MDS 9000 family uses offline port loopback testing to check port capabilities. During testing, a port is isolated from the external connection, and traffic is looped internally from the transmit path back to the receive path.

Network Time Protocol (NTP) support: NTP synchronizes system clocks in the fabric, providing a precise time base for all switches. An NTP server must be accessible from the fabric through the OOB Ethernet port. Within the fabric, NTP messages are transported using IPFC. A minimal configuration sketch follows this list.

IPFC: The Cisco MDS 9000 family provides the capability to carry IP packets over a Fibre Channel network. With this feature, an external management station attached through an OOB management port to a Cisco MDS 9000 family switch in the fabric can manage all other switches in the fabric using the in-band IPFC Protocol.

Cisco Embedded Event Manager (EEM): Cisco EEM is a powerful device and system management technology integrated into Cisco NX-OS. Cisco EEM helps customers harness the network intelligence intrinsic to the Cisco software and enables them to customize behavior based on network events as they happen.

Enhanced event logging and reporting with SNMP traps and syslog: Cisco MDS 9000 family events filtering and remote monitoring (RMON) provide complete and exceptionally flexible control over SNMP traps. Traps can be generated based on a threshold value, switch counters, or time stamps.
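The NTP and syslog capabilities referenced in the preceding list take only a line or two each to configure, as in the following sketch; the server addresses are placeholders:

switch(config)# ntp server 192.0.2.10
! Log messages at severity 6 (informational) and more severe to an external server
switch(config)# logging server 192.0.2.20 6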

Cisco MDS 9000 Family Software License Packages
Most Cisco MDS 9000 family software features are included in the standard package: the base configuration of the switch. However, some features are logically grouped into add-on packages that must be licensed separately. Cisco offers a set of advanced software features logically grouped into software license packages, such as the Cisco MDS 9000 Enterprise Package, SAN Extension over IP Package, Mainframe Package, DCNM Packages, DMM Package, IOA Package, and XRC Acceleration Package. On-demand port activation licenses are also available for the Cisco MDS 9000 family blade switches and 4-Gbps Cisco MDS 9100 Series Multilayer Fabric Switches.

Cisco license packages require a simple installation of an electronic license: no software installation or upgrade is required. Licenses can also be installed on the switch in the factory, if desired. Cisco MDS stores license keys on the chassis serial PROM (SPROM), so license keys are never lost even during a software reinstallation. All licensed features may be evaluated for up to 120 days before a license is required. If an administrative error does occur with licensing, the switch provides a grace period before the unlicensed features are disabled. The grace period allows plenty of time to correct the licensing issue.

Cisco Data Center Network Manager for SAN includes a centralized license management console that provides a single interface for managing licenses across all Cisco MDS switches in the fabric. This single console reduces management overhead and prevents problems because of improperly maintained licensing. Table 7-2 lists the Cisco MDS 9700 family licenses, and Table 7-3 lists the Cisco MDS 9500, 9200, and 9100 family licenses.
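Installing one of these packages is a single operation once the license file is on the switch, as in the following sketch; the filename and server address are placeholders:

switch# copy scp://admin@192.0.2.30/ENTERPRISE_PKG.lic bootflash:ENTERPRISE_PKG.lic
switch# install license bootflash:ENTERPRISE_PKG.lic
! Verify which features each installed license covers
switch# show license usage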

Table 7-2 Cisco MDS 9700 Family Licenses .


Table 7-3 Cisco MDS 9500, 9200, and 9100 Family Licenses

Cisco MDS Multilayer Directors
Cisco MDS 9500 and 9700 Series Multilayer Directors are SAN switches designed for deployment in large, scalable enterprise clouds. They enable seamless deployment of unified fabrics with high-performance Fibre Channel, FICON, and FCoE connectivity to achieve low total cost of ownership (TCO). They share the same operating system and management interface with Cisco Nexus data center switches. Figure 7-11 portrays the Cisco MDS 9000 Director switches since year 2002, and Table 7-4 explains the models.

Figure 7-11 Cisco MDS 9000 Director Switches

Table 7-4 Details of Cisco MDS 9500 and 9700 Series Director Switch Models

Cisco MDS 9700
Cisco MDS 9700 is available in a 10-slot and 6-slot configuration. It supports 2-, 4-, 8-, 10-, and 16-Gbps Fibre Channel and FCoE. MDS 9710 supports up to 384 2/4/8-Gbps or 4/8/16-Gbps autosensing line-rate Fibre Channel ports in a single chassis and up to 1152 Fibre Channel ports per rack with up to 24 terabits per second (Tbps) of Fibre Channel system bandwidth. The Cisco MDS 9710 also supports 10 Gigabit Ethernet clocked optics carrying Fibre Channel traffic. The multilayer architecture of the Cisco MDS 9700 series enables a consistent feature set over a

protocol-independent switch fabric. MDS 9710 and MDS 9706 switches transparently integrate Fibre Channel, FCoE, and FICON. They support multihop FCoE, extending connectivity from FCoE and Fibre Channel fabrics to FCoE and Fibre Channel storage devices. The Cisco MDS 9710 architecture, based on central arbitration and crossbar fabric, provides 16-Gbps line-rate, non-blocking, predictable performance across all traffic conditions for every port in the chassis. The combination of the 16-Gbps Fibre Channel switching module and the Fabric-1 crossbar switching modules enables up to 1.5 Tbps of front-panel Fibre Channel throughput between modules in each direction for each of the eight Cisco MDS 9710 payload slots. This per-slot bandwidth is twice the bandwidth needed to support a 48-port 16-Gbps Fibre Channel module at full line rate. Figure 7-12 portrays the Cisco MDS 9700 Director switches and their modular components.

Figure 7-12 Cisco MDS 9710 Director Switch

The Cisco MDS 9700 series switches enable redundancy on all major components, including the fabric card. The series provides grid redundancy on the power supply and 1+1 redundant supervisors, and users can add an additional fabric card to enable N+1 fabric redundancy.

The Cisco MDS 9710 Director has a 10-slot chassis and it supports the following:

Two Supervisor-1 modules, residing in slots 5 and 6
Eight power supply bays at the front of the chassis; the power supplies are redundant by default and can be configured to be combined if desired
Three hot-swappable rear-panel fan trays with redundant individual fans

MDS 9710 and MDS 9706 have six crossbar-switching modules located at the rear of the chassis. Fabric modules are located behind the fan trays and are numbered 1-6 from left to right when facing the rear of the chassis.
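The supervisor, fabric, fan, and power status described here can be verified from the CLI; the following exec commands are a quick sketch of that check:

switch# show module
switch# show environment fan
switch# show environment power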

When the system is running, remove only one fan tray at a time to access the appropriate fabric modules: Fabric Modules 1-2 are behind fan tray 1, Fabric Modules 3-4 are behind fan tray 2, and Fabric Modules 5-6 are behind fan tray 3. Fabric modules may be installed in any slot; the best practice is one behind each fan tray. MDS 9710 and MDS 9706 have front-to-back airflow. Air enters through perforations in line cards, supervisor modules, and power supplies, and air exits through perforations in fan trays. Blank panels must be installed in empty line card or PSU slots to ensure proper airflow. Figure 7-13 portrays the Cisco MDS 9710 Director switch details.

Figure 7-13 Cisco MDS 9710 Director Switch

The Cisco MDS 9706 Director switch has a 6-slot chassis and it supports the following:

Two Supervisor-1 modules, residing in slots 3 and 4
Four power supply bays at the front of the chassis; the power supplies are redundant by default and can be configured to be combined if desired
Three hot-swappable rear-panel fan trays with redundant individual fans

Figure 7-14 portrays the Cisco MDS 9706 Director switch chassis details.

Figure 7-14 Cisco MDS 9706 Director Switch

Cisco MDS 9500
Cisco MDS 9500 is available in 6- and 13-slot configurations. It supports 1-, 2-, 4-, 8-, and 10-Gbps Fibre Channel, 10-Gbps FCoE, 1-Gbps Fibre Channel over IP (FCIP), and 1-Gbps Small Computer System Interface over IP (iSCSI) port speeds. MDS 9500 supports up to 528 1/2/4/8-Gbps autosensing Fibre Channel ports in a single chassis and up to 1584 Fibre Channel ports per rack.

1/2/4/8-Gbps and 10-Gbps FICON: The Cisco MDS 9513 supports advanced FICON services including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N_Port ID virtualization (NPIV) for mainframe Linux partitions. Inclusion of Cisco Control Unit Port (CUP) support enables in-band management of Cisco MDS 9000 family switches from mainframe management applications. Figure 7-15 portrays the Cisco MDS 9500 Director switch details.

Figure 7-15 Cisco MDS 9500 Director Switch

Note
The Cisco MDS 9509 has reached End of Life, End of Sale status. It continues to be supported at the time of writing this chapter. Please refer to the Cisco website, www.cisco.com, for more information.

All Cisco MDS 9500 Series Director switches are fully redundant with no single point of failure. The system has dual supervisors, redundant crossbar switch fabrics, clock modules, and power supplies, and employs majority signal voting wherever 1:1 redundancy would not be appropriate. All fans are variable speed, there are multiple redundant fans with dual fan controllers, and failure of one or more individual fans causes all other fans to speed up to compensate. Although there is a single fan tray, in the unlikely event of multiple simultaneous fan failures, the fan tray still provides sufficient cooling to keep the switch operational.

The Cisco MDS 9513 Director is a 13-slot Fibre Channel switch. It uses a midplane; modules exist on both sides of the plane. The front panel consists of 13 horizontal slots, where slots 1 to 6 and slots 9 to 13 are reserved for switching and services modules only, and slots 7 and 8 are for Supervisor-2A modules only. The rear of the chassis supports two vertical, redundant, external crossbar modules, two clock modules, redundant power supplies, and a variable-speed fan tray with two individual fans located above the crossbar modules.

The Cisco MDS 9513 Director supports the following:

Two Supervisor-2A modules, residing in slots 7 and 8
A variable-speed fan tray with 15 individual fans located on the front-left panel of the chassis; a side-mounted fan tray provides active cooling
One hot-swappable fan module for the crossbar modules located at the rear of the chassis
Two crossbar modules and two hot-swappable clock modules located at the rear of the chassis
Two power supplies located at the rear of the chassis; the power supplies are redundant by default and can be configured to be combined if desired

The Cisco MDS 9506 Director has a 6-slot chassis, and it supports the following:

Up to two Supervisor-2A modules that provide a switching fabric, with a console port, COM1 port, and a MGMT 10/100 Ethernet port on each module. Slots 5 and 6 are reserved for the supervisor modules. The first four slots are for optional modules that can include up to four switching modules or three IPS modules. It has one hot-swappable fan module with redundant fans. Two power supplies are located in the back of the chassis; the power supplies are redundant by default and can be configured to be combined if desired. Two power entry modules (PEM) are in the front of the chassis for easy access to power supply connectors and switches.

Table 7-5 compares the major details of Cisco 9500 and 9700 Series Director switch models.


Table 7-5 Details of Cisco MDS 9500 and 9700 Series Director Switch Models

Note
The Cisco Fabric Module 2, Gen-3 1/2/4/8-Gbps modules, and 4-port 10-Gbps FC Module have reached End of Life, End of Sale status. These modules are still supported at the time of writing this chapter, but we will not be covering them in this book. Refer to the Cisco website, www.cisco.com, for more information.

Cisco MDS 9000 Series Multilayer Components
Several modules are available for the modular switches in the Cisco MDS 9000 Series product range. This topic describes the capacities of the Cisco MDS 9000 Series switching and supervisor modules.

Cisco MDS 9700-MDS 9500 Supervisor and Crossbar Switching Modules
The Cisco MDS 9500 Series Supervisor-2A Module incorporates an integrated crossbar switching fabric. Designed to integrate multiprotocol switching and routing, intelligent SAN services, and storage applications onto highly scalable SAN switching platforms, the Cisco MDS 9500 Series Supervisor-2A Module enables intelligent, resilient, scalable, and secure high-performance multilayer SAN switching solutions. It supports up to 528 Fibre Channel ports in a single chassis, 1584 Fibre Channel ports in a single rack, and 1.4 terabits per second (Tbps) of internal system bandwidth when combined with a Cisco MDS 9506 or MDS 9509 Multilayer Director chassis, or up to 2.2 Tbps of internal system bandwidth when installed in the Cisco MDS 9513 Multilayer Director chassis.

The Cisco MDS 9500 Series directors include two supervisor modules: a primary module and a redundant module. The modules occupy two slots in the Cisco MDS 9500 Series chassis, with the remaining slots available for switching modules. The active/standby configuration of the supervisor modules allows support of nondisruptive software upgrades. The supervisor module also supports stateful process restarts, allowing recovery from most process errors without a reset of the supervisor

module and with no disruption to traffic.

Cisco MDS 9000 family supervisor modules provide two essential switch functions. They house the control-plane processors that are used to manage the switch and keep the switch operational. They also house the crossbar switch fabrics and central arbiters used to provide data-plane connectivity between all the line cards in the chassis. Although each Supervisor-2A module contains a crossbar switch fabric, this is used only on Cisco MDS 9509 and MDS 9506 chassis. On the Cisco MDS 9513 chassis, the crossbar switch fabrics are on fabric cards (refer to Figure 7-15) inserted into the rear of the chassis, and the crossbar switch fabrics housed on the supervisors are disabled. On the Cisco MDS 9513 Director, Cisco MDS NX-OS Release 5.2 supports the Cisco MDS 9513 Fabric 3 module, DS-13SLT-FAB3, which is a generation-4 type module.

Besides providing crossbar switch fabric capacity, the supervisor modules do not handle any frame forwarding or frame processing. All frame forwarding and processing is handled within the distributed forwarding ASICs on the line cards themselves. The Cisco MDS 9000 arbitrated local switching feature provides line-rate switching across all ports on the same module without degrading performance or increasing latency for traffic to and from other modules in the chassis. This capability is achieved through the Cisco MDS 9500 Series Multilayer Directors crossbar architecture, with a central arbiter arbitrating fairly between local traffic and traffic to and from other modules.

Management connectivity is provided through an out-of-band Ethernet port (interface mgmt0), an RJ-45 serial console port, and optionally also a DB9 auxiliary console port suitable for modem connectivity.

Figure 7-16 portrays the Cisco MDS 9500 Series Supervisor-2A module and Cisco MDS 9700 Supervisor-1 module. Table 7-6 lists the details of switching capabilities per fabric for Cisco MDS 9710 and MDS 9706 Director switch models.

Figure 7-16 Cisco MDS 9000 Supervisor Models

Table 7-6 Details of Switching Capabilities per Fabric for Cisco MDS 9710 and MDS 9706

Each fabric module provides 220 Gbps data bandwidth per slot, equivalent to 256 Gbps of Fibre Channel bandwidth. Three fabric modules provide 660 Gbps data bandwidth and 768 Gbps of Fibre Channel bandwidth per slot. Line rate on the 48-port 16-Gbps Fibre Channel module needs only three fabric modules, which is why three is the minimum number of fabric modules that are supported on MDS 9700 Series switches. Figure 7-17 portrays the crossbar switching modules for MDS 9700 and MDS 9513 platforms. Table 7-7 lists the details of MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A.

Figure 7-17 Cisco MDS 9700 Crossbar Switching Fabric-1 and MDS 9513 Crossbar Switching Fabric-3

Table 7-7 Details of MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A

Both MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A modules have two USB 2.0 ports provided on the front panel for simplified configuration-file uploading and downloading using common USB memory stick products.

Cisco MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module
The MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module provides high port density and is ideal for connection of high-performance virtualized servers. With Arbitrated Local Switching, the 48-port 8-Gbps Advanced module delivers full-duplex aggregate performance of 768 Gbps across locally switched ports on the module. For traffic switched across the backplane, the module delivers full-duplex aggregate backplane switching performance of 512 Gbps. For large-scale storage networking environments, this module supports 1.5:1 oversubscription at 8-Gbps Fibre Channel (FC) rate across all ports. Figure 7-18 portrays the Cisco MDS 9000 48-port 8-Gbps Advanced Fibre Channel Switching Module.

Figure 7-18 Cisco MDS 9000 48-port 8-Gbps Advanced Fibre Channel Switching Module

Cisco MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module
The MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module delivers line-rate performance across all ports and is ideal for high-end storage subsystems and for Inter-Switch Link (ISL) connectivity. The 32-port 8-Gbps Advanced switching module delivers full-duplex aggregate performance of 512 Gbps, making this module well suited for high-performance 8-Gbps storage subsystems. The modules can support 8-Gbps or 10-Gbps links, and/or a combined (8- and 10-Gbps) Inter-Switch Link (ISL), and the ports are grouped in different orders depending on the 8-Gbps and 10-Gbps modes. The 10-Gbps ports can be used for long-haul DWDM connections. Figure 7-19 portrays the Cisco MDS 9000 32-port 8-Gbps Advanced Fibre Channel Switching Module.

Figure 7-19 Cisco MDS 9000 32-port 8-Gbps Advanced Fibre Channel Switching Module

Cisco MDS 9000 18/4-Port Multiservice Module (MSM)
The Cisco MDS 9000 18/4-Port Multiservice Module (Figure 7-20) offers 18 1-, 2-, and 4-Gbps Fibre Channel ports and four 1-Gigabit Ethernet IP storage services ports in a single form factor. The Cisco MDS 9000 18/4-Port Multiservice Module (MSM) is optimized for deployment of high-performance SAN extension solutions, distributed intelligent fabric services, and cost-effective IP storage and mainframe connectivity in enterprise storage networks. It provides multiprotocol capabilities, integrating Fibre Channel, Fibre Channel over IP (FCIP), Small Computer System Interface over IP (iSCSI), IBM Fibre Connection (FICON), FICON Control Unit Port (CUP) management and switch cascading, Cisco MDS 9000 Extended Remote Copy (XRC) Acceleration, Cisco Storage Media Encryption (SME), Cisco Storage Services Enabler (SSE), and Cisco MDS 9000 I/O Accelerator (IOA).

Figure 7-20 Cisco MDS 9000 18/4-port Multiservice Module

Cisco MDS 9000 16-Port Gigabit Ethernet Storage Services Node (SSN) Module
The Cisco MDS 9000 16-Port Storage Services Node provides a high-performance, flexible, unified platform for deploying enterprise-class disaster recovery, business continuance, and intelligent fabric applications. The Cisco MDS 9000 16-Port Storage Services Node hosts four independent service engines, which can each be individually and incrementally enabled to scale as business requirements change, or be configured to consolidate up to four fabric application instances to dramatically decrease hardware footprint and free valuable slots in the MDS 9500 Director chassis. The 16-Port Storage Services Node offers FCIP for remote SAN Extension; Cisco IO Accelerator (IOA) to optimize the utilization of metropolitan- and wide-area network (MAN/WAN) resources for backup and replication using hardware-based compression, FC write acceleration, and FC tape read-and-write acceleration; and Cisco Storage Media Encryption (SME) to secure mission-critical data stored on heterogeneous disk, tape, and VTL drives. Figure 7-21 portrays the Cisco MDS 9000 16-port Gigabit Ethernet Storage Services Node (SSN) Module. Table 7-8 lists the details of the Multiservice Module (MSM) and Storage Services Node (SSN).

Figure 7-21 Cisco MDS 9000 16-port Gigabit Ethernet Storage Services Node Module

Table 7-8 Details of Multiservice Module (MSM) and Storage Services Node (SSN)

Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module
The Cisco MDS 9000 Series supports 10-Gbps Fibre Channel switching with the Cisco MDS 9000 4-Port 10-Gbps Fibre Channel switching module. This module supports hot-swappable X2 optical SC interfaces. Modules can be configured with either short-wavelength or long-wavelength X2 transceivers for connectivity up to 300 meters and 10 kilometers, respectively. Figure 7-22 portrays the MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module.

Figure 7-22 Cisco MDS 9000 4-port 10-Gbps Fibre Channel Switching Module

Cisco MDS 9000 8-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
Cisco MDS 9500 Series Multilayer Directors support the 10-Gbps 8-port FCoE switching module with full line-rate FCoE connectivity to extend the benefits of FCoE beyond the access layer into the core of the data center network. The Cisco MDS 9000 10-Gbps FCoE module (see Figure 7-23) provides the industry's first multihop-capable FCoE module. Not only does the Cisco MDS 9000 10-Gbps 8-Port FCoE Module take advantage of migration of FCoE into the core layer, but it also extends enterprise-class Fibre Channel services to Cisco Nexus 7000 and 5000 Series Switches and FCoE initiators. This capability

Allows FCoE initiators to access remote Fibre Channel resources connected through Fibre Channel over IP (FCIP) for enterprise backup solutions
Supports virtual SANs (VSANs) for resource separation and consolidation
Supports inter-VSAN routing (IVR) to use resources that may be segmented

Figure 7-23 Cisco MDS 9000 8-port 10-Gbps Fibre Channel over Ethernet (FCoE) Module

FCoE allows an evolutionary approach to I/O consolidation by preserving all Fibre Channel constructs, maintaining the latency, security, and traffic management attributes of Fibre Channel while preserving investments in Fibre Channel tools, training, and SANs. FCoE enables the preservation of Fibre Channel as the storage protocol in the data center while giving customers a viable solution for I/O consolidation.

Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
The Cisco MDS 9700 Series Fibre Channel Switching Module is hot swappable, is compatible with 2/4/8-Gbps, 4/8/16-Gbps, and 10-Gbps FC interfaces, and supports hot-swappable Enhanced Small Form-Factor Pluggable (SFP+) transceivers. Individual ports can be configured with Cisco 16-Gbps, 8-Gbps, or 10-Gbps SFP+ transceivers. The MDS 9700 module also supports 10 Gigabit Ethernet clocked optics carrying 10-Gbps Fibre Channel traffic. Each port supports 500 buffer credits without requiring additional licensing. With the Cisco Enterprise Package, up to 4095 buffer credits can be allocated to an individual port, enabling full link bandwidth over long distances with no degradation in link utilization. The 16-Gbps Fibre Channel switching module continues to provide previously available features such as predictable performance and high availability, integrated VSANs and IVR, resilient high-performance ISLs, fault detection and isolation of error packets, advanced traffic management capabilities, comprehensive security frameworks, and sophisticated diagnostics. Figure 7-24 portrays the Cisco MDS 9700 48-port 16-Gbps Fibre Channel Switching Module.

Figure 7-24 Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module

Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
The Cisco MDS 9700 48-Port 10-Gbps FCoE Module provides the scalability of 384 10-Gbps full line-rate autosensing ports in a single chassis, thus providing a way to deploy a dedicated FCoE SAN without requiring any convergence. This module also supports connectivity to FCoE initiators and targets that only send FCoE traffic. Figure 7-25 portrays the Cisco MDS 9700 48-Port 10-Gbps FCoE Module.

Figure 7-25 Cisco MDS 9700 48-Port 10-Gbps FCoE Module

Cisco MDS 9000 Family Pluggable Transceivers
The Cisco Small Form-Factor Pluggable (SFP), Enhanced Small Form-Factor Pluggable (SFP+), and X2 devices are hot-swappable transceivers that plug in to Cisco MDS 9000 family director switching modules and fabric switch ports. They allow you to choose different cabling types and distances on a port-by-port basis. The most up-to-date information on pluggable transceivers can be found at the following link: www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-seriesmultilayer-switches/product_data_sheet09186a00801bc698.html. Cisco MDS family storage modules offer a wide range of options for various SAN architectures. Table 7-9 lists all the hardware modules available at the time of writing this chapter, along with the chassis compatibility associated with them.


Table 7-9 Hardware Modules and the Chassis Compatibility Associated with Them

Cisco MDS Multiservice and Multilayer Fabric Switches
Cisco MDS 9200 and MDS 9100 Series switches are fabric switches designed for deployment in small to medium-sized enterprise SANs. The Cisco MDS 9200 Series switches deliver state-of-the-art multiprotocol and distributed multiservice convergence, offering high-performance SAN extension and disaster-recovery solutions, intelligent fabric services, and cost-effective multiprotocol connectivity for both open systems and mainframe environments. With a compact form factor and advanced capabilities normally available only on director-class switches, the Cisco MDS 9200 Series switches provide an ideal solution for departmental and remote branch-office SANs.

The Cisco MDS 9100 Series switches are cost-effective, easy-to-install, and highly configurable Fibre Channel switches that are ideal for small to medium-sized businesses. With high port densities, the Cisco MDS 9100 Series switches easily scale from an entry-level departmental switch to a top-of-the-rack switch, providing edge-switch connectivity to enterprise core SANs. Cisco MDS NX-OS provides services optimized for virtual machines and blade servers that let IT managers dynamically respond to changing business needs in virtual environments. The Cisco MDS 9200 and 9100 Series

switches are FIPS compliant, as mandated by the U.S. federal government, and support IPv6.

Cisco MDS 9148
The MDS 9148 is a 1 RU Fibre Channel switch with 48 line-rate 8-Gbps ports. The Cisco MDS 9148 can be used as the foundation for small, standalone SANs, as a Top-of-Rack switch, or as an edge switch in larger, core-edge SAN infrastructures. Customers have the option of purchasing preconfigured models of 16, 32, or 48 enabled ports; the On-Demand Port Activation license allows the addition of eight 8-Gbps ports per chassis, upgrading the 16- and 32-port models onsite all the way to 48 ports by adding these licenses. The Cisco MDS 9148 Multilayer Fabric Switch is designed to support quick configuration with zero-touch plug-and-play features and task wizards that allow it to be deployed quickly and easily in networks of any size. Figure 7-26 portrays the Cisco MDS 9148 switch.

The following additional capabilities are included:

Each port group consisting of four ports has a pool of 128 buffer credits, with a default of 32 buffer credits per port. When extended distances are required, up to 125 buffer credits can be allocated to a single port within the port group. This extensibility is available without additional licensing.

Port channels enable users to aggregate up to 16 physical Inter-Switch Links (ISLs) into a single logical bundle, providing load-balancing bandwidth use across all links. The bundle can consist of any port from the switch, helping ensure that the bundle remains active even in the event of a port group failure.

Figure 7-26 Cisco MDS 9148

Cisco MDS 9148S
The Cisco MDS 9148S 16G Multilayer Fabric Switch (see Figure 7-27) is a compact one rack-unit (1RU) switch that scales from 12 to 48 line-rate 16-Gbps Fibre Channel ports, also supporting 1/2/4-Gbps connectivity. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port Cisco MDS 9148S On-Demand Port Activation license to support configurations of 24, 36, or 48 ports.
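With an On-Demand Port Activation license installed, individual ports are brought into service from interface configuration mode, as in this sketch for a hypothetical port fc1/25:

switch# show license usage
switch# configure terminal
switch(config)# interface fc1/25
! Assign one of the licensed port slots to this interface
switch(config-if)# port-license acquire
switch(config-if)# no shutdown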

Figure 7-27 Cisco MDS 9148S

The Cisco MDS 9148S offers In-Service Software Upgrades (ISSU). This means that Cisco NX-OS Software can be upgraded while the Fibre Channel ports carry traffic. The Cisco MDS 9148S also supports Power On Auto Provisioning (POAP) to automate software image upgrades and configuration file installation on newly deployed switches. The Cisco MDS 9148S includes dual redundant hot-swappable power supplies and fan trays, port channels for Inter-Switch Link (ISL) resiliency, and F-port channeling for resiliency on uplinks from a Cisco MDS 9148S operating in NPV mode. Table 7-10 lists the chassis comparison between MDS 9148 and MDS 9148S.

Table 7-10 Chassis Comparison Between MDS 9148 and MDS 9148S
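An ISSU is driven by the install all command, which validates compatibility before upgrading, as in the following sketch; the image filenames are placeholders:

! Preview the impact of the upgrade before committing to it
switch# show install all impact kickstart bootflash:kickstart-image.bin system bootflash:system-image.bin
! Perform the nondisruptive upgrade
switch# install all kickstart bootflash:kickstart-image.bin system bootflash:system-image.bin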

Cisco MDS 9222i
The Cisco MDS 9222i provides multilayer and multiprotocol functions in a compact three-rack-unit (3RU) form factor. It provides 18 fixed 4-Gbps Fibre Channel ports and 4 fixed Gigabit Ethernet ports for Fibre Channel over IP (FCIP) and Small Computer System Interface over IP (iSCSI) storage services. The Cisco MDS 9222i is designed to address the needs of medium-sized and large enterprises with a wide range of SAN capabilities. It can be used as a cost-effective SAN extension over IP switch for midrange, midsize customers in support of IT simplification, replication, and disaster recovery and business continuity solutions. It can also provide remote site device aggregation and SAN extension connectivity to large data center directors.

The Cisco MDS 9222i supports the Cisco MDS 9000 4/44-Port 8-Gbps Host-Optimized Fibre Channel Switching Module in the expansion slot, with four of the Fibre Channel ports capable of running at 8 Gbps. Thus, the Cisco MDS 9222i scales up to 66 ports for both open systems and IBM Fiber Connection (FICON) mainframe environments. The Cisco MDS 9222i also supports a Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module in the expansion slot. Figure 7-28 portrays the Cisco MDS 9222i switch.

Figure 7-28 Cisco MDS 9222i

The following additional capabilities are included:

High-performance ISLs: Cisco MDS 9222i supports up to 16 Fibre Channel interswitch links (ISLs) in a single port channel. Links can span any port on any module in a chassis for added scalability and resilience. Up to 4095 buffer-to-buffer credits can be assigned to a single Fibre Channel port to extend storage networks over long distances.

Intelligent application services engines: The Cisco MDS 9222i includes as standard a single application services engine in the fixed slot that enables the included SAN Extension over IP software solution package to run on the four fixed 1 Gigabit Ethernet storage services ports. The Cisco MDS 9222i can add a second services engine when a Cisco MDS 9000 18/4-Port Multiservice Module (MSM) is installed in the expansion slot. This additional engine can run a second advanced software application solution package such as Cisco MDS 9000 I/O Accelerator (IOA) to further improve the SAN Extension over IP performance and throughput

running on the standard services engine. If more intelligent services applications need to be deployed, the Cisco MDS 9000 16-Port Storage Services Node (SSN) can be installed in the expansion slot to provide four additional services engines for simultaneously running application software solution packages such as Cisco Storage Media Encryption (SME). Overall, the Cisco MDS 9222i switch platform scales from a single application service solution in the standard offering to a maximum of five heterogeneous or homogeneous application services, depending on the choice of optional service module that is installed in the expansion slot.

Advanced FICON services: The Cisco MDS 9222i supports System z FICON environments, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N-port ID virtualization (NPIV) for mainframe Linux partitions. IBM Control Unit Port (CUP) support enables in-band management of Cisco MDS 9200 Series switches from the mainframe management console. FICON tape acceleration reduces latency effects for FICON channel extension over FCIP for FICON tape read and write operations to mainframe physical or virtual tape. This feature is sometimes referred to as tape pipelining. The Cisco MDS 9000 XRC Acceleration feature enables acceleration of dynamic updates for IBM z/OS Global Mirror, formerly known as XRC.

In Service Software Upgrade (ISSU) for Fibre Channel interfaces: Cisco MDS 9222i provides high serviceability by allowing Cisco MDS 9000 NX-OS Software to be upgraded while the Fibre Channel ports are carrying traffic. Also, IPv6 support is provided for FCIP, iSCSI, and management traffic routed in-band and out-of-band.

Cisco MDS 9250i
The Cisco MDS 9250i offers up to forty 16-Gbps Fibre Channel ports, two 1/10-Gigabit Ethernet IP storage services ports, and eight 10-Gigabit Ethernet FCoE ports in a fixed two-rack-unit (2RU) form factor. The Cisco MDS 9250i connects to existing native Fibre Channel networks. Also, using the eight 10-Gigabit Ethernet FCoE ports, the Cisco MDS 9250i platform attaches to directly connected FCoE and Fibre Channel storage devices and supports multitier unified network fabric connectivity directly over FCoE. The Cisco SAN Extension over IP application package license is enabled as standard on the two fixed 1/10 Gigabit Ethernet IP storage services ports, enabling features such as Fibre Channel over IP (FCIP) and compression on the switch without the need for additional licenses. Table 7-11 lists the technical details of the Cisco MDS 9200 and 9100 Series models. Figure 7-29 portrays the Cisco MDS 9250i two-rack-unit switch.

Figure 7-29 Cisco MDS 9250i


Table 7-11 Summary of the Cisco MDS 9200 and 9100 Series Models

The following additional capabilities are included:

SAN consolidation with integrated multiprotocol support: The Cisco MDS 9250i is available in a base configuration of 20 ports (upgradeable to 40 through the On-Demand Port Activation license) of 16-Gbps Fibre Channel, 2 ports of 10-Gigabit Ethernet for FCIP and iSCSI storage services, and 8 ports of 10-Gigabit Ethernet for FCoE connectivity.

High-performance ISLs: Cisco MDS 9250i supports up to 16 Fibre Channel ISLs in a single port channel. Links can span any port on any module in a chassis. Up to 256 buffer-to-buffer credits can be assigned to a single Fibre Channel port.

Intelligent application services engine: The Cisco MDS 9250i includes as standard a single application services engine that enables the included Cisco SAN Extension over IP software solution package to run on the two fixed 1/10 Gigabit Ethernet storage services ports.

Cisco Data Mobility Manager (DMM) as a distributed fabric service: Cisco DMM is a fabric-based data migration solution that transfers block data nondisruptively across heterogeneous storage volumes and across distances, whether the host is online or offline.

Advanced FICON services: The Cisco MDS 9250i supports FICON environments, including cascaded FICON fabrics and VSAN-enabled intermix of mainframe and open systems environments. It also supports IBM CUP, FICON tape acceleration, and IBM Extended Remote Copy (XRC) features like the MDS 9222i.

FIPS compliance: The Cisco MDS 9250i is FIPS 140-2 compliant and is IPv6 capable. Cisco MDS 9250i supports In Service Software Upgrade (ISSU) for Fibre Channel interfaces.

Platform for intelligent fabric applications: The Cisco MDS 9250i provides an open platform that delivers the intelligence and advanced features required for multilayer intelligent SANs.

Cisco MDS Fibre Channel Blade Switches

Cisco Fibre Channel Blade switches are built using the same hardware and software platform as Cisco MDS 9000 family Fibre Channel switches and hence are consistent in features and performance. Architecturally, blade switches share the same design as the Cisco MDS 9000 family fabric switches. Cisco offers Fibre Channel blade switch solutions for HP c-Class BladeSystem and IBM BladeCenter. The Cisco MDS Fibre Channel Blade Switch functions are equivalent to the Cisco MDS 9124 switch, in the form factor to fit IBM BladeCenter (BladeCenter, BladeCenter T, and BladeCenter H) and HP c-Class BladeSystem (see Figure 7-30). It runs Cisco MDS 9000 NX-OS Software, includes advanced storage networking features and functions, and is compatible with Cisco MDS 9500 Series Multilayer Directors and Cisco MDS 9200 Series Multilayer Fabric Switches, providing transparent, end-to-end service delivery in core-edge deployments. The Cisco MDS Fibre Channel Blade Switch is capable of speeds of 4, 2, and 1 Gbps. With its flexibility to expand ports in increments, the Cisco MDS Fibre Channel Blade Switch offers the densities required from entry level to advanced. Figure 7-30 shows the Cisco MDS Fibre Channel Blade Switch for HP and IBM.

Figure 7-30 Cisco MDS Fibre Channel Blade Switches

The Cisco MDS Fibre Channel Blade Switch for HP c-Class BladeSystem has two models: the base 12-port model and the 24-port model. In the 12-port model, 8 ports are for internal connections to the server, and 4 ports are external or storage area network (SAN) facing. In the 24-port model, 16 ports are internal for server connections, and 8 ports are external or SAN facing. There is a 12-port license upgrade to upgrade the 12-port model from 12 ports to 24 ports. The management of the module is integrated with the HP OpenView Management Suite. There is also the MDS 8-Gbps Fibre Channel Blade Switch for HP c-Class BladeSystem.

The Cisco MDS Fibre Channel Blade Switch for IBM BladeCenter has two models: the 10-port switch and the 20-port switch. In the 10-port model, 7 ports are for internal connections to the server and 3 ports are external. In the 20-port model, 14 ports are for internal connections to the server and 6 ports are external. There is a 10-port upgrade license to upgrade from 10 ports to 20 ports. Supported IBM BladeCenter chassis are IBM BladeCenter E, BladeCenter H, BladeCenter HT, BladeCenter T, IBM BladeCenter S (10-port model only), and MSIM (20-port model only). The management of the module is integrated with the IBM Management Suite.

The switch modules have the following features:

Six external autosensing Fibre Channel ports that operate at 4, 2, or 1 Gbps

Internal autosensing Fibre Channel ports that operate at 4 or 2 Gbps

Two internal full-duplex 100 Mbps Ethernet interfaces

Power-on diagnostics and status reporting

Internal ports configured as F_ports at 2 Gbps or 4 Gbps

External ports that can be configured as F_ports (fabric ports), FL_ports (fabric loop ports), or E_ports (expansion ports)

Figure 7-31 shows the Cisco MDS Fibre Channel Blade Switch for HP.

Figure 7-31 Cisco MDS 8-Gbps Fibre Channel Blade Switch

Cisco Prime Data Center Network Manager

Cisco Prime Data Center Network Manager (DCNM) is a management system for the Cisco Data Center Unified Fabric, which is composed of Nexus and MDS 9000 switches. It enables you to provision, monitor, and troubleshoot the data center network infrastructure. It provides visibility and control of the unified data center so that you can optimize for the quality of service (QoS) required to meet service-level agreements. Cisco Prime DCNM 7.0 (at the time of writing this chapter) includes the infrastructure necessary to operate and automate a Cisco Dynamic Fabric Automation (DFA) network fabric. DCNM software provides ready-to-use, standards-based, extensible management capabilities for Cisco DFA. Cisco Prime DCNM 7.0 policy-based deployment helps reduce labor and operating expenses (OpEx). Cisco Prime DCNM 7.0 integrates with external hypervisor solutions and third-party applications using a Representational State Transfer (REST) API. Figure 7-32 illustrates Cisco Prime DCNM as a central point of integration for NX-OS platforms, with visibility across networking, computing, and storage resources.
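Because DCNM exposes a REST API, an external tool can pull fabric data over plain HTTP. The short Python sketch below shows the general pattern only; the host name, resource path, and returned fields are hypothetical placeholders rather than documented DCNM resources, so the actual endpoint names must come from the DCNM API documentation.

import requests

DCNM_HOST = "https://dcnm.example.com"     # hypothetical DCNM server
ENDPOINT = "/rest/inventory/switches"      # hypothetical resource path

def fetch_switch_inventory(user, password):
    # Authenticate with HTTP basic auth and request JSON from the REST API.
    response = requests.get(DCNM_HOST + ENDPOINT,
                            auth=(user, password),
                            verify=False,   # lab use only: skips TLS checks
                            timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for switch in fetch_switch_inventory("admin", "password"):
        print(switch)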

Figure 7-32 Cisco Prime DCNM as Central Point of Integration

Cisco Prime DCNM provisions and optimizes the overall uptime and reliability of the data center fabric. It provides a robust framework and comprehensive feature set that meets the routing, switching, and storage administration needs of data centers. Cisco DCNM provides a high level of visibility and control through a single web-based management console for Cisco Nexus, Cisco MDS, and Cisco Unified Computing System products. This advanced management product

Provides self-service provisioning of intelligent and scalable fabrics.

Proactively monitors the SAN and LAN, and detects performance degradation.

Eases diagnosis and troubleshooting of data center outages.

Simplifies operational management of virtualized data centers.

Opens northbound application programming interfaces (API) for Cisco and third-party management and orchestration platforms.

Centralizes fabric management to facilitate resource moves, adds, and changes.

Tracks port utilization by tier, and through the trending capacity manager predicts when an individual tier will be consumed.

The VMware vCenter plug-in brings the Cisco Prime DCNM computing dashboard into VMware vCenter for dependency mapping, inventory, performance, and events. VMpath analysis provides the capability to view performance for every switch hop all the way to the individual VMware ESX server and virtual machine.

The Cisco Fabric Manager product is retitled with the name Cisco DCNM for SAN as of Cisco DCNM Release 5.2, and the Cisco Prime DCNM product is retitled with the name Cisco DCNM for LAN as of Cisco DCNM Release 5.2. Cisco DCNM also supports the installation of the Cisco DCNM for SAN and Cisco DCNM for LAN components with a single installer. This management solution can be deployed in Windows and Linux, or as a virtual service blade on the Nexus 1010 or 1100. More Cisco DCNM servers can be added to a cluster. Cisco DCNM authenticates administration through the following protocols: TACACS+, RADIUS, and LDAP.

Cisco DCNM SAN Fundamentals Overview

Cisco Prime DCNM SAN has the following fundamental components: DCNM-SAN Server, DCNM-SAN Client, Device Manager, DCNM-SAN Web Client, Performance Manager, Authentication in DCNM-SAN Client, Cisco Traffic Analyzer, Network Monitoring, and Performance Monitoring. Figure 7-33 shows how Cisco Prime DCNM manages highly virtualized data centers.

Figure 7-33 Cisco Prime DCNM Managing Virtualized Topology

DCNM-SAN Server

DCNM-SAN Server is a platform for advanced MDS monitoring, troubleshooting, and configuration capabilities. DCNM-SAN Server provides centralized MDS management services and performance monitoring, which ensures that you can manage any of your MDS devices from a single console. Each computer configured as a Cisco DCNM-SAN Server can monitor multiple Fibre Channel SAN fabrics. Up to 16 clients (by default) can connect to a single Cisco DCNM-SAN Server concurrently. The Cisco DCNM-SAN Clients can also connect directly to an MDS switch in fabrics that are not monitored by a Cisco DCNM-SAN Server. The Cisco DCNM-SAN software, including the server components, requires about 100 GB of hard disk space. Cisco DCNM-SAN Server runs on VMware ESXi (Cisco Prime DCNM 7.0), or on Windows 2008 R2 SP1, Windows 2008 Standard SP2, or Red Hat Enterprise Linux 5.4, 5.6, 5.7, 6.3, and 6.4 (Cisco Prime DCNM 6.3).

Cisco DCNM-SAN Server has the following features:

Multiple fabric management: DCNM-SAN Server monitors multiple physical fabrics under the same user interface. This facilitates managing redundant fabrics. A licensed DCNM-SAN Server maintains up-to-date discovery information on all configured fabrics so device status and interconnections are immediately available when you open the DCNM-SAN Client.

Continuous health monitoring: MDS health is monitored continuously. SNMP operations are used to collect fabric information, so any events that occurred since the last time you opened the DCNM-SAN Client are captured.

Roaming user profiles: The licensed DCNM-SAN Server uses the roaming user profile feature

to store your preferences and topology map layouts on the server, so that your user interface will be consistent regardless of what computer you use to manage your storage networks.

Licensing Requirements for Cisco DCNM-SAN Server

When you install DCNM-SAN, the basic unlicensed version of Cisco DCNM-SAN Server is installed with it. To get the licensed features, such as Performance Manager, remote client support, and continuously monitored fabrics, you need to buy and install the Cisco DCNM-SAN Server package. However, trial versions of these licensed features are available. To enable the trial version of a feature, you run the feature as if you had purchased the license. You see a dialog box explaining that this is a demo version of the feature and that it is enabled for a limited time.

Device Manager

Device Manager provides a graphical representation of a Cisco MDS 9000 family switch chassis, Cisco Nexus 5000 Series switch chassis, or Cisco Nexus 7000 Series switch chassis (FCoE only) along with the installed switching modules, the supervisor modules, the power supplies, the fan assemblies, and the status of each port within each module. The tables in the DCNM-SAN information pane basically correspond to the dialog boxes that appear in Device Manager. However, although DCNM-SAN tables show values for one or more switches, a Device Manager dialog box shows values for a single switch. Device Manager also provides more detailed information for verifying or troubleshooting device-specific configuration than DCNM-SAN.

DCNM-SAN Web Client

With DCNM-SAN Web Client you can monitor Cisco MDS switch events, performance, and inventory from a remote location using a web browser.

Performance Manager summary reports: The Performance Manager summary report provides a high-level view of the network performance. These reports list the average and peak throughput and provide hot links to additional performance graphs and tables with additional statistics. Both tabular and graphical reports are available for all interconnections monitored by Performance Manager.

Performance Manager drill-down reports: Performance Manager can analyze daily, weekly, monthly, and yearly trends. You can also view the results for specific time intervals using the interactive zooming functionality. These reports are available only if you create a collection using Performance Manager and start the collector.

Zero maintenance databases for statistics storage: No maintenance is required to maintain Performance Manager's round-robin database, because its size does not increase over time. A full two days of raw samples are saved for maximum resolution. At prescribed intervals the oldest samples are averaged (rolled up) and saved. Gradually the resolution is reduced as groups of the oldest samples are rolled up together.

Performance Manager

The primary purpose of DCNM-SAN is to manage the network. A key management capability is network performance monitoring. Performance Manager gathers network device statistics historically
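The round-robin scheme just described is easy to picture in code. The following Python sketch is an illustrative toy, not DCNM source; the group size and number of raw samples kept are simplifying assumptions. It shows how a database can keep recent samples at full resolution while averaging older ones so total storage never grows:

raw = list(range(48))   # imagine 48 periodic throughput samples, oldest first

def roll_up(samples, keep_raw=24, group=2):
    # Average the oldest samples in fixed-size groups; keep the rest raw.
    old, recent = samples[:-keep_raw], samples[-keep_raw:]
    rolled = [sum(old[i:i + group]) / group
              for i in range(0, len(old), group)]
    return rolled + recent

db = roll_up(raw)
print(len(raw), "->", len(db))  # storage shrinks; recent data keeps detail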

and provides this information graphically using a web browser. Performance Manager presents recent statistics in detail and older statistics in summary. Performance Manager also integrates with external tools such as Cisco Traffic Analyzer. Performance Manager has three operational stages:

Definition: The Flow Wizard sets up flows in the switches. Flows are defined based on a host-to-storage (or storage-to-host) link.

Collection: The Web Server Performance Collection screen collects information on desired fabrics. Performance Manager gathers statistics from across the fabric based on collection configuration files. These files determine which SAN elements and SAN links Performance Manager gathers statistics for. Based on this configuration, Performance Manager communicates with the appropriate devices (switches, hosts, or storage elements) and collects the appropriate information at fixed five-minute intervals. Performance Manager can collect statistics for ISLs, hosts, storage elements, and configured flows.

Presentation: Generates web pages to present the collected data through DCNM-SAN Web Server.

Authentication in DCNM-SAN Client

Administrators launch DCNM-SAN Client and select the seed switch that is used to discover the fabric. The username and password are passed to DCNM-SAN Server and are used to authenticate to the seed switch. If this username and password are not a recognized SNMP username and password, either DCNM-SAN Client or DCNM-SAN Server opens a CLI session to the switch (SSH or Telnet) and retries the username and password pair. If the switch recognizes the username and password, in either the local switch authentication database or through a remote AAA server, the switch creates a temporary SNMP username that is used by DCNM-SAN Client and DCNM-SAN Server.

Cisco Traffic Analyzer

Cisco Traffic Analyzer provides real-time analysis of SPAN traffic or analysis of captured traffic through a web browser user interface. Traffic encapsulated by one or more Port Analyzer Adapter products can be analyzed concurrently with a single workstation running Cisco Traffic Analyzer, which is based on ntop, a public domain software enhanced by Cisco for Fibre Channel traffic analysis. Cisco Traffic Analyzer monitors round-trip response times, SCSI I/Os per second, SCSI read or traffic throughput and frame counts, SCSI session status, and management task information. Additional statistics are also available on Fibre Channel frame sizes and network management protocols.

Network Monitoring

DCNM-SAN provides extensive SAN discovery, topology mapping, and information viewing capabilities. DCNM-SAN collects information on the fabric topology through SNMP queries to the switches connected to it. DCNM-SAN re-creates a fabric topology, presents it in a customizable map, and provides inventory and configuration information in multiple viewing options such as fabric view, device view, summary view, and operation view.
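To tie together the seed-switch authentication flow described earlier in this section, the Python sketch below models the two-step fallback. The function bodies are placeholders standing in for real SNMP and CLI login attempts; they are not DCNM or NX-OS APIs.

def snmp_login(switch, user, password):
    # Placeholder: True if the pair is already a recognized SNMP credential.
    return False

def cli_login_and_create_temp_snmp_user(switch, user, password):
    # Placeholder: open SSH/Telnet to the switch; on AAA success the switch
    # creates a temporary SNMP username for the management session.
    return "temp-snmp-user"

def authenticate_to_seed_switch(switch, user, password):
    if snmp_login(switch, user, password):
        return user   # credentials already valid for SNMP
    # Fall back to a CLI session and retry the same username/password pair.
    return cli_login_and_create_temp_snmp_user(switch, user, password)

print(authenticate_to_seed_switch("mds-seed", "admin", "secret"))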

After DCNM-SAN is invoked, a SAN discovery process begins. Using information polled from a seed Cisco MDS 9000 family switch, including Name Server registrations, Fibre Channel Generic Services (FC-GS), Fabric Shortest Path First (FSPF), and SCSI-3, DCNM-SAN automatically discovers all devices and interconnects on one or more fabrics. All available switches, host bus adapters (HBAs), and storage devices are discovered. The Cisco MDS 9000 family switches use Fabric-Device Management Interface (FDMI) to retrieve the HBA model, serial number, firmware version, and host operating-system type and version without host agents. The device information discovered includes device names, software revision levels, vendor, ISLs, port channels, and VSANs. DCNM-SAN gathers this information through SNMP queries to each switch.

Performance Monitoring

DCNM-SAN and Device Manager provide multiple tools for monitoring the performance of the overall fabric, SAN elements, and SAN links. These tools provide real-time statistics as well as historical performance monitoring. Real-time performance statistics are a useful tool in dynamic troubleshooting and fault isolation within the fabric. Real-time statistics gather data on parts of the fabric in user-configurable intervals and display these results in DCNM-SAN and Device Manager. Device Manager provides an easy tool for monitoring ports on the Cisco MDS 9000 family switches. This tool gathers statistics at a configurable interval and displays the results in tables or charts. You can set the polling interval from ten seconds to one hour. For a selected port, you can monitor any of a number of statistics, including traffic in and out, errors, class 2 traffic, and FICON data, and display the results based on a number of selectable options including absolute value, value per second, and minimum or maximum value per second. These statistics show the performance of the selected port in real time and can be used for performance monitoring and troubleshooting.

Reference List

The landing page for the Cisco SAN portfolio is available at http://www.cisco.com/c/en/us/products/storage-networking/index.html

Cisco MDS 9500 Data Sheets: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9500-series-multilayer-directors/data_sheet_c78_605104.html

Cisco MDS 9700 Data Sheets: http://www.cisco.com/c/en/us/products/storage-networking/mds-9700-series-multilayer-directors/datasheet-listing.html

Cisco MDS 9000 Family Pluggable Transceivers Data Sheet: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-series-multilayer-switches/product_data_sheet09186a00801bc698.html

More information on the Cisco Prime DCNM: http://www.cisco.com/go/dcnm

More information on the Cisco I/O Acceleration: http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps6028/ps10531/data_sheet_c78538860.html

More information on the Cisco MDS 9000 Family intelligent fabric applications: http://www.cisco.com/en/US/products/ps6028/index.html

More information on the Data Mobility Manager (DMM): http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps6028/product_data_sheet0900ae

More information on the XRC: http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps6028/ps10526/data_sheet_c78538834.html

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 7-12 lists a reference of these key topics and the page numbers on which each is found.

Table 7-12 Key Topics for Chapter 7

Complete Tables and Lists from Memory

Print a copy of Appendix B, “Memory Tables” (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, “Memory Tables Answer Key,” also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

storage-area network (SAN)
Fibre Channel over Ethernet (FCoE)
Data Center Network Manager (DCNM)
Internet Small Computer System Interface (iSCSI)
Fibre Connection (FICON)
Fibre Channel over IP (FCIP)
buffer credits
crossbar arbiter
virtual output queuing (VOQ)
virtual SAN (VSAN)
zoning
N-port virtualization (NPV)
N-port identifier virtualization (NPIV)
Data Mobility Manager (DMM)
I/O Accelerator (IOA)
Dynamic Fabric Automation (DFA)
multiprotocol support
XRC Acceleration
inter-VSAN routing (IVR)
head-of-line blocking

Chapter 8. Virtualizing Storage

This chapter covers the following exam topics:

Describe Storage Virtualization
Describe LUN Storage Virtualization
Describe Redundant Array of Inexpensive Disk (RAID)
Describe Virtualizing SANs (VSAN)
Describe Where the Storage Virtualization Occurs

In computing, virtualization means to create a virtual version of a device or resource, such as a server, storage device, or network, where the framework divides the resource into one or more execution environments. Devices, applications, and human users are able to interact with the virtual resource as if it were a real single logical resource. A number of computing technologies benefit from virtualization, including the following:

Storage virtualization: Combining multiple network storage devices into what appears to be a single storage unit.

Server virtualization: Partitioning a physical server into smaller virtual servers.

Network virtualization: Using network resources through a logical segmentation of a single physical network.

Application virtualization: Decoupling the application and its data from the operating system.

This chapter guides you through storage virtualization concepts, answering basic what, why, and how questions. This chapter goes directly into the key concepts of storage virtualization and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 8-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 8-1 “Do I Know This Already?” Section-to-Question Mapping

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter.

b. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. Power and cooling consumption in the data center b. Achieving cost-effective business continuity and data protection 3.) a. Which of the following options explains RAID 1+0? a. The storage array c. INCITS (International Committee for Information Technology Standards) b. Virtualization trends d. SNIA (Storage Networking Industry Association) d. It is an exact copy (or mirror) of a set of data on two disks. IETF (Internet Engineering Task Force) c. In the SAN fabric d. you should mark that question as wrong for purposes of the self-assessment. File systems e. Storage virtualization has no standard measure defined by a reputable organization. The host b. Switches 4.) a. Data protection regulations e. Which of the following options are reasons to use storage virtualization? (Choose three. Which of the following organizations defines the storage virtualization standards? a. DNS 5. Blocks c. Which of the following options can be virtualized according to SNIA storage virtualization taxonomy? (Choose all that apply. It comprises block-level striping with distributed parity. 2. b.) a. 6. . It comprises block-level striping with distributed parity. Exponential data growth and disruptive storage upgrades c. It creates a striped set from a series of mirrored drives. Tape systems d.do not know the answer to a question or are only partially sure of the answer. Where does the storage virtualization occur? (Choose three. 1. The control plane e. Disks b. It comprises block-level striping with double distributed parity. Which of the following options explains RAID 6? a. It is an exact copy (or mirror) of a set of data on two disks. c. d.

7. Select the correct order of operations for completing asynchronous array-based replication.
I. Write operation is received by the primary storage array from the host.
II. Write operation to the primary array is acknowledged locally; the primary storage array does not require a confirmation from the secondary storage array to acknowledge the write operation to the server.
III. The primary storage array maintains the new data queued to be copied to the secondary storage array at a later time, and it initiates a write to the secondary storage array.
IV. An acknowledgement is sent to the primary storage array by the secondary storage array after the data is stored on the secondary storage array.
a. I, II, III, IV
b. I, III, II, IV
c. II, III, I, IV
d. I, II, IV, III
e. Asynchronous mirroring operation was not explained correctly.

8. Which of the following options explains the LUN masking? (Choose two.)
a. It is a feature of Fibre Channel HBA.
b. It is a feature of storage arrays.
c. It should be configured on HBAs.
d. It is a proprietary technique that Cisco MDS switches offer.
e. It provides basic LUN-level security by allowing LUNs to be seen by selected servers.

9. Which of the following options are the advantages of host-based storage virtualization? (Choose three.)
a. It is close to the file system.
b. It is independent of SAN transport.
c. It is licensed and managed per host.
d. It uses the array controller's CPU cycles.
e. It uses the operating system's built-in tools.

10. Which of the following options explains the Logical Volume Manager (LVM)? (Choose two.)
a. The LVM is a collection of ports from a set of connected Fibre Channel switches.
b. The LVM manipulates LUN representation to create virtualized storage to the file system.
c. LVM combines multiple hard disk drive components into a logical unit to provide data redundancy.
d. LVMs can be used to divide large physical disk arrays into more manageable virtual disks.

Foundation Topics

What Is Storage Virtualization?

Storage virtualization is a way to logically combine storage capacity and resources from various heterogeneous, external storage systems into one virtual pool of storage. This virtual pool can then be more easily managed and provisioned as needed. The closest vendor-neutral attempt to make storage virtualization concepts comprehensible has been the work of the Storage Networking Industry Association (SNIA), which has produced useful tutorial content on the various flavors of virtualization technology. According to the SNIA dictionary, storage virtualization is the act of abstracting, hiding, or isolating the internal function of a storage (sub)system or service from applications, computer servers, or general network resources for the purpose of enabling application and network independent management of storage or data.

The beginning of virtualization goes a long way back. Much of the pioneering work in virtualization began on mainframes and has since moved into the non-mainframe, open-system world. Virtual memory operating systems evolved during the 1960s. The late 1970s witnessed the introduction of the first solid-state disk, which was a box full of DRAM chips that appeared to be rotating magnetic disks. The original mass storage device used a tape cartridge, which was a helical-scan cartridge that looked like a disk. Virtualization achieved a new level when the first Redundant Array of Inexpensive Disk (RAID) virtual disk array was announced in 1992 for mainframe systems. The first virtual tape systems appeared in 1997 for mainframe systems, and virtual tape architectures later moved away from the mainframe and entered new markets, increasing the demand for virtual tape. Virtualization gained momentum in the late 1990s as a result of virtualizing SANs and storage networks in an effort to tie together all the computing platforms from the 1980s. In 2003, virtual tape was the most popular storage initiative in many storage management pools. In 2005, virtualization began to move into storage and some disk subsystems. Unlike previous new protocols or architectures, storage virtualization has no standard measure defined by a reputable organization such as INCITS (International Committee for Information Technology Standards) or the IETF (Internet Engineering Task Force).

One decade later, Software Defined Storage (SDS) was proposed in 2013 as a new category of storage software products. SDS refers to the abstraction of the physical elements, similar to server virtualization. The term Software Defined Storage is a follow-up of the technology trend Software Defined Networking, which was first used to describe an approach in network technology that abstracts various elements of networking and creates an abstraction or virtualized layer in software. In networking, the control plane and the data plane have been intertwined within the traditional switches that are deployed today, making abstraction and virtualization more difficult to manage in complex virtual environments. SDS delivers automated, policy-driven, application-aware storage services through orchestration of the underlying storage infrastructure in support of an overall software defined environment. SDS represents a new evolution for the storage industry for how storage will be managed and deployed in the future.

How Storage Virtualization Works

Storage virtualization works through mapping. The storage virtualization layer creates a mapping from the logical storage address space (used by the hosts) to the physical address of the storage device. Such mapping information, also referred to as metadata, is stored in huge mapping tables. When a host requests I/O, the virtualization layer looks at the logical address in that I/O request and, using the mapping table, translates the logical address into the address of a physical storage device. The virtualization layer then performs I/O with the underlying storage device using the physical address, and when it receives the data back from the physical device, it returns that data to the application as if it had come from the logical address. The application is unaware of the mapping that happens beneath the covers. Figure 8-1 illustrates this virtual storage layer that sits between the applications and the physical storage devices.

Figure 8-1 Storage Virtualization Mapping Heterogeneous Physical Storage to Virtual Storage
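The mapping just described is easy to see in a small model. The following Python sketch is an illustrative toy, not any vendor's implementation; it keeps a table from logical block addresses to (device, physical block) pairs and uses it to redirect reads:

# Toy virtualization layer: logical block address -> (device, physical LBA).
mapping_table = {
    0: ("array-A", 500),
    1: ("array-A", 501),
    2: ("array-B", 42),    # the pool can span heterogeneous arrays
}

physical_storage = {
    ("array-A", 500): b"block0",
    ("array-A", 501): b"block1",
    ("array-B", 42):  b"block2",
}

def virtual_read(logical_lba):
    # Look up the metadata, then perform the I/O against the real device.
    device, physical_lba = mapping_table[logical_lba]
    return physical_storage[(device, physical_lba)]

print(virtual_read(2))  # the host never learns the data lives on array-B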

Why Storage Virtualization?

The business drivers for storage virtualization are much the same as those for server virtualization. CIOs and IT managers must cope with shrinking IT budgets and growing client demands. They must simultaneously improve asset utilization, ensure business continuity, use IT resources more efficiently, and become more agile. In addition, they are faced with ever-increasing constraints on power, cooling, and space. The major economic driver is to reduce costs without sacrificing data integrity or performance. Organizations can use the technology to resolve these issues and create a more adaptive, flexible, service-based infrastructure to reflect ongoing changes in business requirements.

The top seven reasons to use storage virtualization are

Exponential data growth and disruptive storage upgrades

Low utilization of existing assets

Growing management complexity with flat or decreasing budgets for IT staff head count

Increasing hard costs and environmental costs to acquire, run, and manage storage

Ensuring rapid, high-quality storage service delivery to application and business owners

Achieving cost-effective business continuity and data protection

Power and cooling consumption in the data center

Storage virtualization provides the means to build higher-level storage services that mask the complexity of all underlying components and enable automation of data storage operations.

What Is Being Virtualized?

The Storage Networking Industry Association (SNIA) taxonomy for storage virtualization is divided into three basic categories: what is being virtualized, where the virtualization occurs, and how it is implemented. This is illustrated in Figure 8-2.

Figure 8-2 SNIA Storage Virtualization Taxonomy Separates Objects of Virtualization from Location and Means of Execution

What is being virtualized may include blocks, disks, tape systems, file systems, and file or record virtualization. Where virtualization occurs may be on the host, in storage arrays, or in the network via intelligent fabric switches or SAN-attached appliances. How the virtualization occurs may be via in-band or out-of-band separation of control and data paths.

The most important reasons for storage virtualization were covered in the “Why Storage Virtualization?” section. The ultimate goal of storage virtualization should be to simplify storage administration. This can be achieved by a layered approach, binding multiple levels of technologies on a foundation of logical abstraction.

The abstraction layer that masks physical from logical storage may reside on host systems such as servers, within the storage network in the form of a virtualization appliance, within SAN switches, or on storage array or tape subsystem targets. In common usage, these alternatives are referred to as host-based, network-based, or array-based virtualization. Figure 8-3 illustrates all components of an intelligent SAN.

Figure 8-3 Possible Alternatives Where the Storage Virtualization May Occur

In addition to differences between where storage virtualization is located, vendors have different methods for implementing virtualized storage transport. In-band and out-of-band virtualization techniques are sometimes referred to as symmetrical and asymmetrical, respectively. The in-band method places the virtualization engine directly in the data path, so that both block data and the control information that governs its virtual appearance transit the same link. The out-of-band method provides separate paths for data and control, presenting an image of virtual storage to the host by one link and allowing the host to directly retrieve data blocks from physical storage on another. In the in-band method the appliance acts as an I/O target to the hosts and acts as an I/O initiator to the storage. It has the opportunity to work as a bridge. For example, it can present an iSCSI target to the host while using standard FC-attached arrays for the storage. This capability enables end users to preserve investment in storage resources while benefiting from the latest host connection mechanisms such as Fibre Channel over Ethernet (FCoE).

Block Virtualization—RAID

RAID is another data storage virtualization technology that combines multiple hard disk drive components into a logical unit to provide data redundancy and improve performance or capacity. Originally, it was defined as Redundant Arrays of Inexpensive Disks (RAID) in a paper written by David A. Patterson, Gary A. Gibson, and Randy Katz, from the University of California, in 1988. Now it is commonly used as Redundant Array of Independent Disks (RAID).

Data is distributed across the hard disk drives in one of several ways, referred to as RAID levels, depending on the specific level of redundancy and performance required. The different schemes or architectures are named by the word RAID followed by a number (for example, RAID 1). Each scheme provides a different balance between the key goals: reliability and availability, performance and capacity. Originally, there were five RAID levels, but many variations have evolved—notably, several nested levels and many nonstandard levels (mostly proprietary). The Storage Networking Industry Association (SNIA) standardizes RAID levels and their associated data formats in the common RAID Disk Drive Format (DDF) standard.

RAID is a staple in most storage products. It can be found in disk subsystems, system motherboards, host bus adapters, volume management software, device driver software, and virtually any processor along the I/O path. Disk subsystems have been the most popular product for RAID implementations and probably will continue to be for many years to come. There is an enormous range of capabilities offered in disk subsystems across a wide range of prices. RAID has been implemented in adapters/controllers for many years, starting with SCSI and including Advanced Technology Attachment (ATA) and serial ATA. Host volume management software typically has several RAID variations and, in some cases, offers many sophisticated configuration options. Running RAID in a volume manager opens the potential for integrating file system and volume management products for management and performance-tuning purposes. Because RAID works with storage address spaces and not necessarily storage devices, it can be used on multiple levels recursively.

The most common RAID levels that are used today are RAID 0 (striping), RAID 1 and variants (mirroring), RAID 5 (distributed parity), and RAID 6 (dual parity).

A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped), without parity information. It provides no data redundancy. RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical disk out of two or more physical ones. A RAID 0 can be created with disks of different sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if an 800 GB disk is striped together with a 500 GB disk, the size of the array will be 1 TB (500 GB × 2).

A RAID 1 is an exact copy (or mirror) of a set of data on two disks. This is useful when read performance or reliability is more important than data storage capacity. Such an array can only be as big as the smallest member disk. For example, if a 500 GB disk is mirrored together with a 400 GB disk, the size of the array will be 400 GB. A classic RAID 1 mirrored pair contains two disks.

Many RAID levels employ an error protection scheme called “parity,” a widely used method in information technology to provide fault tolerance in a given set of data. Most of the RAID levels use simple XOR, but RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois field or Reed-Solomon error correction that we will not be covering in this chapter.

A RAID 5 comprises block-level striping with distributed parity. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity so that no data is lost. RAID 5 requires a minimum of three disks and requires that all drives but one be present to operate. RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors, as well as whole disk failure.

A RAID 6 comprises block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives.
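The XOR parity mentioned above is simple enough to demonstrate directly. The short Python sketch below is a toy illustration, not a production RAID implementation; it computes a parity block for a three-disk stripe and rebuilds a lost block from the survivors:

from functools import reduce

# One stripe across three data disks (equal-length blocks).
d0, d1, d2 = b"\x01\x02", b"\x0a\x0b", b"\xf0\x0d"

def xor_blocks(*blocks):
    # Byte-wise XOR of equal-length blocks.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(d0, d1, d2)          # stored on the parity disk
rebuilt_d1 = xor_blocks(d0, d2, parity)  # recover a failed disk's block
assert rebuilt_d1 == d1
print("lost block recovered:", rebuilt_d1)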

This makes larger RAID groups more practical, especially for high-availability systems, because large-capacity drives take longer to restore. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced.

Many storage controllers allow RAID levels to be nested (hybrid), and storage arrays are rarely nested more than one level deep. The final array of the RAID level is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the “+”, which yields RAID 10 and RAID 50, respectively. The most common examples of nested RAIDs are the following:

RAID 0+1 creates a second striped set to mirror a primary striped set. The array continues to operate with one or more drives failed in the same mirror set, but if drives fail on both sides of the mirror, the data on the RAID system is lost.

RAID 1+0 creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses as long as no mirror loses all its drives.

Table 8-2 lists the different types of RAID levels.
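Usable capacity is often the deciding factor among these levels. The following Python sketch applies the standard capacity formulas for a few common levels; it assumes n identical disks and is a back-of-the-envelope aid, not vendor sizing guidance:

def usable_capacity(level, n_disks, disk_gb):
    """Approximate usable capacity in GB for n identical disks."""
    if level == "RAID0":
        return n_disks * disk_gb          # no redundancy
    if level == "RAID1":
        return disk_gb                    # classic two-disk mirror
    if level == "RAID5":
        return (n_disks - 1) * disk_gb    # one disk's worth of parity
    if level == "RAID6":
        return (n_disks - 2) * disk_gb    # two disks' worth of parity
    if level == "RAID10":
        return (n_disks // 2) * disk_gb   # half the disks mirror the others
    raise ValueError("unknown RAID level")

for level in ("RAID0", "RAID5", "RAID6", "RAID10"):
    print(level, usable_capacity(level, 8, 500), "GB")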




Table 8-2 Overview of Standard RAID Levels

Virtualizing Logical Unit Numbers

Virtualization of storage helps to provide location independence by abstracting the physical location of the data. The user is presented with a logical space for data storage by the virtualization system, which manages the process of mapping that logical space to the physical location. You can have multiple layers of virtualization or mapping by using the output of one layer as the input for a higher layer of virtualization.

A Logical Unit Number (LUN) is a unique identifier that designates individual hard disk devices or grouped devices for address by a protocol associated with a SCSI, iSCSI, Fibre Channel (FC), or similar interface. LUNs are central to the management of block storage arrays shared over a storage-area network (SAN). Virtualization maps the relationships between the back-end resources and front-end resources. The back-end resource refers to a LUN that is not presented to the computer or host system for direct use. The front-end LUN or volume is presented to the computer or host system for use.

How the mapping is performed depends on the implementation. In a block-based storage environment, one block of information is addressed by using a LUN and an offset within that LUN, known as a Logical Block Address (LBA). Typically, one physical disk is broken down into smaller subsets of multiple megabytes or gigabytes of disk space.

In most SAN environments, each individual LUN must be discovered by only one server host bus adapter (HBA). Otherwise, the same volume will be accessed by more than one file system, leading to potential loss of data or security. There are three ways to prevent this multiple access, as shown in Figure 8-13:

LUN masking: LUN masking, a feature of enterprise storage arrays, provides basic LUN-level security by allowing LUNs to be seen only by selected servers that are identified by their port worldwide name (pWWN). Each storage array vendor has its own management and proprietary techniques for LUN masking in the array. In a heterogeneous environment with arrays from different vendors, LUN management becomes more difficult.

LUN zoning: LUN zoning, a proprietary technique that Cisco MDS switches offer, allows LUNs to be selectively zoned to their appropriate host port. LUN zoning can be used instead of, or in combination with, LUN masking in heterogeneous environments or where Just a Bunch of Disks (JBOD) are installed.

LUN mapping: LUN mapping, a feature of Fibre Channel HBAs, allows the administrator to selectively map some LUNs that have been discovered by the HBA. Most administrators configure the HBA to automatically map all LUNs that the HBA discovers. They then perform LUN management in the array (LUN masking) or in the network (LUN zoning). LUN mapping must be configured on every HBA; in a large SAN, this mapping is a large management task.

Figure 8-13 LUN Masking, LUN Mapping, and LUN Zoning on a Storage Area Network (SAN)

Note
JBODs do not have a management function or controller and so do not support LUN masking.
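LUN masking boils down to a per-LUN list of permitted initiator pWWNs that the array consults before exposing a LUN. The Python sketch below is a conceptual model only; the pWWNs and LUN names are made up, and no array vendor implements masking this simply:

# Masking database: which initiator pWWNs may see each LUN.
lun_masks = {
    "LUN0": {"10:00:00:00:c9:2a:b5:01"},
    "LUN1": {"10:00:00:00:c9:2a:b5:01", "10:00:00:00:c9:2a:b5:02"},
}

def visible_luns(initiator_pwwn):
    # The array reports only the LUNs whose mask contains this initiator.
    return sorted(lun for lun, allowed in lun_masks.items()
                  if initiator_pwwn in allowed)

print(visible_luns("10:00:00:00:c9:2a:b5:02"))  # ['LUN1']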

Each storage vendor has different LUN virtualization solutions and techniques. In most LUN virtualization deployments, a virtualizer element is positioned between a host and its associated target disk array. This virtualizer generates a virtual LUN (vLUN) that proxies server I/O operations while hiding specialized data block processes that are occurring in the pool of disk arrays. The physical storage resources are aggregated into storage pools from which the logical storage is created. More storage systems, which may be heterogeneous in nature, can be added as and when needed, and the virtual storage space will scale up by the same amount. This process is fully transparent to the applications using the storage infrastructure.

Thin provisioning is a LUN virtualization feature that helps reduce the waste of block storage resources. Using this technique, although a server can detect a vLUN with a “full” size, only blocks that are present in the vLUN are saved on the storage pool. This feature brings the concept of oversubscription to data storage, consequently demanding special attention to the ratio between “declared” and actually used resources. Utilizing vLUNs, disk expansion and shrinking can be managed easily. More physical storage can be allocated by adding to the mapping table (assuming the using system can cope with online expansion). Similarly, disks can be reduced in size by removing some physical storage from the mapping (uses for this are limited because there is no guarantee of what resides on the areas removed). In this chapter we have briefly covered a subset of these features.
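A thin-provisioned vLUN can be modeled as a sparse mapping that allocates physical blocks only on first write. The Python sketch below is a simplified illustration of that idea; the block accounting and names are invented for the example:

class ThinLUN:
    """Declares a large virtual size but consumes pool blocks on demand."""
    def __init__(self, declared_blocks, pool):
        self.declared_blocks = declared_blocks
        self.pool = pool          # shared free-block counter
        self.allocated = {}       # logical block -> data

    def write(self, lba, data):
        if lba not in self.allocated:
            if self.pool["free"] == 0:
                raise RuntimeError("storage pool exhausted (oversubscribed)")
            self.pool["free"] -= 1   # physical block allocated on first write
        self.allocated[lba] = data

pool = {"free": 100}                             # 100 physical blocks total
lun = ThinLUN(declared_blocks=1000, pool=pool)   # host sees 1000 blocks
lun.write(7, b"data")
print("declared:", lun.declared_blocks, "actually used:", len(lun.allocated))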

Virtualizing Storage-Area Networks

A virtual storage-area network (VSAN) is a collection of ports from a set of connected Fibre Channel switches, which form a virtual fabric. Ports within a single switch can be partitioned into multiple VSANs, despite sharing hardware resources. Conversely, multiple switches can join a number of ports to form a single VSAN. VSANs were designed by Cisco, modeled after the virtual local area network (VLAN) concept in Ethernet networking. In October 2004, the Technical Committee T11 of the International Committee for Information Technology Standards approved VSAN technology to become a standard of the American National Standards Institute (ANSI).

Each VSAN is a separate self-contained fabric using distinctive security policies, zones, events, memberships, and name services. A VSAN, like each FC fabric, can offer different high-level protocols such as FCP, FCIP, FICON, and iSCSI. The geographic location of the switches and the attached devices is independent of their segmentation into logical VSANs. The use of VSANs allows traffic to be isolated within specific portions of the network. If a problem occurs in one VSAN, that problem can be handled with a minimum of disruption to the rest of the network. VSANs can also be configured separately and independently. Figure 8-14 portrays three different SAN islands that are being virtualized onto a common SAN infrastructure on Cisco MDS switches.

Tape Storage Virtualization

Tape storage virtualization can be divided into two categories: tape media virtualization, and tape drive and library virtualization (VTL). Tape media virtualization resolves the problem of underutilized tape media. With tape media virtualization the number of mounts is reduced, and it saves tapes, tape libraries, and floor space.

A virtual tape library (VTL) is typically used for backup and recovery purposes. A VTL presents a storage component (usually hard disk storage) as tape libraries or tape drives for use with existing backup software. Virtualizing the disk storage as tape allows integration of VTLs with existing backup software and existing backup and recovery processes and policies. The benefits of such virtualization include storage consolidation and faster data restore processes. By backing up data to disks instead of tapes, VTL often increases performance of both backup and recovery operations, because disk technology does not rely on streaming and can write effectively regardless of data transfer speeds. Restore processes are found to be faster than backup regardless of implementation. The shift to VTL also eliminates streaming problems that often impair efficiency in tape drives.

Most current VTL solutions use SAS or SATA disk arrays as the primary storage component because of their relatively low cost. The use of array enclosures increases the scalability of the solution by allowing the addition of more disk drives and enclosures to increase the storage capacity. In some cases, the data stored on the VTL's disk array is exported to other media, such as physical tapes, for disaster recovery purposes. (This scheme is called disk-to-disk-to-tape, or D2D2T.) Alternatively, most contemporary backup software products also introduced direct usage of file system storage (especially network-attached storage, accessed through NFS and CIFS protocols over IP networks) not requiring tape library emulation at all. They also often offer a disk staging feature: moving the data from disk to a physical tape for long-term storage.
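Returning to VSANs for a moment: conceptually a VSAN is just a tag on each port that fabric services respect, much like a VLAN ID. The Python sketch below is a toy model (no real switch behaves this simply) showing why two ports in different VSANs cannot see each other even on shared hardware:

# Port-to-VSAN assignment across two physical MDS switches.
port_vsan = {
    ("mds-1", "fc1/1"): 10,   # production fabric
    ("mds-1", "fc1/2"): 20,   # tape fabric
    ("mds-2", "fc1/1"): 10,
}

def same_fabric(port_a, port_b):
    # Fabric services (zoning, name server) are scoped per VSAN, so only
    # ports carrying the same VSAN tag share a virtual fabric.
    return port_vsan[port_a] == port_vsan[port_b]

print(same_fabric(("mds-1", "fc1/1"), ("mds-2", "fc1/1")))  # True
print(same_fabric(("mds-1", "fc1/1"), ("mds-1", "fc1/2")))  # False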

Figure 8-14 Independent Physical SAN Islands Virtualized onto a Common SAN Infrastructure

Since December 2002 Cisco has enabled multiple virtualization features within intelligent SANs. Several examples are Inter-VSAN Routing (IVR), N-Port ID Virtualization (NPIV), and N-Port Virtualizer.

N-Port ID Virtualization (NPIV)

NPIV allows a Fibre Channel host connection or N-Port to be assigned multiple N-Port IDs or Fibre Channel IDs (FCID) over a single link. All FCIDs assigned can now be managed on a Fibre Channel fabric as unique entities on the same physical host. Different applications can be used in conjunction with NPIV. In a virtual machine environment where many host operating systems or applications are running on a physical host, each virtual machine can now be managed independently from zoning, aliasing, and security perspectives.

N-Port Virtualizer (NPV)

An extension to NPIV is the N-Port Virtualizer feature. The N-Port Virtualizer feature allows the blade switch or top-of-rack fabric device to behave as an NPIV-based host bus adapter (HBA) to the core Fibre Channel director. The device aggregates the locally connected host ports or N-Ports into one or more uplinks (pseudo-interswitch links) to the core switches. Whereas NPIV is primarily a host-based solution, NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. Consider that every Fibre Channel switch in a fabric needs a different domain ID, and that the total number of domain IDs in a fabric is limited. In some cases, this limit can be fairly low depending upon the devices attached to the fabric. The problem, though, is that you often need to add Fibre Channel switches to scale the size of your fabric. There is, therefore, an inherent conflict between trying to reduce the overall number of switches to keep the domain ID count low while also needing to add switches to have a sufficiently high port count. NPV is intended to address this problem.

Virtualizing File Systems

A virtual file system is an abstraction layer that allows client applications using heterogeneous file-sharing network protocols to access a unified pool of storage resources. These systems are based on a global file directory structure that is contained on dedicated file metadata servers. Whenever a file system client needs to access a file, it must retrieve information about the file from the metadata servers first (see Figure 8-15).
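The two-step access pattern (metadata first, then data) can be sketched minimally. In the Python toy below, the metadata server answers only "where does this file live," and the client then reads from that location; all names are invented for illustration:

# Global file directory held by the metadata server.
metadata = {"/projects/plan.doc": ("nas-2", "/vol7/000123")}

# Contents held by the storage nodes themselves.
storage_nodes = {"nas-2": {"/vol7/000123": b"file contents"}}

def open_file(path):
    node, location = metadata[path]        # step 1: metadata lookup
    return storage_nodes[node][location]   # step 2: data I/O to that node

print(open_file("/projects/plan.doc"))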

The terms of the contract might bring incompatibility from release to release. NTFS. Therefore. and possibly modified before recompilation. Linux’s Global File System (GFS). be used to access local and network storage devices transparently without the client application noticing the difference. to allow it to work with a new release of the operating system. . The VMware Virtual Machine File System (VMFS). A VFS can. and Unix file systems. Mac OS. Or the supplier of the operating system might make only backward-compatible changes to the contract.Figure 8-15 File System Virtualization The purpose of a virtual file system (VFS) is to allow client applications to access different types of concrete file systems in a uniform way. for example. it is easy to add support for new file system types to the kernel simply by fulfilling the contract. Sun Microsystems introduced one of the first VFSes on Unix-like systems. which would require that the concrete file system support be recompiled. so that applications can access files on local file systems of those types without having to know what type of file system they are accessing. It can be used to bridge the differences in Windows. and the Oracle Clustered File System (OCFS) are all examples of virtual file systems. A VFS specifies an interface (or a “contract”) between the kernel and a concrete file system. so that concrete file system support built for a given release of the operating system would work with future versions of the operating system.

However. can be implemented at the host server in the data path on a network switch blade or on an appliance. the logical volume manager. residing above the disk device driver intercepts the I/O requests (see Figure 8-16) and provides the metadata lookup and I/O mapping. Often global file systems are not granular to the file level. A physical device driver handles the volumes (LUN) presented to the host system. including aggregation (pooling). but most important. which removes the requirement to know the IP address of a website. The second file virtualization method uses a standalone virtualization engine—typically an appliance. Where Does the Storage Virtualization Occur? Storage virtualization functionality. is that this metadata is unique to the users of that specific file system. This eliminates one of the key capabilities of a file virtualization system. you simply type in the domain name. In a common scenario. There are two types of file virtualization methods available today. and objectives for deploying storage virtualization. whereas out-of-band solutions are simpler to implement. The best location to implement storage virtualization functionality depends in part on the preferences. however. Both offer file-level granularity. they can be moved to lower cost. The challenge. In-band solutions typically offer better performance. existing technologies. It is a vital part of both file area network (FAN) and network file management (NFM) concepts. a software layer. file virtualization eliminates the need to know exactly where a file is physically located. we discussed different types of tiered storage. a situation equivalent to not being able to mix brands of servers in a server virtualization project. often called a “global” file system. and device emulation. Because these files are less frequently accessed. . the primary NAS could be a high-speed high performance system storing production data. heterogeneous replication. With this solution. The first is one that’s built in to the storage system or the NAS itself.File/Record Virtualization File virtualization unites multiple storage devices into a single logical pool of files. as well as in a storage system. In some cases. meaning the same hardware must be used to expand these systems and is often available from only one manufacturer. Host-Based Virtualization A host-based virtualization requires additional software running on the host as a privileged task or process. Users look to a single mount point that shows all their files. as you will remember. In a similar fashion. These systems can either sit in the data path (in-band) or outside the data path (out-of-band) and can move files to alternate locations based on a variety of attribute related policies. which could solve the file stub and manual look-up problems. more power-efficient systems that bring down capital and operational costs. File virtualization is similar to Dynamic Name Service (DNS). volume management is built in to the operating system. you could lose the ability to store older data on a less expensive storage system. transparent data migration. In Chapter 6. There may be some advantage here because the file system itself is managing the metadata that contains the file locations. the flexibility to mix hardware in a system. Data can be moved from a tier-one NAS to a tier-two or tier-three NAS or network file server automatically. and in other instances it is offered as a separate product. standalone file virtualization systems do not have to use all the same hardware.

heterogeneous storage systems can be used on a perserver basis. servers. it’s called Logical Volume Manager or LVM. it’s Logical Disk Manager. Network-Based (Fabric-Based) Virtualization A network-based virtualization is a fabric-based switch infrastructure. in Windows. or LDM) that performs virtualization tasks. the virtualization capabilities require standardized protocols to ensure interoperability between fabric-hosted virtualization engines. in Solaris and FreeBSD. Fabric may be composed of multiple switches from different vendors. it will not differentiate the RAID levels. but manual administration of hundreds or thousands of servers in a large data center can be very costly. Fabric-based storage virtualization dynamically interacts with virtualizers on arrays. The SAN fabric provides connectivity to heterogeneous server platforms and heterogeneous storage arrays and tape subsystems. Most modern operating systems have some form of logical volume management built in (in Linux. or appliances. So the storage administrator must ensure that the appropriate classes of storage under LVM control are paired with individual application requirements. LVMs can be used to divide large physical disk arrays into more manageable virtual disks or to create large virtual disks from multiple physical disks. For smaller data centers. . Host-based storage virtualization may require more CPU cycles. this may not be a significant issue. So with host-based virtualization. logical volume management may be complemented by hardware-based virtualization. Use of host-resident virtualization must therefore be balanced against the processing requirements of the upper layer applications so that overall performance expectations can be met. it’s ZFS’s zpool layer. RAID controllers for internal DAS and JBOD external chassis are good examples of hardware-based virtualization. The logical volume manager treats any LUN as a vanilla resource for storage pooling.Figure 8-16 Logical Volume Manager on a Server System The LVM manipulates LUN representation to create virtualized storage to the file system or database manager. For higher performance requirements. but the software that performs the striping calculations often places a noticeable burden on the host CPU. RAID can be implemented without a special controller.

which requires the processing power and internal bandwidth to ensure that they don’t add too much latency to the I/O process for host servers. and storage-based virtualization provides independence from vendor-specific servers and operating systems. and it initiated the fabric application interface standard (FAIS). Asymmetric or out-of-band: The virtualization device in this approach sits outside the data . The fabric-based virtualization engine can be hosted on an appliance. while maintaining high-performance switching of storage data. but don’t handle data traffic. The FAIS initiative aims to provide fabric-based services. the virtualization device is sitting in the path of the I/O data flow. Caching for improving performance and other storage management features such as replication and migration can be supported. The control path processor (CPP) includes some form of operating system. map them to the physical storage locations. or appliance-based utilities to now be supported within fabric switches and directors. Multivendor fabric-based virtualization provides high-availability. and regenerate those I/O requests to the storage systems on the back end. mirroring. The FAIS initiative separates control information from the data path. There are two types of processors. the FAIS application interface. it’s also less disruptive if the virtualization engine goes down. The APIs are a means to more easily integrate storage applications that were originally developed as host. an application-specific integrated circuit (ASIC) blade. FAIS development is thus being driven by the switch manufacturers and by companies who have developed storage virtualization software and virtualization hardware-assist components. The CPP is therefore a high-performance CPU with auxiliary memory. Network-based virtualization provides independence from both. which perform I/O with the actual storage device on behalf of the host. switching. centralized within the switch architecture. or auxiliary module. and the storage virtualization application. routing I/O requests to the proper physical locations. The DPC is optimized for low latency and high bandwidth to execute basic SCSI read/write transactions under the management of one or more CPPs. It can be either in-band or out-of-band. array. referring to whether the virtualization engine is in the data path. This can result in less latency than in-band storage virtualization because the virtualization engine doesn’t handle the data. the CPP and the DCP. Out-of-band solutions typically run on the server or on an appliance in the network and essentially process the control traffic. and data replication. so vendor implementation may be different. They do require that the virtualization engine handle both control traffic and data. FAIS is an open systems project of the ANSI/INCITS T11. Allocation of virtualized storage to individual servers and management of the storage metadata is the responsibility of the storage application running on the CPP. The data path controller (DPC) may be implemented at the port level in the form of an applicationspecific integrated circuit (ASIC) or dedicated CPU.5 task group and defines a set of common application programming interfaces (APIs) to be implemented within fabrics. such as snapshots. The FAIS specifications do not define a particular hardware design for CPP and DPC entities. In-band solutions intercept I/O requests from hosts.A host-based (server-based) virtualization provides independence from vendor-specific storage. 
Network-based virtualization can be done with one of the following approaches: Symmetric or in-band: With an in-band approach. Hosts send I/O requests to the virtualization device. and other tasks. Fabric-based virtualization requires processing power and memory on top of a hardware architecture that is providing processing power for fabric services.

taking advantage of intelligent SAN switches to perform I/O redirection and other virtualization tasks at wire speed. in the split-path approach the I/O-intensive work is offloaded to dedicated port-level ASICs on the SAN switch. . What this means is that special software is needed on the host.path between the host and storage device. which knows to first request the location of data from the virtualization device and then use that mapped physical address to perform the I/O. Hybrid split-path: This method uses a combination of in-band and out-of-band approaches. Whereas in typical in-band solutions. the CPU is susceptible to being overwhelmed by I/O traffic. Figure 8-17 portrays the three architecture options for implementing network-based storage virtualization. These ASIC can look inside Fibre Channel frames and perform I/O mapping and reroute the frames without introducing any significant amount of latency. Specialized software running on a dedicated highly available appliance interacts with the intelligent switch ports to manage I/O traffic and map logical-to-physical storage resources at wire speed.

Some applications and their storage requirements are best suited for a combination of technologies to address specific needs and requirements.Figure 8-17 Possible Architecture Options for How the Storage Virtualization Is Implemented There are many debates regarding the architecture advantages of in-band (in the data path) vs. Cisco SANTAP is one of the Intelligent Storage Services features supported on the Storage Services Module (SSM). out-ofband (out-of-data path with agent and metadata controller) or split path (hybrid of in-band and out-ofband). and MDS 9000 18/4-Port Multiservice . MDS 9222i Multiservice Modular Switch.

Array-based data replication may be synchronous or asynchronous. Cisco SANTAP is a good example of Network-based virtualization. The control path handles requests that create and manipulate replication sessions sent by an appliance. That controller. The Cisco MDS 9000 SANTAP service enables customers to deploy third-party appliance-based storage applications without compromising the integrity. The primary storage array controller also provides replication. synchronous data replication is generally limited to metropolitan distances (~150 kilometers) to avoid performance degradation due to speed of light propagation delay.Module (MSM-18/4). The protocol-based interface that is offered by SANTAP allows easy and rapid integration of the data storage service application because it delivers a loose connection between the application and an SSM. the virtualization layer resides inside a storage array controller. The communication protocol between storage subsystems may be proprietary or based on the SNIA SMI-S standard. which the SANTAP process creates and monitors. availability. data replication requires that a storage system function as a target to its attached servers and as an initiator to a secondary array. The communication between separate storage array controllers enables the disk resources of each system to be managed collectively. or performance of a data path between the server and disk. This is commonly referred to as disk-to-disk (D2D) data replication. In a synchronous replication operation (see Figure 8-18). . and other storage management services across the storage devices that it is virtualizing. For this reason. each write to a primary array must be completed on the secondary array before the SCSI transaction acknowledges completion. migration. The SCSI I/O is therefore dependent on the speed at which both writes can be performed. As one of the first forms of array-to-array virtualization. and multiple other storage devices from the same vendor or from a different vendor can be added behind it. SANTAP also allows LUN mapping to Appliance Virtual Targets (AVT). and any latency between the two systems affects overall performance. Responses are sent to the control logical unit number (LUN) on the appliance. called the primary storage array controller. SANTAP has a control path and a data path. is responsible for providing the mapping functionality. The control path is implemented using an SCSI-based protocol. which reduces the effort needed to integrate applications with the core services being offered by the SSM. either through a distributed management scheme or through a hierarchal master/slave relationship. Array-Based Virtualization In the array-based approach. An appliance sends requests to a Control Virtual Target (CVT).

Point-in-time copy is a technology that permits you to make the copies of large sets of data a common activity. full-copy snapshots can protect against physical destruction. To minimize this risk. data mining. This solves the problem of local performance. Figure 8-19 Asynchronous Mirroring Array-based virtualization services often include automated point-in-time copies or snapshots that may be implemented as a complement to data replication. some implementations required the primary array to temporarily buffer each pending transaction to the secondary so that transient disruptions are recoverable. These copies can be used for a backup. or snapshots. but may result in loss of the most current write to the secondary array if the link between the two systems is lost. and a kind of off-host processing. a checkpoint to restore the state of an application.Figure 8-18 Synchronous Mirroring In asynchronous array-based replication (see Figure 8-19). are virtual or physical copies of data that capture the state of data set contents at a single instant. while one or more write transactions may be queued to the secondary array. Both virtual (copy-on-write) and physical (full-copy) snapshots protect against corruption of data content. point-in-time copy. Additionally. test data. . Like photographic snapshots capture images of physical action. individual write operations to the primary array are acknowledged locally.

The best solution is going to be the one that meets customers’ specific requirements and may vary by customers’ different tiers of storage and applications. connectivity. Table 8-3 provides a basic comparison between various approaches based on SNIA’s virtualization tutorial. SNIA.snia. N-Port Virtualization in the Data Center: http://www.org/education/dictionary SNIA Storage Virtualization tutorial I and II: http://www. A. Santana.org/education/storage_networking_primer/stor_virt Fred G.com/c/en/us/products/collateral/storagenetworking/mds-9000-nx-os-software-release-4-1/white_paper_c11-459263. Indianapolis: Cisco Press. and resiliency without introducing instability. Gustavo.snia.html A. . Storage Virtualization for IT Flexibility. and ease of management.Solutions need to be able to scale in terms of performance. Data Center Virtualization Fundamentals.cisco. Moore. TechTarget 2006. There are many approaches to implement and deploy storage virtualization functionality to meet various requirements.org: http://www. 2013. Sun Microsystems. functionality.org/tech_activities/standards/curr_standards/ddf/ SNIA Dictionary: http://www.snia. Table 8-3 Comparison of Storage Virtualization Levels Reference List “Common RAID Disk Drive Format (DDF) standard”.

Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter. . noted with the key topic icon in the outer margin of the page. Table 8-4 Key Topics Complete Tables and Lists from Memory Print a copy of Appendix B. “Memory Tables” (found on the disc). includes completed tables and lists to check your work.Farley. Long. Indianapolis: Cisco Press. and complete the tables and lists from memory. Indianapolis: Addison-Wesley Professional. Marc. Storage Networking Fundamentals. James. Storage Virtualization—Technologies for Simplifying Data Storage and Management. 2006. Storage Networking Protocol Fundamentals. “Memory Tables Answer Key. Appendix C. Clark. Table 8-4 lists a reference of these key topics and the page numbers on which each is found. or at least the section for this chapter. Volume 1 and 2. Indianapolis: Cisco Press.” also on the disc. 2005. Tom. 2004.

and check your answers in the Glossary: storage-area network (SAN) Redundant Array of Inexpensive Disk (RAID) virtual storage-area network (VSAN) operating system (OS) Logical Unit Number (LUN) virtual LUN (vLUN) application-specific integrated circuit (ASIC) host bus adapter (HBA) thin-provisioning Storage Networking Industry Association (SNIA) software-defined storage (SDS) metadata in-band out-of-band virtual tape library (VTL) disk-to-disk-to-tape (D2D2T) N-Port virtualizer (NPV) N-Port ID virtualization (NPIV) virtual file system (VFS) asynchronous and synchronous mirroring host-based virtualization network-based virtualization array-based virtualization .Define Key Terms Define the following key terms from this chapter.

If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics. and the fabric domain using command-line interface. Table 9-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. providing a modular and sustainable base for the long term. the fabric login. Chapter 7. Formerly known as Cisco SAN-OS.Chapter 9. read the entire chapter. “Cisco MDS Product Family.1 it is rebranded as Cisco MDS 9000 NX-OS Software. It is based on a secure.” discussed Cisco MDS 9000 Software features. and standard Linux core. stable. Cisco NX-OS Software is the network software for Cisco MDS 9000 family and Cisco Nexus family data center switching products. “Do I Know This Already?” Quiz The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. This chapter goes directly into the practical configuration steps of the Cisco MDS product family and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification. zoning. In this chapter we build our storage-area network starting from the initial setup configuration on Cisco MDS 9000 switches. It also describes how to verify virtual storage-area networks (VSAN). You can find the answers in Appendix A. This chapter discusses how to configure Cisco MDS 9000 Series multilayer switches and how to update software licenses.” . starting from release 4. Fibre Channel Storage-Area Networking This chapter covers the following exam topics: Perform Initial Setup on Cisco MDS Switch Verify SAN Switch Operations Verify Name Server Login Configure and Verify VSAN Configure and Verify Zoning The foundation of the data center network is the network software that runs the switches in the network. “Answers to the ‘Do I Know This Already?’ Quizzes.

MDS 9148 c. How does the On-Demand Port Activation license work on the Cisco MDS 9148S 16G FC switch? a. MGMT 0 interface b. MDS 9250i b. COM1. Out-of-band management interface e.Table 9-1 “Do I Know This Already?” Section-to-Question Mapping Caution The goal of self-assessment is to gauge your mastery of the topics in this chapter. 2 ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services. If you do not know the answer to a question or are only partially sure of the answer. and MGMT interface? (Choose all the correct answers. MDS 9706 2. 4. MDS 9148S d. During the initial setup script of MDS 9706. which interfaces can be configured with the Ipv6 address? a. and 8 ports of 10 Gigabit Ethernet for FCoE connectivity. MDS 9706 family director switch does not support in-band management. Via answering yes to Configuring Advanced IP Options. you should mark that question as wrong for purposes of the self-assessment. MDS 9710 b. Which of the following switches support Console port. None 3. MDS 9513 d. There is no configuration option for in-band management on MDS family switches. Which MDS switch models support the PowerOn Auto provisioning with NX-OS 6. 1. c. All FC interfaces c. VSAN interface d.) a. . b.2(9)? a. The base configuration of 20 ports of 16 Gbps Fibre Channel. MDS 9706 5. MDS 9508 c. Via enabling SSH service. During the initial setup script of MDS 9500. MDS 9148S e. how can you configure in-band management? a. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. d.

6. System—BIOS—Loader—Kickstart d. or 48 ports.2 255. The base switch model comes with 8 ports and can be upgraded to models of 16. vsan 4 suspend . no suspend vsan 4 c. c.2. System—Kickstart—BIOS—Loader b. Click here to view code image switch(config)# interface mgmt0 switch(config-if)# ip address 10. Click here to view code image switch(config)# interface mgmt0 switch(config-if)# ipv6 enable switch(config-if)# ipv6 address ipv6 address 2001:0db8:800:200c::417a/64 switch(config-if)# no shutdown c. BIOS—Loader—System—Kickstart 7. or 48 ports. Click here to view code image switch(config-if)# (no) switchport mode F switch(config-if)# (no) switchport mode auto e. Which NX-OS CLI command suspends a specific VSAN traffic? a.255. Which of the following options is the correct configuration for in-band management of MDS 9250i? a. Which is the correct option for the boot sequence? a. The base switch model comes with 24 ports enabled and can be upgraded as needed with a 12-port activation license to support 36 or 48 ports.22. 32.1 b. d. Click here to view code image switch(config)# interface vsan 1 switch(config-if)# no shutdown d. 36. None 8. vsan 4 name SUSPEND b.22.2. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port activation license to support 24.0 switch(config-if)# no shutdown switch(config-if)# exit switch(config)# ip default-gateway 10.255. BIOS—Loader—Kickstart—System c.b.

if needed. Ipv6 address c. 10-Gbps Fibre Channel. you may want to quickly review the chapter. The Cisco MDS family switches provide the following types of ports: Console port (supervisor modules): The console port is available for all the MDS product family.and 10-Gbps Fibre Channel over IP (FCIP) on Gigabit Ethernet ports. any device connected to this port must be capable of asynchronous transmission. trunk mode off b. This port is used to create a local management connection to set the IP address and other initial configuration settings before connecting the switch to the network for the first time. 1. To connect the console port to a computer terminal. No correct answer exists Foundation Topics Where Do I Start? The Cisco MDS 9000 family of storage networking products supports 1-. follow these steps: . Switch 1: Switchport mode E.d. vsan no suspend vsan 4 9. trunk mode auto e. trunk mode auto. trunk mode on. The MDS multilayer directors have dual supervisor modules. which explains in detail how and where to install specific components of the MDS switches. Switch 2: Switchport mode E. Mac address of the MGMT 0 interface d. Switch 1: Switchport mode E.” is an RS-232 port with an RJ-45 interface. Switch 2: Switchport mode F. The console port. trunk mode off. trunk mode on c. For each MDS product family switch there is a specific hardware installation guide. Switch 1: Switchport mode E. This chapter starts with explaining what to do after the switch is installed and powered on following the hardware installation guide. Which of the following options are valid member types for a zone configuration on MDS 9222i switch? a. FC ID 10. trunk mode auto d. 16-. The MDS multiservice and multilayer switches have one management and one console port as well. In Chapter 7 we described all the details about the MDS 9000 product family. Switch 2: Switchport mode E. Switch 2: Switchport mode E. Before starting any configuration. pWWN b. labeled “Console. In which of the following conditions for two MDS switches is the trunking state (ISL) and port mode E? a. Switch 1: Switchport mode E. 4-. the equipment should be installed and mounted onto the racks. 2-. It is an asynchronous (sync) serial port. and each of the supervisor modules have one management and one console port. 8-. 10-Gbps Fibre Channel over Ethernet (FCoE). trunk mode off.

Connect the modem to the COM1 port using the adapters and cables. You can connect the MGMT 10/100/1000 Ethernet port to an external hub. COM1 port is available for all the MDS product family except MDS 9700 director switches. c. the newly activated supervisor module takes over this IP address.default: ATE0Q1&D2&C1S0=1\015 Statistics: tx:17 rx:0 Register Bits:RTS|DTR MGMT 10/100/1000 Ethernet port (supervisor module): This is an Ethernet port that you can use to access and manage the switch by IP address. Connect the adapters using the RJ-45-to-RJ-45 rollover cable (or equivalent crossover cable). You can use it to connect to an external serial communication device such as a modem. MGMT Ethernet port is available for all the MDS product family. Connect the supplied RJ-45 to DB-9 female adapter or RJ-45 to DP-25 female adapter (depending on your computer) to the computer serial port. Note For high availability. This process requires an Ethernet connection to the newly activated supervisor module. 2. The COM1 port (labeled “COM1”) is an RS-232 port with a DB-9 interface. The active supervisor module owns the IP address used by both of these Ethernet connections. COM1 port: This is an RS-232 port that you can use to connect to an external serial communication device such as a modem. 3. such as through Cisco Data Center Network Manager (DCNM). . connect the MGMT 10/100/1000 Ethernet port on the active supervisor module and on the standby supervisor module to the same network or VLAN. no parity. switch. a. The default COM1 settings are as follows: line Aux: |Speed: 9600 bauds Databits: 8 bits per byte Stopbits: 1 bit(s) Parity: none Modem In: Enable Modem Init-String . RJ-45. Connect the RJ-45 to DB-25 modem adapter to the modem. 2. On a switchover. Connect the DB-9 serial adapter to the COM1 port. It is available on each supervisor module of MDS 9500 Series switches. follow these steps: 1. straightthrough UTP cable to connect the MGMT 10/100/1000 Ethernet port to an Ethernet switch port or hub or use a crossover cable to connect to a router interface. or router. We recommend using the adapter and cable provided with the switch. The supervisor modules support an autosensing MGMT 10/100/1000 Ethernet port (labeled “MGMT 10/100/1000”) and have an RJ-45 interface. 8 data bits. 1 stop bit.1. Configure the terminal emulator program to match the following default port characteristics: 9600 baud. To connect the COM1 port to a modem. You need to use a modular. Connect the console cable (a rollover RJ-45 to RJ-45 cable) to the console port and to the RJ-45 to DB-9 adapter or the RJ-45 to DP-25 adapter (depending on your computer) at the computer serial port. b.

SFP+ is in compliance with the protocol of IEEE802. SFP+. SFP and SFP+ specifications are developed by the Multiple Source Agreement (MSA) group. SFP+ is the mainstream design. XFP and SFP+ are 10G fiber optical modules and can connect with other types of 10G modules. Gigabit Ethernet ports (IP storage ports): These are IP Storage Services ports that are used for the FCIP or iSCSI connectivity. and the cable must not exceed the stipulated cable length for reliable communication.Fibre Channel ports (switching modules): These are Fibre Channel (FC) ports that you can use to connect to the SAN or for in-band management. The Cisco SFP. The Cisco MDS 9000 family supports both Fibre Channel and FCoE protocols for SFP+ transceivers. X2. there are two USB drives. and XFP are all terms for a type of transceiver that plugs into a special port on a switch or other network device to convert the port to a copper or fiber interface. Fibre Channel over Ethernet ports (switching modules): These are Fibre Channel over Ethernet (FCoE) ports that you can use to connect to the SAN or for in-band management. the show interface and show interface brief commands display the ID instead of the transmitter type. The size of SFP+ is smaller than XFP. They both have the same size and appearance but are based on different standards. and XENPAK directly. The most up-todate information can be found for pluggable transceivers here: http://www. SFP+ was replaced by the XFP 10 G modules and became the mainstream. SFP. including signal modulation function. The show interface transceiver command and the show interface fc slot/port transceiver command display both values for Cisco-supported SFPs. Slot 0 and LOG FLASH. Each transceiver must match the transceiver on the other end of the cable. thus it moves some functions to the motherboard. The SFP hardware transmitters are identified by their acronyms when displayed in the show interface brief command. and XFP? SFP. SFP has the lowest cost among all three. Let’s also review the transceivers before we continue with the configuration. and EDC. An advantage of the SFP+ modules is that SFP+ has a more compact form factor package than X2 and XFP. SFP is based on the INF-8074i specification and SFP+ is based on the SFF-8431 specification. and SFF-8432 specifications. What are the differences between SFP+. You can use any combination of SFP. CDR. SFP+ transceivers that are supported by the switch.html. X2. and the cable must not exceed the stipulated cable length for reliable communications. The Cisco MDS 9700 Series switch has two USB drives (in each Supervisor-1 module). If the related SFP has a Cisco-assigned extended ID.cisco. and long wavelength (LWL) transceivers with LWL transceivers. . SFF-8431. SFP+. The only restrictions are that short wavelength (SWL) transceivers must be paired with SWL transceivers.3ae. XFP is based on the standard of XFP MSA.com/c/en/us/products/collateral/storage-networking/mds-9000-series-multilayerswitches/product_data_sheet09186a00801bc698. and XENPAK. and X2 devices are hot-swappable transceivers that plug into Cisco MDS 9000 family director switching modules and fabric switch ports. MAC. The USB drive is not available for MDS 9100 Series switches. The LOG FLASH and Slot 0 USB ports use different formats for their data. USB drive/port: It is a simple interface that allows you to connect to different devices supported by NX-OS. In the supervisor module. It can connect with the same type of XFP. The cost of SFP+ is lower than XFP. 
They allow you to choose different cabling types and distances on a port-by-port basis.

The Cisco MDS 9000 family switches can be accessed and configured in many ways. The Cisco MDS product family uses the same operating system as NX-OS. maximum supportable distances. Figure 9-1 Tools for Configuring Cisco NX-OS Software The different protocols that are supported in order to access. The Nexus product line and MDS family switches use mostly the same management features. and they support standard management protocols.htm. check the Fiber Optic Association page (FOA) at http://www. . monitor. and attenuation for optical fiber applications by fiber type.To get the most up-to-date technical specifications for fiber optics per the current standards and specs. In Chapter 5. Figure 9-1 portrays the tools for configuring the Cisco NX-OS software.thefoa. and configure the Cisco MDS 9000 family of switches are described in Table 9-2.” we discussed Nexus management and monitoring features. “Management and Monitoring of Cisco Nexus Devices.org/tech/Linkspec.

If a default answer is not available (for example.Table 9-2 Protocols to Access. . You can press Ctrl+C at any prompt to skip the remaining configuration options and proceed with what you have configured up to that point. The setup utility allows you to build an initial configuration file using the System Configuration dialog. you can use the CLI to perform additional configuration. except for the administrator password. The setup starts automatically when a device has no configuration file in NVRAM. The setup utility allows you to configure only enough connectivity for system management. and Configure the Cisco MDS Family The Cisco MDS NX-OS Setup Utility—Back to Basics The Cisco MDS NX-OS setup utility is an interactive command-line interface (CLI) mode that guides you through a basic (also called a startup) configuration of the system. Monitor. The dialog guides you through initial configuration. Figure 9-2 portrays the setup script flow. If you want to skip answers to any questions. After the file is created. the device uses what was previously configured and skips to the next question. press Enter. the device hostname).

not the configured value. Connect the Ethernet management port on the supervisor module to the network. if there is a default value for the step. it runs a setup program that . and the default gateway IPv4 address to enable SNMP access. the device uses the IPv4 route and the default network IPv4 address. The first time you access a switch in the Cisco MDS 9000 family. the setup utility changes to the configuration using that default. If you enable IPv4 routing. you can use the setup utility at any time for basic device configuration. The setup script only supports IPv4. the default network IPv4 address. you must execute the following steps: Step 1. when no configuration is present. the setup utility does not change that configuration if you skip that step. However. Step 2. For example. if you have already configured the mgmt0 interface. Connect the console port on the supervisor module to the network. connect the Ethernet management ports on both supervisor modules to the network. If IPv4 routing is disabled. However. If you have dual supervisor modules. connect the console ports on both supervisor modules to the network.Figure 9-2 Setup Script Flow You use the setup utility mainly for configuring the system initially. The setup utility keeps the configured values when you skip steps in the script. Step 3. Be sure to carefully check the configuration changes before you save the configuration. the device uses the default gateway IPv4 address. Have a password strategy for your network environment. If you have dual supervisor modules. Note Be sure to configure the IPv4 route. Before starting the setup utility.

. uppercase letters. Enter the password for the administrator. The following are the initial setup procedure steps for configuring out-of-band management on the mgmt0 interface: Step 1. Step 3. A default route that points to the switch providing access to the IP network should be configured on every switch in the Fibre Channel fabric. Click here to view code image Enter the password for admin: admin-password Confirm the password for admin: admin-password Step 4. *Note: setup is mainly used for configuring the system initially. you can create an additional user account (in the network-admin role) besides the administrator’s account. Press Enter at any time to skip a dialog.prompts you for the IP address and other configuration information necessary for the switch to communicate over the supervisor module Ethernet interface. Setup configures only enough connectivity for management of the system. An interface for VSAN 1 is created on every switch in the fabric. A secure password should contain characters from at least three of the classes: lowercase letters. assign the IP address. You can configure out-of-band management on the mgmt 0 interface. Step 2. This management interface uses the Fibre Channel infrastructure to transport IP traffic. Enter yes (yes is the default) to enable a secure password standard. The IP address can only be configured from the CLI. when no configuration is present. Enter yes to enter the setup mode. The in-band management logical interface is VSAN 1. This information is required to configure and manage the switch. Enter yes (no is the default) if you do not want to create additional accounts. This setup utility will guide you through the basic configuration of the system. Press Ctrl+C at any prompt to end the configuration process Step 5. Power on the switch. Click here to view code image Create another login account (yes/no) [no]: yes While configuring your initial setup. When you power up the switch for the first time. So setup always assumes system defaults and not the current system configuration values. After you perform this step. There are two types of management through MDS NX-OS CLI. Click here to view code image Would you like to enter the basic configuration dialog (yes/no): yes The setup utility guides you through the basic configuration process. Each switch should have its VSAN 1 interface configured with either an IPv4 address or an IPv6 address in the same subnetwork. Click here to view code image Do you want to enforce secure password standard (yes/no): yes You can also enable a secure password standard using the password strength-check command. digits. Switches in the Cisco MDS 9000 family boot automatically. and special characters. Use CTRL-C-c at anytime to skip the remaining dialogs. the Cisco MDS 9000 Family Cisco Prime DCNM can reach the switch through the console port.

Configure the read-only or read-write SNMP community string. . b. Click here to view code image Mgmt0 IPv4 address: ip_address Enter the mgmt0 IPv4 subnet mask. Network administrator (network-admin): Has permission to execute all commands and make configuration changes. The default switch name is “switch”.Click here to view code image Enter the user login ID: user_name Enter the password for user_name: user-password Confirm the password for user_name: user-password Enter the user role [network-operator]: network-admin By default. Enter yes (yes is the default) to configure the default gateway. Step 6. IP version 6 (IPv6) is supported in Cisco MDS NX-OS Release 4. Enter yes (yes is the default) at the configuration prompt to configure out-of-band management. The switch name is limited to 32 alphanumeric characters. Enter the SNMP community string. two roles exist in all switches: a. Click here to view code image Enter the switch name: switch_name Step 8. the setup script supports only IP version 4 (IPv4) for the management interface. The operator cannot make any configuration changes. Click here to view code image Continue with out-of-band (mgmt0) management configuration? [yes/ no]: yes Enter the mgmt0 IPv4 address. Click here to view code image Configure read-only SNMP community string (yes/no) [n]: yes b. Enter yes (no is the default) to avoid configuring the read-only SNMP community string. Enter a name for the switch. Click here to view code image SNMP community string: snmp_community Step 7.1(x) and later. Network operator (network-operator): Has permission to view the configuration only. Click here to view code image Mgmt0 IPv4 netmask: subnet_mask Step 9. However. One (of these 64 additional roles) can be configured during the initial setup process. The administrator can also create and customize up to 64 additional roles. a.

Enter yes (yes is the default) to configure the default network. Click here to view code image Continue with in-band (VSAN1) management configuration? (yes/no) [no]: no b. Click here to view code image Destination prefix mask: dest_mask Enter the next hop IP address. Enter yes (yes is the default) to configure the DNS IPv4 address. DNS. Click here to view code image . Enter yes (yes is the default) to enable IPv4 routing capabilities. Enter no (no is the default) at the in-band management configuration prompt. Click here to view code image Configure Advanced IP options (yes/no)? [n]: yes a. such as in-band management static routes. Click here to view code image Next hop ip address: next_hop_address d. Click here to view code image Enable ip routing capabilities? (yes/no) [y]: yes c. Click here to view code image Destination prefix: dest_prefix Enter the destination prefix mask. IP address of the default gateway: default_gateway Step 10. Enter yes (yes is the default) to configure a static route. Click here to view code image Configure static route: (yes/no) [y]: yes Enter the destination prefix. and domain name. default network.Click here to view code image Configure the default-gateway: (yes/no) [y]: yes Enter the default gateway IP address. Enter yes (no is the default) to configure advanced IP options. Click here to view code image Default network IP address [dest_prefix]: dest_prefix e. Click here to view code image Configure the default-network: (yes/no) [y]: yes Enter the default network IPv4 address.

Click here to view code image Enter the type of drop to configure congestion/no_credit drop? (con/no) [c]:con Step 15. Click here to view code image Type the SSH key you would like to generate (dsa/rsa)? rsa Enter the number of key bits within the specified range. Enter yes (yes is the default) to enable the SSH service Click here to view code image Enabled SSH service? (yes/no) [n]: yes Enter the SSH key type. Click here to view code image Configure congestion or no_credit drop for fc interfaces? (yes/ no) [q/quit] to quit [y]: yes Step 14. Enter yes (yes is the default) to configure congestion or no_credit drop for FC interfaces. Click here to view code image Enter the number of key bits? (768-2048) [1024]: 2048 Step 12.Configure the DNS IP address? (yes/no) [y]: yes Enter the DNS IP address. Enter a value from 100 to 1000 (d is the default) to calculate the number of milliseconds for congestion or no_credit drop. Enter con (con is the default) to configure congestion or no_credit drop. Enter yes (no is the default) to skip the default domain name configuration. Click here to view code image Configure the default domain name? (yes/no) [n]: yes Enter the default domain name. DNS IP address: name_server f. Enter yes (no is the default) to disable the Telnet service. Click here to view code image Default domain name: domain_name Step 11. Click here to view code image Enable the telnet service? (yes/no) [n]: yes Step 13. Click here to view code image Enter number of milliseconds for congestion/no_credit drop[100 1000] or [d/default] for default:100 .

You see the new configuration. Click here to view code image Configure default switchport interface state (shut/noshut) [shut]: shut The management Ethernet interface is not shut down at this point. Click here to view code image Configure default zone policy (permit/deny) [deny]: permit Permits traffic flow to all members of the default zone. Enter enhanced (basic is the default) to configure default-zone mode as enhanced. NTP server IP address: ntp_server_IP_address Step 18. Click here to view code image Configure default switchport trunk mode (on/off/auto) [off]: on Step 20. Enter shut (shut is the default) to configure the default switch port interface to the shut (disabled) state. Click here to view code image . Enter on (off is the default) to configure the switch port trunk mode. Click here to view code image Configure default port-channel auto-create state (on/off) [off]: on Step 22. FCIP. Enter a mode for congestion or no_credit drop. iSCSI. Review and edit the configuration that you have just entered. Enter yes (no is the default) to configure the NTP server. Enter yes (no is the default) to disable a full zone set distribution.Step 16. Only the Fibre Channel. Click here to view code image Configure NTP server? (yes/no) [n]: yes Enter the NTP server IPv4 address. Step 19. Enter on (off is the default) to configure the port-channel auto-create state. Enter yes (yes is the default) to configure the switchport mode F. Step 24. Click here to view code image Enable full zoneset distribution (yes/no) [n]: yes Overrides the switch-wide default for the full zone set distribution feature. Click here to view code image Enter mode for congestion/no_credit drop[E/F]: Step 17. Step 23. Click here to view code image Configure default switchport mode F (yes/no) [n]: y Step 21. Enter permit (deny is the default) to deny a default zone policy configuration. and Gigabit Ethernet interfaces are shut down.

This ensures that the kickstart and system images are also automatically configured. Type yes to save the new configuration. Enter no (no is the default) if you are satisfied with the configuration.Configure default zone mode (basic/enhanced) [basic]: enhanced Overrides the switch-wide default zone mode as enhanced. Step 28.com/tac . none of your changes are updated the next time the switch is rebooted.cisco. Step 25. Log in to the switch using the new username and password. Enter yes (yes is default) to use and save this configuration. Click here to view code image Use this configuration and save it? (yes/no) [y]: yes If you do not save the configuration at this point. depending on which you installed. Example 9-1 portrays show version display output. Verify that the required licenses are installed in the switch using the show license command Step 29. The following configuration will be applied: Click here to view code image username admin password admin_pass role network-admin username user_name password user_pass role network-admin snmp-server community snmp_community ro switchname switch interface mgmt0 ip address ip_address subnet_mask no shutdown ip routing ip route dest_prefix dest_mask dest_address ip default-network dest_prefix ip default-gateway default_gateway ip name-server name_server ip domain-name domain_name telnet server disable ssh key rsa 2048 force ssh server enable ntp server ipaddr ntp_server system default switchport shutdown system default switchport trunk mode on system default switchport mode F system default port-channel auto-create zone default-zone permit vsan 1-4093 zoneset distribute full vsan 1-4093 system default zone mode enhanced Would you like to edit the configuration? (yes/no) [n]: n Step 26. Verify that the switch is running the latest Cisco NX-OS 6. Step 27.2(x) software. issuing the show version command. Example 9-1 Verifying NX-OS Version on an MDS 9710 Switch Click here to view code image DS-9710-1# show version Cisco Nexus Operating System (NX-OS) Software TAC support: http://www.

1 5 6.Documents: http://www. All rights reserved. A copy of each such license is available at http://www.2(3) system: version 6. Ethernet Plugin Step 30.opensource.opensource.-----1 6.org/licenses/lgpl-2.----.php and http://www.2(3) 1.1 2 6. The copyrights to certain works contained in this software are owned by other third parties and used and distributed under license. Inc.html Copyright (c) 2002-2013. 20 second(s) Last reset Reason: Unknown System version: 6.2(3) 1.php Software BIOS: version 3.-------------.---------1 1c-df-0f-78-cf-d8 to 1c-df-0f-78-cf-db JAE171308VQ 2 1c-df-0f-78-d8-50 to 1c-df-0f-78-d8-53 JAE1714019Y 5 1c-df-0f-78-df-05 to 1c-df-0f-78-df-17 JAE171504KY Status ---------ok ok active * ha-standby .1.1.cisco. Certain components of this software are licensed under the GNU General Public License (GPL) version 2.0 kickstart: version 6.----------------------------------.0 6 6.0.com/en/US/products/ps9372/tsd_products_support_series_ home.0 Mod MAC-Address(es) Serial-Num --. Verify the status of the modules on the switch using the show module (see Example 9-2) command. 0 hour(s).2(3) 1. Processor Board ID JAE171504KY Device name: MDS-9710-1 bootflash: 3915776 kB slot0: 0 kB (expansion flash) Kernel uptime is 0 day(s).org/licenses/gpl-2.0 or the GNU Lesser General Public License (LGPL) Version 2. 39 minute(s).2(3) BIOS compile time: 02/27/2013 kickstart image file is: bootflash:///kickstart kickstart compile time: 7/10/2013 2:00:00 [07/31/2013 10:10:16] system image file is: bootflash:///system system compile time: 7/10/2013 2:00:00 [07/31/2013 11:59:38] Hardware cisco MDS 9710 (10 Slot) Chassis ("Supervisor Module-3") Intel(R) Xeon(R) CPU with 8120784 kB of memory. Example 9-2 Verifying Modules on an MDS 9710 Switch Click here to view code image MDS-9710-1# show module Mod Ports Module-Type Model --.1.-----------------1 48 2/4/8/10/16 Gbps Advanced FC Module DS-X9448-768K9 2 48 2/4/8/10/16 Gbps Advanced FC Module DS-X9448-768K9 5 0 Supervisor Module-3 DS-X97-SF1-K9 6 0 Supervisor Module-3 DS-X97-SF1-K9 Mod Sw Hw --.2(3) Service: Plugin Core Plugin. Cisco Systems.-------------------------------------.2(3) 1.

Enter no (yes is the default) to configure a static route. Enter no (yes is the default) to enable IPv4 routing capabilities.6 Mod --1 2 5 6 Xbar --1 2 3 Xbar --1 2 3 Xbar --1 2 3 1c-df-0f-78-df-2b to 1c-df-0f-78-df-3d JAE171504E6 Online Diag Status -----------------Pass Pass Pass Pass Ports Module-Type Model ----. Enter yes (no is the default) at the in-band management configuration prompt.-----NA 1. Click here to view code image VSAN1 IPv4 address: ip_address Enter the IPv4 subnet mask.0 NA 1.---------NA JAE1710085T NA JAE1710087T NA JAE1709042X Status ---------ok ok ok The following are the initial setup procedure steps for configuring in-band management logical interface on VSAN 1: You can configure both in-band and out-of-band configuration together by entering yes in both step 10c and step 10d in the following procedure: Step 10. Click here to view code image Enable ip routing capabilities? (yes/no) [y]: no c. . Click here to view code image VSAN1 IPv4 net mask: subnet_mask b. DNS. Click here to view code image Continue with in-band (VSAN1) management configuration? (yes/no) [no]: yes Enter the VSAN 1 IPv4 address. and domain name.0 NA 1. Click here to view code image Configure Advanced IP options (yes/no)? [n]: yes a.-----------------0 Fabric Module 1 DS-X9710-FAB1 0 Fabric Module 1 DS-X9710-FAB1 0 Fabric Module 1 DS-X9710-FAB1 Sw Hw -------------. static routes. Enter yes (no is the default) to configure advanced IP options such as in-band management.----------------------------------.0 MAC-Address(es) Serial-Num -------------------------------------. default network.

When you power up a switch for the first time. If you continue in POAP mode. . Enter no (yes is the default) to configure the DNS IPv4 address. When a configuration file is not found. Click here to view code image Configure the default-network: (yes/no) [y]: no e. Enter no (no is the default) to skip the default domain name configuration. If you exit POAP mode. all the front-panel interfaces are set up in the default configuration. the POAP capability is available on Cisco MDS 9148 and MDS 9148S multilayer fabric switches (at the time of writing this book). You can choose to exit or continue with POAP. Click here to view code image Configure the default domain name? (yes/no) [n]: no In the final configuration the following CLI commands will be added to the configuration: Click here to view code image interface vsan1 ip address ip_address subnet_mask no shutdown The Power On Auto Provisioning POAP (Power On Auto Provisioning) automates the process of upgrading software images and installing configuration files on Cisco MDS and Nexus switches that are being deployed in the network. During startup. Click here to view code image Configure the DNS IP address? (yes/no) [y]: no f. The USB device on MDS 9148 or on MDS 9148S does not contain the required installation files. It also obtains the IP address of the DCNM server to download the configuration script that is run on the switch to download and install the appropriate software image and device configuration file. and bootstraps itself with its interface IP address. POAP mode starts. When a Cisco MDS switch with the POAP feature boots and does not find the startup configuration. gateway address. locates the DCNM DHCP server. and DCNM DNS server IP addresses. the switch enters POAP mode.Click here to view code image Configure static route: (yes/no) [y]: no d. it loads the software image that is installed at manufacturing and tries to find a configuration file from which to boot. A TFTP server that contains the configuration script is used to automate the software image installation and configuration process. The POAP setup requires the following network infrastructure: a DHCP server to bootstrap the interface IP address.2(9). Enter no (yes is the default) to configure the default network. you enter the normal interactive setup script. gateway. asking if you want to abort POAP and continue with a normal setup. and DNS (Domain Name System) server. Starting with NX-OS 6. Figure 9-3 portrays the network infrastructure for POAP. One or more servers can contain the desired software images and configuration files. a prompt appears.

Step 3. Install the switch in the network. enter y (yes). Deploy a DHCP server and configure it with the interface. This information is provided to the switch when it first boots. use one of the following commands: show running config: Displays the running configuration show startup config: Displays the startup configuration Licensing Cisco MDS 9000 Family NX-OS Software Features . Step 2. To verify the configuration after bootstrapping the device using POAP. Power on the switch. Deploy one or more servers to host the software images and configuration files. gateway. Step 3.Figure 9-3 POAP Network Infrastructure Here are the steps to set up the network environment using POAP: Step 1. (Optional) if you want to exit POAP mode and enter the normal interactive setup script. (Optional) Put the POAP configuration script and any other desired software image and switch configuration files on a USB device that is accessible to the switch. and TFTP server IP addresses and a boot file with the path and name of the configuration script file. Step 4. Step 2. Deploy a TFTP server to host the configuration script. After you set up the network environment for POAP setup. follow these steps to configure the switch using POAP: Step 1.

the switch provides a grace period before the unlicensed features are disabled. Mainframe Package. such as the Cisco MDS 9000 Enterprise Package. Obtain the serial number for your device by entering the show license host-id command. Module-based licenses allow features that require additional hardware modules. DMM Package. Cisco license packages require a simple installation of an electronic license: no software installation or upgrade is required. Any feature not included in a license package is bundled with the Cisco MDS 9000 family switches. The licensing model defined for the Cisco MDS product line has two options: Feature-based licenses allow features that are applicable to the entire switch. The grace period allows plenty of time to correct the licensing issue. so license keys are never lost even during a software reinstallation. and XRC Acceleration Package. The second step is using the device and the licensed features. the Cisco MDS 9134 Fabric Switch. The license key file is mirrored on both supervisor modules. This single console reduces management overhead and prevents problems because of improperly maintained licensing. Some features are logically grouped into add-on packages that must be licensed separately. Performing a Manual Installation: If you have existing devices or if you want to install the licenses on your own. The cost varies based on a per-switch usage. DCNM Packages. On-demand port activation licenses are also available for the Cisco MDS 9000 family blade switches and 4-Gbps Cisco MDS 9100 Series multilayer fabric switches. the Cisco Fabric Switch for HP c-Class Blade System. the license file continues to function from the version that is available on the chassis. Even if both supervisor modules fail. Obtaining a Factory-Installed License: You can obtain factory-installed licenses for a new Cisco NX-OS device. All licensed features may be evaluated for up to 120 days before a license is expired. Licensing allows you to access specified premium features on the switch after you install the appropriate license for that feature. Licenses can also be installed on the switch in the factory. Devices with dual supervisors have the following additional high-availability features: The license software runs on both supervisor modules and provides failover protection. SAN Extension over IP Package. IOA Package. Cisco Data Center Network Manager for SAN includes a centralized license management console that provides a single interface for managing licenses across all Cisco MDS switches in the fabric. Cisco MDS stores license keys on the chassis serial PROM (SPROM). . you must first obtain the license key file and then install that file in the device. The first step is to contact the reseller or Cisco representative and request this service. License Installation You can either obtain a factory-installed license (only applies to new device orders) or perform a manual license installation of the license (applies to existing devices in your network). You can also obtain licenses to activate ports on the Cisco MDS 9124 Fabric Switch. and the Cisco Fabric Switch for IBM BladeCenter. The cost varies based on a per-module usage. An example is the IPS-8 or IPS-4 module using the FCIP feature. Obtaining the License Key File consists of the following steps: Step 1.Licenses are available for all switches in the Cisco MDS 9000 family. If an administrative error does occur with licensing.

as shown in Example 9-3. ftp. You can access the Product License Registration website from the Software Download website at this URL: http://www. Step 5. Step 7. Step 5. Locate the website URL from the software license claim certificate document. Installing the License Key File consists of the following steps: Step 1. contact Cisco Technical Support at this URL: http://www. Step 3. Exit the device console and open a new terminal session to view all license files installed on the device using the show license command.com/en/US/support/tsd_cisco_worldwide_contacts. Click here to view code image switch# install license bootflash:license_file. Example 9-3 show license Output on MDS 9222i Switch Click here to view code image MDS-9222i# show license MDS20090713084124110. Step 3.lic: SERVER this_host ANY VENDOR cisco INCREMENT ENTERPRISE_PKG cisco 1. or scp) or use the Cisco Fabric Manager License Wizard under Fabric Manager Tools to copy a license to the switch. You can use the file transfer service (tftp. ftp.html.cisco. If you cannot locate your software license claim certificate. Step 4.The host ID is also referred to as the device serial number. Step 2. or scp) or use the Cisco Fabric Manager License Wizard under Fabric Manager Tools to copy a license to the switch. sftp. Locate the product authorization key (PAK) from the software license claim certificate document. Log in to the device through the console port of the active supervisor.com/cisco/web/download/index. Follow the instructions on the Product License Registration website to register the license for your device.html. sfp. sfp. (Optional) Back up the license key file. Click here to view code image mds9710-1# show licenser host-id License hostid: VDH=JAF1710BBQH Step 2. Perform the installation by using the install license command on the active supervisor module from the device console. Use the copy licenses command to save your license file to either the bootflash: directory or a slot0: device. You can use the file transfer service (tftp.done Step 4. Obtain your claim software license certificate document. sftp.lic Installing license . Step 6.0 permanent uncounted \ VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-LICENSESINTRL</ SKU> \ HOSTID=VDH=FOX1244H0U4 \ ..cisco.

You can use the show license usage command to identify all the active features:

switch# show license usage ENTERPRISE_PKG
Application
-----------
ivr
qos_manager
-----------

You can use the show license brief command to display a list of license files installed on the device (see Example 9-4).

Example 9-4 show license brief Output on MDS 9222i Switch

MDS-9222i# show license brief
MDS20090713084124110.lic
MDS201308120931018680.lic

You can use the show license file command to display information for a specific license file:

MDS-9222i# show license file MDS201308120931018680.lic
MDS201308120931018680.lic:
SERVER this_host ANY
VENDOR cisco
INCREMENT SAN_EXTN_OVER_IP_18_4 cisco 1.0 permanent 2 \
  VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>L-M92EXT1AK9=</SKU> \
  HOSTID=VDH=FOX1244H0U4 \
  NOTICE="<LicFileID>20130812093101868</LicFileID><LicLineID>1</LicLineID> \
  <PAK></PAK>" SIGN=DD075A320298

To uninstall an installed license file, use the clear license filename command, where filename is the name of the installed license key file:

switch# clear license Enterprise.lic
Clearing license Enterprise.lic

When you enable a Cisco NX-OS software feature, it can activate a license grace period.

If the license is time bound, you must obtain and install an updated license; contact technical support to request an updated license. After obtaining the license file, you can update the license file by using the update license url command, where url specifies the bootflash:, slot0:, usb1:, or usb2: location of the updated license file.

You can enable the grace period feature by using the license grace-period command:

switch# configure terminal
switch(config)# license grace-period

Verifying the License Configuration

To display the license configuration, use one of the following commands:

show license: Displays information for all installed license files.

show license brief: Displays summary information for all installed license files.

show license file: Displays information for a specific license file.

show license host-id: Displays the host ID for the physical device.

show license usage: Displays the usage information for installed licenses.

On-Demand Port Activation Licensing

The Cisco MDS 9250i is available in a base configuration of 20 ports (upgradeable to 40 through an On-Demand Port Activation license) of 16 Gbps Fibre Channel, 2 ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services, and 8 ports of 10 Gigabit Ethernet for FCoE connectivity.

On the Cisco MDS 9148, the On-Demand Port Activation license allows the addition of eight 8-Gbps ports. Customers have the option of purchasing preconfigured models of 16, 32, or 48 enabled ports and upgrading the 16- and 32-port models onsite all the way to 48 ports by adding these licenses.

The Cisco MDS 9148S 16G Multilayer Fabric Switch is a compact one rack-unit (1RU) switch that scales from 12 to 48 line-rate 16 Gbps Fibre Channel ports. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port Cisco MDS 9148S On-Demand Port Activation license to support configurations of 24, 36, or 48 ports.

You can use the show license usage command (see Example 9-5) to view any licenses assigned to a switch. If a license is in use, the status displayed is In use. If a license is installed but no ports have acquired a license, the status displayed is Unused. The PORT_ACTIVATION_PKG does not appear as installed if you have only the default license installed.

Example 9-5 Verifying License Usage on an MDS 9148 Switch

MDS-9148# show license usage
Feature              Ins  Lic    Status  Expiry Date  Comments
                          Count
----------------------------------------------------------------------------
FM_SERVER_PKG        No   -      Unused               -
ENTERPRISE_PKG       No   -      Unused               Grace expired
PORT_ACTIVATION_PKG  No   16     In use  never        -
----------------------------------------------------------------------------

Example 9-6 shows the default port license configuration for the Cisco MDS 9148 switch.

Example 9-6 Verifying Port-License Usage on an MDS 9148 Switch

bdc-mds9148-2# show port-license
Available port activation licenses are 0
-----------------------------------------------
Interface    Cookie      Port Activation License
-----------------------------------------------
fc1/1        16777216    acquired
fc1/2        16781312    acquired
fc1/3        16785408    acquired
fc1/4        16789504    acquired
fc1/5        16793600    acquired
fc1/6        16797696    acquired
fc1/7        16801792    acquired
fc1/8        16805888    acquired
fc1/9        16809984    acquired
fc1/10       16814080    acquired
fc1/11       16818176    acquired
fc1/12       16822272    acquired
fc1/13       16826368    acquired
fc1/14       16830464    acquired
fc1/15       16834560    acquired
fc1/16       16838656    acquired
fc1/17       16842752    eligible
<snip>
fc1/46       16961536    eligible
fc1/47       16965632    eligible
fc1/48       16969728    eligible

There are three different statuses for port licenses, which are described in Table 9-3.

Table 9-3 Port Activation License Status Definitions

Making a Port Eligible for a License

By default, all ports are eligible to receive a license. However, if a port has already been made ineligible and you prefer to activate it, you must make that port eligible by using the port-license command. The steps are as follows:

1. configure terminal
2. interface fc slot/port
3. [no] port-license
4. exit
5. (Optional) show port-license
6. (Optional) copy running-config startup-config

Acquiring a License for a Port

If you prefer not to accept the default on-demand port license assignments, you can explicitly acquire licenses for the ports to which you want them assigned:

1. configure terminal
2. interface fc slot/port
3. [no] port-license acquire
4. exit
5. (Optional) show port-license
6. (Optional) copy running-config startup-config

Moving Licenses Among Ports

You can move a license from a port (or range of ports) at any time. You first release the license from the port that holds it, and then acquire it on the port to which you want to move the license. If you attempt to move a license to a port and no license is available, the switch returns the message "port activation license not available."

1. configure terminal
2. interface fc slot/port
3. shutdown
4. no port-license
5. exit
6. interface fc slot/port
7. shutdown
8. port-license acquire
9. no shutdown
10. exit
11. (Optional) show port-license
12. (Optional) copy running-config startup-config

Cisco MDS 9000 NX-OS Software Upgrade and Downgrade

Each switch is shipped with the Cisco MDS NX-OS operating system for Cisco MDS 9000 family switches. The Cisco MDS NX-OS software consists of two images: the kickstart image and the system image. The images and variables are important factors in any install procedure. You must specify the variable and the respective image to upgrade or downgrade your switch. Both images are not always required for each install. To select the kickstart image, use the KICKSTART variable. To select the system image, use the SYSTEM variable.
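As a concrete illustration, the KICKSTART and SYSTEM variables are set with the boot command; a minimal sketch, with image filenames that are only examples (use the filenames that match your target release, already copied to bootflash:):

switch# configure terminal
switch(config)# boot kickstart bootflash:m9500-sf2ek9-kickstart-mz.6.2.5.bin
switch(config)# boot system bootflash:m9500-sf2ek9-mz.6.2.5.bin
switch(config)# exit
switch# copy running-config startup-config

The install all command described next sets these variables for you; setting them manually matters mainly when you upgrade with the reload method.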

The software image install procedure is dependent on the following factors:

Software images: The kickstart and system image files reside in directories or folders that can be accessed from the Cisco MDS 9000 family switch prompt.

Image version: Each image file has a version.

Flash disks on the switch: The bootflash: resides on the supervisor module, and the CompactFlash disk is inserted into the slot0: device.

Supervisor modules: There are single or dual supervisor modules. On switches with dual supervisor modules, both supervisor modules must have Ethernet connections on the management interfaces (mgmt 0) to maintain connectivity when switchovers occur during upgrades and downgrades.

Before starting any software upgrade or downgrade, the user should review the NX-OS Release Notes. In the Release Notes, there are specific sections that explain compatibility between different software versions and hardware modules. In some cases, the user may need to take a specific path to perform nondisruptive software upgrades or downgrades.

Compatibility is established based on the image and configuration:

Image incompatibility: The running image and the image to be installed are not compatible.

Configuration incompatibility: There is a possible incompatibility if certain features in the running image are turned off, because they are not supported in the image to be installed. If the running image and the image you want to install are incompatible, the software reports the incompatibility. In some cases, you may decide to proceed with this installation. If the active and the standby supervisor modules run different versions of the image, both images may be High Availability (HA) compatible in some cases and incompatible in others.

The image to be installed is considered incompatible with the running image if one of the following statements is true:

An incompatible feature is enabled in the image to be installed that is not available in the running image and may cause the switch to move into an inconsistent state. In this case, the incompatibility is strict.

An incompatible feature is enabled in the image to be installed that is not available in the running image but does not cause the switch to move into an inconsistent state. In this case, the incompatibility is loose.

Note What is a nondisruptive NX-OS upgrade or downgrade? Nondisruptive upgrades on the Cisco MDS fabric switches take down the control plane for not more than 80 seconds. During the upgrade, the control plane is down, but the data plane remains up. So new devices will be unable to log in to the fabric via the control plane, but existing devices will not experience any disruption of traffic via the data plane. In some cases, when the upgrade has progressed past the point at which it cannot be stopped gracefully, or if a failure occurs, the software upgrade may be disruptive.

To view the results of a dynamic compatibility check, issue the show incompatibility system bootflash:filename command. Use this command to obtain further information when the install all

command returns the following message:

Warning: The startup config contains commands not supported by the standby supervisor; as a result, some resources might become unavailable after a switchover.
Do you wish to continue? (y/n) [y]: n

You can upgrade any switch in the Cisco MDS 9000 family using one of the following methods:

Automated, one-step upgrades using the install all command. This upgrade is nondisruptive for director switches.

Quick, one-step upgrade using the reload command. This upgrade is disruptive.

The install all command launches a script that greatly simplifies the boot procedure and checks for errors and the upgrade impact before proceeding. It compares and presents the results of the compatibility checks before proceeding with the installation. You can exit if you do not want to proceed with these changes.

When the Cisco MDS 9000 Series multilayer switch is first switched on or during reboot, the system BIOS on the supervisor module first runs power-on self-test (POST) diagnostics. The BIOS then runs the loader bootstrap function. The loader obtains the location of the kickstart file, usually on bootflash, checks the file systems, and verifies the kickstart image before loading it. The kickstart loads the Linux kernel and device drivers and then needs to load the system image. The boot parameters are held in NVRAM and point to the location and name of both the kickstart and system images, usually on bootflash. The kickstart then verifies and loads the system image. Finally, the system image loads the Cisco NX-OS software and proceeds to load the startup configuration that contains the switch configuration from NVRAM.

If the boot parameters are missing or have an incorrect name or location, the boot process fails at the last stage. If this error happens, the administrator must recover from the error and reload the switch. Before running the reload command, copy the correct kickstart and system images to the correct location and change the boot commands in your configuration. Figure 9-4 portrays the boot sequence.

Figure 9-4 Boot Sequence

For the MDS Director switches with dual supervisor modules, the boot sequence steps are as follows:

1. Upgrade the BIOS on the active and standby supervisor modules and the data modules.
2. Bring up the standby supervisor module with the new kickstart and system images.
3. Switch over from the active supervisor module to the upgraded supervisor module.
4. Bring up the old active supervisor module with the new kickstart and system images.
5. Perform a nondisruptive image upgrade for each data module (one at a time).
6. Upgrade complete.

Upgrading to Cisco NX-OS on an MDS 9000 Series Switch

During any firmware upgrade, use the console connection. Be aware that if you are upgrading through the management interface, you must have a working connection to both supervisors, because this process causes a switchover and the current standby supervisor will be active after the upgrade. For MDS switches with only one supervisor module, Steps 7 and 8 will not be executed. In this section we discuss firmware upgrade steps for a director switch with dual supervisor modules. To upgrade the switch, use the latest Cisco MDS NX-OS Software on the Cisco MDS 9000 director series switch, and follow these steps:

Step 1. Verify the following physical connections for the new Cisco MDS 9500 family switch:

The console port is physically connected to a computer terminal (or terminal server).

The management 10/100 Ethernet port (mgmt0) is connected to an external hub, switch, or router.

Step 2. Issue the copy running-config startup-config command to store your current running configuration. You can also create a backup of your existing configuration to a file by issuing the copy running-config bootflash:backup_config.txt command.

Step 3. Install licenses (if necessary) to ensure that the required features are available on the switch.

Step 4. Verify that your switch is running compatible hardware by checking the Release Notes.

Step 5. Access the Software Download Center, and select the required Cisco MDS NX-OS Release 6.2(x) image file.

Step 6. Download the files to an FTP or TFTP server.

Step 7. Ensure that the required space is available in the bootflash: directory for the image file(s) to be copied using the dir bootflash: command.

Step 8. If you need more space on the active supervisor module bootflash, delete unnecessary files to make space available. Use the delete bootflash: filename command to remove unnecessary files.

switch# del m9500-sf2ek9-kickstart-mz.6.2.1.bin
switch# del m9500-sf2ek9-mz-npe.6.2.1.bin

Step 9. Verify that space is available on the standby supervisor module bootflash on a Cisco MDS 9500 Series switch.

switch# attach mod x   (where x is the module number of the standby supervisor)
switch(standby)# dir bootflash:
      12288  Aug 26 19:06:14 2011  lost+found/
   16206848  Jul 01 10:54:49 2011  m9500-sf2ek9-kickstart-mz.6.2.1.bin
   16604160  Jul 01 10:20:07 2011  m9500-sf2ek9-kickstart-mz.6.2.3.bin
Usage for bootflash://sup-local
  122811392 bytes used
   61748224 bytes free
  184559616 bytes total
switch(standby)# exit   (to return to the active supervisor)

Step 10. If you need more space on the standby supervisor module bootflash on a Cisco MDS 9500 Series switch, delete unnecessary files to make space available.

switch(standby)# del bootflash: m9500-sf2ek9-kickstart-mz.6.2.1.bin
switch(standby)# del bootflash: m9500-sf2ek9-mz.6.2.1.bin

Step 11. Copy the Cisco MDS NX-OS kickstart and system images to the active supervisor module bootflash using FTP or TFTP.

switch# copy tftp://tftpserver.cisco.com/MDS/m9500-sf2ek9-kickstart-mz.6.2.5.bin bootflash:m9500-sf2ek9-kickstart-mz.6.2.5.bin
switch# copy tftp://tftpserver.cisco.com/MDS/m9500-sf2ek9-mz.6.2.5.bin bootflash:m9500-sf2ek9-mz.6.2.5.bin

Step 12. Verify that the switch is running the required software version by issuing the show version command.

Step 13. Perform the upgrade by issuing the install all command. The install all process verifies all the images before installation and detects incompatibilities, and it checks configuration compatibility. After information is provided about the impact of the upgrade before it takes place, the script asks whether you want to continue with the upgrade:

switch# install all kickstart m9500-sf2ek9-kickstart-mz.6.2.5.bin system m9500-sf2ek9-mz.6.2.5.bin ssi m9000-ek9ssi-mz.6.2.5.bin
...
Do you want to continue with the installation (y/n)? [n] y

If the input is entered as y, the install will continue. You can display the status of a nondisruptive upgrade by using the show install all status command. The output displays the status only after the switch has rebooted with the new image. All actions preceding the reboot are not captured in this output, because when you enter the install all command using a Telnet session, the session is disconnected when the switch reboots. When you can reconnect to the switch through a Telnet session, the upgrade might already be complete, in which case the output will display the status of the upgrade.

Downgrading Cisco NX-OS Release on an MDS 9500 Series Switch

During any firmware downgrade, use the console connection. Be aware that if you are downgrading through the management interface, you must have a working connection to both supervisors, because this process causes a switchover and the current standby supervisor will be active after the downgrade. For MDS switches with only one supervisor module, Steps 7 and 8 will not be executed. In this section we discuss firmware downgrade steps for an MDS director switch with dual supervisor modules. When downgrading any switch in the Cisco MDS 9000 family, avoid using the reload command; use the install all command to downgrade the switch and handle configuration conversions. You should first read the Release Notes and follow the steps for specific NX-OS downgrade instructions. Here we outline the major steps for the downgrade:

Step 1. Verify that the system image files for the downgrade are present on the active supervisor module bootflash with the dir bootflash: command.

switch# dir bootflash:

Step 2. If the software image file is not present, download it from an FTP or TFTP server to the active supervisor module bootflash. (Refer to Steps 5 and 6 in the upgrade section.)

switch# copy tftp://tftpserver.cisco.com/MDS/m9700-sf3ek9-kickstart-mz.6.2.9.bin bootflash:m9700-sf3ek9-kickstart-mz.6.2.9.bin
switch# copy tftp://tftpserver.cisco.com/MDS/m9700-sf3ek9-mz.6.2.9.bin bootflash:m9700-sf3ek9-mz.6.2.9.bin

Step 3. Ensure that the required space is available on the active supervisor. If you need more space on the active supervisor module bootflash, use the delete command to remove unnecessary files. (Refer to Steps 7 and 8 in the upgrade section.) If you need more space on the standby supervisor module bootflash, delete unnecessary files to make space available.

Step 4. Issue the show incompatibility system command (see Example 9-7) to determine whether you need to disable any features not supported by the earlier release.

Example 9-7 show incompatibility system Output on MDS 9500 Switch

switch# show incompatibility system bootflash:m9500-sf2ek9-mz.5.2.1.S74.bin
The following configurations on active are incompatible with the system image
1) Service : port-channel
Capability : CAP_FEATURE_AUTO_CREATED_41_PORT_CHANNEL
Description : auto create enabled ports or auto created port-channels are present
Capability requirement : STRICT
Disable command :
1. Disable autocreate on interfaces (no channel-group auto).
2. Convert autocreated port channels to be persistent (port-channel 1 persistent).

Step 5. Disable any features that are incompatible with the downgrade system image.

switch# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
switch(config)# interface fcip 31
switch(config-if)# no channel-group auto
switch(config-if)# end
switch# port-channel 127 persistent
switch#

Step 6. Save the configuration using the copy running-config startup-config command.

switch# copy running-config startup-config

Step 7. Issue the install all command to downgrade the software.

switch# install all kickstart bootflash:m9500-sf2ek9-kickstart-mz.5.2.1.S74.bin system bootflash:m9500-sf2ek9-mz.5.2.1.S74.bin

Step 8. Verify the status of the modules on the switch using the show module command.

Configuring Interfaces

The main function of a switch is to relay frames from one data link to another. To relay the frames, the characteristics of the interfaces through which the frames are received and sent must be defined. The configured interfaces can be Fibre Channel interfaces, Gigabit Ethernet interfaces, the management interface (mgmt0), or VSAN interfaces. Each physical Fibre Channel interface in a switch may operate in one of several port modes: E port, F port, FL port, TL port, TE port, SD port, ST port, and B port (see Figure 9-5). Besides these modes, each interface may be configured in auto or Fx port modes. These two modes determine the port type during interface initialization. Interfaces are created in VSAN 1 by default. When a module is removed and replaced with the same type of module, the configuration is retained. If a different type of module is inserted, the original configuration is no longer retained.
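To tie the port modes to the CLI before going further, here is a minimal sketch of bringing up an interface in a fixed port mode (the interface number is arbitrary; auto or Fx would instead let the switch negotiate the type during initialization):

switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport mode F
switch(config-if)# no shutdown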

In Chapter 6, “Data Center Storage Architecture,” we discussed the different Fibre Channel port types.

Figure 9-5 Cisco MDS 9000 Family Switch Port Modes

The interface state depends on the administrative configuration of the interface and the dynamic state of the physical link. In Table 9-4 the interface states are summarized.

Table 9-4 Interface States

Graceful Shutdown

Interfaces on a port are shut down by default (unless you modified the initial configuration). The Cisco NX-OS software implicitly performs a graceful shutdown in response to either of the following actions for interfaces operating in the E port mode:

If you shut down an interface.

If a Cisco NX-OS software application executes a port shutdown as part of its function.

A graceful shutdown ensures that no frames are lost when the interface is shutting down. When a shutdown is triggered either by you or the Cisco NX-OS software, the switches connected to the shutdown link coordinate with each other to ensure that all frames in the ports are safely sent through the link before shutting down. This enhancement reduces the chance of frame loss.

Port Administrative Speeds

By default, the port administrative speed for an interface is automatically calculated by the switch. Autosensing speed is enabled on all 4-Gbps and 8-Gbps switching module interfaces by default. This configuration enables the interfaces to operate at speeds of 1 Gbps, 2 Gbps, or 4 Gbps on the 4-Gbps switching modules, and 8 Gbps on the 8-Gbps switching modules. When autosensing is enabled for an interface operating in dedicated rate mode, 4 Gbps of bandwidth is reserved, even if the port negotiates at an operating speed of 1 Gbps or 2 Gbps.

Frame Encapsulation

The switchport encap eisl command applies only to SD (SPAN destination) port interfaces. This command determines the frame format for all frames transmitted by the interface in SD port mode. If the encapsulation is set to EISL, all outgoing frames are transmitted in the EISL frame format, regardless of the SPAN sources. In SD port mode, an interface functions as a SPAN. An SD port monitors network traffic that passes through a Fibre Channel interface. Monitoring is performed using a standard Fibre Channel analyzer (or a similar Switch Probe) that is attached to the SD port.

Bit Error Thresholds

The bit error rate threshold is used by the switch to detect an increased error rate before performance degradation seriously affects traffic. The bit errors can occur for the following reasons:

Faulty or bad cable
Faulty or bad GBIC or SFP
GBIC or SFP is specified to operate at 1 Gbps but is used at 2 Gbps
GBIC or SFP is specified to operate at 2 Gbps but is used at 4 Gbps
Short haul cable is used for long haul or long haul cable is used for short haul
Momentary sync loss
Loose cable connection at one or both ends
Improper GBIC or SFP connection at one or both ends

A bit error rate threshold is detected when 15 error bursts occur in a 5-minute period. By default, the switch disables the interface when the threshold is reached. You can enter a shutdown and no shutdown command sequence to re-enable the interface.

Local Switching

Local switching can be enabled in Generation 4 modules, which allows traffic to be switched directly with a local crossbar when the traffic is directed from one port to another on the same line card. By using local switching, an extra switching step is avoided, which decreases the latency.

When using local switching, note the following guidelines:

All ports need to be in shared mode, which usually is the default state. To place a port in shared mode, enter the switchport rate-mode shared command.

E ports are not allowed in the module because they must be in dedicated mode.

Note Local switching is not supported on the Cisco MDS 9710 switch.

Dedicated and Shared Rate Modes

Ports on Cisco MDS 9000 family line cards are grouped into port groups that have a fixed amount of bandwidth per port group. The Cisco MDS 9000 family allows the bandwidth of ports in a port group to be allocated based on the requirements of individual ports. Ports in the port group can have bandwidth dedicated to them, or ports can share a pool of bandwidth. For ports that require high sustained bandwidth, such as ISL ports, storage and tape array ports, and ports on high-bandwidth servers, you can dedicate bandwidth to them in a port group by using the switchport rate-mode dedicated command. For other ports, typically servers that access shared storage-array ports (that is, storage ports that have higher fan-out ratios), you can share the bandwidth in a port group by using the switchport rate-mode shared command. When configuring the ports, be sure not to exceed the available bandwidth in a port group. When planning port bandwidth requirements, allocation of the bandwidth within the port group is important. Table 9-5 lists the bandwidth and port group configuration for Fibre Channel modules.

Table 9-5 Bandwidth and Port Group Configurations for Fibre Channel Modules
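The rate mode itself is set per interface; a minimal sketch before we look at a sizing example (interface numbers arbitrary; fc4/1 stands in for an ISL, fc4/2 for a host-facing port):

switch# configure terminal
switch(config)# interface fc4/1
switch(config-if)# switchport rate-mode dedicated
switch(config-if)# exit
switch(config)# interface fc4/2
switch(config-if)# switchport rate-mode shared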

For example, a Cisco MDS 9513 Multilayer Director with a Fabric 3 module installed, and using a 48-port 8-Gbps Advanced Fibre Channel module, has eight port groups of 6 ports each. Each port group has 32.4 Gbps of bandwidth. You cannot configure all 6 ports of a port group at the 8-Gbps dedicated rate, because that would require 48 Gbps of bandwidth and the port group has only 32.4 Gbps of bandwidth available. You can, however, configure all 6 ports in shared rate mode, so that the ports run at 8 Gbps and are oversubscribed at a rate of 1.48:1 (6 ports × 8 Gbps = 48 Gbps/32.4 Gbps). This oversubscription rate is well below the oversubscription rate of the typical storage array port (fan-out ratio) and does not affect performance. In Chapter 6, we discussed the fan-out ratio. Most major disk subsystem vendors provide guidelines as to the recommended fan-out ratio of subsystem client-side ports to server connections. These recommendations are often in the range of 7:1 to 15:1. You can also mix dedicated and shared rate ports in a port group.

Slow Drain Device Detection and Congestion Avoidance

All data traffic between end devices in a SAN fabric is carried by Fibre Channel Class 3. In some cases, the traffic is carried by Class 2 services that use link-level, per-hop-based, and buffer-to-buffer flow control. These classes of service do not support end-to-end flow control. When there are slow devices attached to the fabric, the end devices do not accept the frames at the configured or negotiated rate. The slow devices lead to an ISL credit shortage in the traffic destined for these devices, and they congest the links. The credit shortage affects the unrelated flows in the fabric that use the same ISL link, even though their destination devices do not experience slow drain. This feature provides various enhancements to detect slow drain devices that are causing congestion in the network and also provides a congestion avoidance function.

This feature is focused mainly on the edge ports that are connected to slow drain devices. The goal is to avoid or minimize the frames being stuck in the edge ports due to slow drain devices that are causing ISL blockage. To avoid or minimize the stuck condition, configure a lesser frame timeout for the ports. The lesser frame timeout value helps to alleviate the slow drain condition that affects the fabric by dropping the packets on the edge ports sooner than the time they would actually get timed out (500 ms). This function frees the buffer space in the ISL, which can be used by other unrelated flows that do not experience the slow drain condition. No-credit timeout drops all packets after the slow drain is detected using the configured thresholds.

Note This feature is used mainly for edge ports that are connected to slow edge devices. Even though this feature can be applied to ISLs as well, we recommend that you apply this feature only for edge F ports and retain the default configuration for ISLs as E and TE ports.

Management Interfaces

You can remotely configure the switch through the management interface (mgmt0). To configure a connection on the mgmt0 interface, you must configure either the IP version 4 (IPv4) parameters (IP address, subnet mask, and default gateway) or the IP version 6 (IPv6) parameters so that the switch is reachable. The management port (mgmt0) is autosensing and operates in full-duplex mode at a speed of 10/100/1000 Mbps.
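A minimal IPv4 sketch for mgmt0 (the addresses are placeholders for your management subnet):

switch# configure terminal
switch(config)# interface mgmt 0
switch(config-if)# ip address 10.1.1.10 255.255.255.0
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# ip default-gateway 10.1.1.1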

Autosensing supports both the speed and the duplex mode. On a Supervisor-1 module, the default speed is 100 Mbps and the default duplex mode is auto. On a Supervisor-2 module, the default speed is auto and the default duplex mode is auto.

VSAN Interfaces

VSANs apply to Fibre Channel fabrics and enable you to configure multiple isolated SAN topologies within the same physical infrastructure. You can create an IP interface on top of a VSAN and then use this interface to send frames to this VSAN. To use this feature, you must configure the IP address for this VSAN. VSAN interfaces cannot be created for nonexistent VSANs. Here are the guidelines when creating or deleting a VSAN interface:

Create a VSAN before creating the interface for that VSAN. If a VSAN does not exist, the interface cannot be created.

Create the interface VSAN; it is not created automatically.

If you delete the VSAN, the attached interface is automatically deleted.

Configure each interface only in one VSAN.
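A minimal sketch that follows these guidelines, creating the VSAN first and then its IP interface (the VSAN number and address are illustrative):

switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 10
switch(config-vsan-db)# exit
switch(config)# interface vsan 10
switch(config-if)# ip address 192.168.10.1 255.255.255.0
switch(config-if)# no shutdown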


All the NX-OS CLI commands for interface configuration and verification are summarized in Table 9-6.

Table 9-6 Summary of NX-OS CLI Commands for Interface Configuration and Verification

Verify Initiator and Target Fabric Login

In Chapters 6 and 7 we discussed a theoretical explanation of Virtual SAN (VSAN), zoning, Fibre Channel concepts, Fibre Channel services, and SAN topologies. In this section we discuss the configuration and verification steps to build a Fibre Channel SAN topology. In this chapter we build up the following sample SAN topology together to practice what we have learned so far (see Figure 9-6).

Figure 9-6 Our Sample SAN Topology Diagram

VSAN Configuration

You can achieve higher security and greater stability in Fibre Channel fabrics by using virtual SANs (VSANs) on Cisco MDS 9000 family switches, Cisco Nexus 5000, Nexus 6000, and Nexus 7000 Series switches, and UCS Fabric Interconnects. With VSANs you can create multiple logical SANs over a common physical infrastructure. Each VSAN can contain up to 239 switches and has an independent address space that allows identical Fibre Channel IDs (FC IDs) to be used simultaneously in different VSANs. VSANs provide isolation among devices that are physically connected to the same fabric.

VSANs have the following attributes:

VSAN ID: The VSAN ID identifies the VSAN as the default VSAN (VSAN 1), user-defined VSANs (VSAN 2 to 4093), and the isolated VSAN (VSAN 4094).

State: The administrative state of a VSAN can be configured to an active (default) or

suspended state. The active state of a VSAN indicates that the VSAN is configured and enabled. By enabling a VSAN, you activate the services for that VSAN. The suspended state of a VSAN indicates that the VSAN is configured but not enabled. All ports in a suspended VSAN are disabled. By suspending a VSAN, you can preconfigure all the VSAN parameters for the whole fabric and activate the VSAN immediately. Use this state to deactivate a VSAN without losing the VSAN’s configuration. A VSAN is in the operational state if the VSAN is active and at least one port is up. This state indicates that traffic can pass through this VSAN. This state cannot be configured.

VSAN name: This text string identifies the VSAN for management purposes. The name can be from 1 to 32 characters long, and it must be unique across all VSANs. By default, the VSAN name is a concatenation of VSAN and a four-digit string representing the VSAN ID. For example, the default name for VSAN 3 is VSAN0003.

Load balancing attributes: These attributes indicate the use of the source-destination ID (src-dst-id) or the originator exchange OX ID (src-dst-ox-id, the default) for load balancing path selection.

Interop mode: Interoperability enables the products of multiple vendors to interact with each other. Fibre Channel standards guide vendors toward common external Fibre Channel interfaces. Each vendor has a regular mode and an equivalent interoperability mode, which specifically turns off advanced or proprietary features and provides the product with a more amiable standards-compliant implementation. Cisco NX-OS software supports the following four interop modes:

Mode 1: Standards-based interop mode that requires all other vendors in the fabric to be in interop mode

Mode 2: Brocade native mode (Core PID 0)

Mode 3: Brocade native mode (Core PID 1)

Mode 4: McData native mode

Port VSAN Membership

Port VSAN membership on the switch is assigned on a port-by-port basis. By default, each port belongs to the default VSAN. You can assign VSAN membership to ports using one of two methods:

Statically: By assigning VSANs to ports.

Dynamically: By assigning VSANs based on the device WWN. This method is referred to as dynamic port VSAN membership (DPVM). (DPVM is explained in Chapter 6.)

In Example 9-8, we will be configuring and verifying VSANs on our sample topology that was shown in Figure 9-6.

Example 9-8 VSAN Creation and Verification on MDS-9222i Switch

! Initial configuration and entering the VSAN configuration database
MDS-9222i# conf t
Enter configuration commands, one per line. End with CNTL/Z.
MDS-9222i(config)# vsan database
! Creating VSAN 47
MDS-9222i(config-vsan-db)# vsan 47
! Configuring VSAN name for VSAN 47
MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A
! Other attributes of a VSAN configuration can be displayed with a question mark
! after the VSAN name
MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A ?
  <CR>
  interop        Interoperability mode value
  loadbalancing  Configure loadbalancing scheme
  suspend        Suspend vsan
! Load balancing attributes of a VSAN
MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A loadbalancing ?
  src-dst-id     Src-id/dst-id for loadbalancing
  src-dst-ox-id  Ox-id/src-id/dst-id for loadbalancing(Default)
! To exit configuration mode issue command end
MDS-9222i(config-vsan-db)# end
! Verifying the VSAN state; the operational state is down since no FC port is up
MDS-9222i# show vsan 47
vsan 47 information
         name:VSAN-A  state:active
         interoperability mode:default
         loadbalancing:src-id/dst-id/oxid
         operational state:down
MDS-9222i# conf t
MDS-9222i(config)# vsan database
! Interfaces fc1/1 and fc1/3 are included in VSAN 47
MDS-9222i(config-vsan-db)# vsan 47 interface fc1/1, fc1/3
! Change the state of the fc interfaces
MDS-9222i(config-vsan-db)# interface fc1/1, fc1/3
MDS-9222i(config-if)# no shut
! Verifying status of the interfaces
MDS-9222i(config-if)# show int fc1/1, fc1/3 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk                Mode  Speed   Channel
                        Mode                       (Gbps)
-------------------------------------------------------------------------------
fc1/1      47    FX     on     up      swl  F     2       --
fc1/3      47    FX     on     up      swl  FL    4       --

Note, as highlighted, the Admin Mode is FX (the default), which allows the port to come up in either F mode or FL mode:

F mode: Point-to-point mode for server connections and most storage connections (storage ports in F mode have only one target logging in to that port, associated with the storage array port).

FL mode: Arbitrated loop mode, for storage connections that allow multiple storage targets to log in to a single switch port. We have done this just so that we can see multiple storage targets for zoning.

The last steps for the VSAN configuration will be verifying the VSAN membership and usage (see Example 9-9).

Example 9-9 Verification Steps for VSAN Membership and Usage

! Verifying the VSAN membership
MDS-9222i# show vsan membership
vsan 1 interfaces:
    fc1/2   fc1/4   fc1/5   fc1/6
    fc1/7   fc1/8   fc1/9   fc1/10
    fc1/11  fc1/12  fc1/13  fc1/14
    fc1/15  fc1/16  fc1/17  fc1/18
vsan 47 interfaces:
    fc1/1   fc1/3
vsan 4079(evfp_isolated_vsan) interfaces:
vsan 4094(isolated_vsan) interfaces:
! Verifying VSAN usage
MDS-9222i# show vsan usage
2 vsans configured
configured vsans: 1, 47
vsans available for configuration: 2-46, 48-4093

In a Fibre Channel fabric, each host or disk requires a Fibre Channel ID. Use the show flogi command to verify whether a storage device is displayed in the FLOGI table, as in the next section. If the required device is displayed in the FLOGI table, the fabric login is successful. Examine the FLOGI database on a switch that is directly connected to the host HBA and connected ports.

The name server functionality maintains a database containing the attributes for all hosts and storage devices in each VSAN. Name servers allow a database entry to be modified by a device that originally registered the information. The name server permits an Nx port to register attributes during a PLOGI (to the name server) and to obtain attributes of other hosts. These attributes are deregistered when the Nx port logs out either explicitly or implicitly. The name server stores name entries for all hosts in the FCNS database. One instance of the name server process runs on each switch. In a multiswitch fabric configuration, the name server instances running on each switch share information in a distributed database.

In Example 9-10, we continue verifying FCNS and FLOGI databases on our sample SAN topology.

Example 9-10 Verifying FCNS and FLOGI Databases on MDS-9222i Switch

! Verifying fc interface status
MDS-9222i(config-if)# show interface fc 1/1
fc1/1 is up
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:01:00:05:9b:77:0d:c0
    Admin port mode is FX, trunk mode is on
    snmp link state traps are enabled
    Port mode is F, FCID is 0x410100

    Port vsan is 47
    Speed is 2 Gbps
    Rate mode is shared
    Transmit B2B Credit is 3
    Receive B2B Credit is 16
    Receive data field Size is 2112
    Beacon is turned off
    5 minutes input rate 88 bits/sec, 11 bytes/sec, 0 frames/sec
    5 minutes output rate 32 bits/sec, 4 bytes/sec, 0 frames/sec
      18 frames input, 1396 bytes
        0 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      18 frames output, 3632 bytes
        0 discards, 0 errors
      0 input OLS, 0 LRR, 0 NOS, 2 loop inits
      1 output OLS, 1 LRR, 0 NOS, 1 loop inits
      16 receive B2B credit remaining
      3 transmit B2B credit remaining
      3 low priority transmit B2B credit remaining
    Interface last changed at Mon Sep 22 09:55:15 2014

! The port mode is F mode, and an FCID is automatically assigned to the FC
! initiator; the VSAN 47 domain manager has assigned FCID 0x410100 to the device
! during FLOGI. The buffer-to-buffer credits are negotiated during the port login.
! Verifying flogi database
! FC initiator with the pWWN 21:00:00:e0:8b:0a:f0:54 is attached to fc interface
! 1/1; the initiator is connected to fc1/1, and the domain ID is x41
MDS-9222i(config-if)# show flogi database
--------------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc1/1      47    0x410100  21:00:00:e0:8b:0a:f0:54  20:00:00:e0:8b:0a:f0:54
! JBOD FC target contains 2 NL_Ports (disks) and is connected to domain x41
fc1/3      47    0x4100e8  c5:9a:00:06:2b:04:fa:ce  c5:9a:00:06:2b:01:00:04
fc1/3      47    0x4100ef  c5:9a:00:06:2b:01:01:04  c5:9a:00:06:2b:01:00:04
Total number of flogi = 3.

! Verifying name server database
! The assigned FCID 0x410100 and device WWN information are registered in the
! FCNS database and attain fabric-wide visibility.
MDS-9222i(config-if)# show fcns database
VSAN 47:
--------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x4100e8  NL    c5:9a:00:06:2b:04:fa:ce            scsi-fcp:target
0x4100ef  NL    c5:9a:00:06:2b:01:01:04            scsi-fcp:target
0x410100  N     21:00:00:e0:8b:0a:f0:54  (Qlogic)  scsi-fcp:init
Total number of entries = 3

MDS-9222i(config-if)# show fcns database detail
------------------------
VSAN:47     FCID:0x4100e8
------------------------
port-wwn (vendor)           :c5:9a:00:06:2b:04:fa:ce
node-wwn                    :c5:9a:00:06:2b:01:00:04
class                       :3
node-ip-addr                :0.0.0.0
ipa                         :ff ff ff ff ff ff ff ff
fc4-types:fc4_features      :scsi-fcp:target
symbolic-port-name          :SANBlaze V4.0 Port

symbolic-node-name          :
port-type                   :NL
port-ip-addr                :0.0.0.0
fabric-port-wwn             :20:03:00:05:9b:77:0d:c0
hard-addr                   :0x000000
permanent-port-wwn (vendor) :00:00:00:00:00:00:00:00
connected interface         :fc1/3
switch name (IP address)    :MDS-9222i (10.2.8.44)
------------------------
VSAN:47     FCID:0x4100ef
------------------------
port-wwn (vendor)           :c5:9a:00:06:2b:01:01:04
node-wwn                    :c5:9a:00:06:2b:01:00:04
class                       :3
node-ip-addr                :0.0.0.0
ipa                         :ff ff ff ff ff ff ff ff
fc4-types:fc4_features      :scsi-fcp:target
symbolic-port-name          :SANBlaze V4.0 Port
symbolic-node-name          :
port-type                   :NL
port-ip-addr                :0.0.0.0
fabric-port-wwn             :20:03:00:05:9b:77:0d:c0
hard-addr                   :0x000000
permanent-port-wwn (vendor) :00:00:00:00:00:00:00:00
connected interface         :fc1/3
switch name (IP address)    :MDS-9222i (10.2.8.44)
------------------------
VSAN:47     FCID:0x410100
------------------------
! Initiator connected to fc1/1
! The vendor of the host bus adapter (HBA) is Qlogic, which we can see next to
! the pWWN
port-wwn (vendor)           :21:00:00:e0:8b:0a:f0:54 (Qlogic)
node-wwn                    :20:00:00:e0:8b:0a:f0:54
class                       :3
node-ip-addr                :0.0.0.0
ipa                         :ff ff ff ff ff ff ff ff
fc4-types:fc4_features      :scsi-fcp:init
symbolic-port-name          :
symbolic-node-name          :QLA2342 FW:v3.03.25 DVR:v9.1.0.6
port-type                   :N
port-ip-addr                :0.0.0.0
fabric-port-wwn             :20:01:00:05:9b:77:0d:c0
hard-addr                   :0x000000
permanent-port-wwn (vendor) :21:00:00:e0:8b:0a:f0:54 (Qlogic)
connected interface         :fc1/1
switch name (IP address)    :MDS-9222i (10.2.8.44)
Total number of entries = 3

The DCNM Device Manager for MDS 9222i displays the ports in two tabs: one is a Summary, and the other is a GUI display of the physical ports in the Device tab (see Figure 9-7).

Figure 9-7 Cisco DCNM Device Manager for MDS 9222i Switch

In Table 9-7 we summarize the NX-OS CLI commands that are related to VSAN configuration and verification.

Table 9-7 Summary of NX-OS CLI Commands for VSAN Configuration and Verification

Verify Fibre Channel Zoning on a Cisco MDS 9000 Series Switch

Zoning is used to provide security and to restrict access between initiators and targets on the SAN fabric. With many types of servers and storage devices on the network, security is crucial. For example, if a host were to gain access to a disk that was being used by another host, potentially with a different operating system, the data on this disk could become corrupted. To avoid any compromise of

In Example 9-11. You cannot make changes to an existing zone set and activate it if the full zone set is lost or is not propagated. to one of the following areas: To the full zone set To a remote location (using FTP. Members in a zone can access one another. Members in different zones cannot access one another. A zone can be a member of more than one zone set. or TFTP) The active zone set is not part of the full zone set. SCP. can see which targets. This map dictates which devices. thus reducing the risk of data loss. volatile: directory. namely hosts. You can copy an active zone set from the bootflash: directory. A zone set consists of one or more zones: A zone set can be activated or deactivated as a single entity across all switches in the fabric. Only one zone set can be activated at any time. SFTP. port-number fcalias Add fcalias to zone fcid Add FCID member to zone fwwn Add Fabric Port WWN member to zone interface Add member based on interface ip-address Add IP address member to zone pwwn Add Port WWN member to zone symbolic-nodename Add Symbolic Node Name member to zone ! Inserting a disk (FC target) into zoneA MDS-9222i(config-zone)# member pwwn c5:9a:00:06:2b:01:01:04 ! Inserting pWWN of an initiator which is connected to fc 1/1 into zoneA MDS-9222i(config-zone)# member pwwn 21:00:00:e0:8b:0a:f0:54 ! Verifying zone zoneA if the correct members are configured or not MDS-9222i(config-zone)# show zone name zoneA zone name zoneA vsan 47 pwwn c5:9a:00:06:2b:01:01:04 pwwn 21:00:00:e0:8b:0a:f0:54 ! Creating zone set in VSAN 47 MDS-9222i(config-zone)# zoneset name zonesetA vsan 47 .crucial data within the SAN. zoning allows the user to overlay a security map. Zone Set Distribution: You can distribute full zone sets using one of two methods: one-time distribution or full zone set distribution. or slot0. A zone consists of multiple zone members. we will continue building our sample topology with zoning configuration. Zone Set Duplication: You can make a copy of a zone set and then edit it without altering the original zone set. Example 9-11 Configuring Zoning on MDS-9222i Switch Click here to view code image ! Creating zone in VSAN 47 MDS-9222i(config-if)# zone name zoneA vsan 47 MDS-9222i(config-zone)# member ? ! zone member can be configured with one of the below options device-alias Add device-alias member to zone domain-id Add member based on domain-id.

! Inserting zoneA into zonesetA
MDS-9222i(config-zoneset)# member zoneA
MDS-9222i(config-zoneset)# exit
! Activating the zone set in VSAN 47
MDS-9222i(config)# zoneset activate name zonesetA vsan 47
Zoneset activation initiated. check zone status
MDS-9222i(config)# show zoneset brief
zoneset name zonesetA vsan 47
  zone zoneA
! Verifying the active zone set status in VSAN 47
MDS-9222i(config)# show zoneset active
zoneset name zonesetA vsan 47
  zone name zoneA vsan 47
  * fcid 0x4100ef [pwwn c5:9a:00:06:2b:01:01:04]
  * fcid 0x410100 [pwwn 21:00:00:e0:8b:0a:f0:54]

The asterisks (*) indicate that a device is currently logged in to the fabric (online). A missing asterisk may indicate an offline device or an incorrectly configured zone, possibly a mistyped pWWN.

Setting up multiple switches independently and then connecting them into one fabric can be done in DCNM. However, DCNM is really best at viewing and operating on existing fabrics (that is, just about everything except making initial connections of multiple switches into one fabric). In this chapter we are just doing the CLI versions of these commands. On the DCNM SAN active zone set, members can be displayed via Logical Domains (on the left pane). Select VSAN 47, and then select ZonesetA (see Figure 9-8).

Figure 9-8 Cisco DCNM Zone Members for VSAN 47

About Smart Zoning

Smart zoning implements hard zoning of large zones with fewer hardware resources than was previously required. The traditional zoning method allows each device in a zone to communicate with every other device in the zone, and the administrator is required to manage the individual zones according to the zone configuration guidelines. Smart zoning eliminates the need to create single-initiator, single-target zones. By analyzing device-type information in the FCNS database, useful combinations can be implemented at the hardware level by the Cisco MDS NX-OS software, and the combinations that are not used are ignored. For example, initiator-target pairs are configured, but not initiator-initiator. The device type information of each device in a smart zone is automatically populated from the Fibre Channel Name Server (FCNS) database as host, target, or both. This information allows more

efficient utilization of switch hardware by identifying initiator-target pairs and configuring those only in hardware. In the event of a special situation, such as a disk controller that needs to communicate with another disk controller, smart zoning defaults can be overridden by the administrator to allow complete control. Smart zoning can be enabled at the VSAN level but can also be disabled at the zone level.

About LUN Zoning and Assigning LUNs to Storage Subsystems

Logical unit number (LUN) zoning is a feature specific to switches in the Cisco MDS 9000 family. A storage device can have multiple LUNs behind it. If the device port is part of a zone, a member of the zone can access any LUN in the device. With LUN zoning, you can restrict access to specific LUNs associated with a device. When LUN 0 is not included within a zone, control traffic to LUN 0 (for example, REPORT_LUNS, INQUIRY) is supported, but data traffic to LUN 0 (for example, READ, WRITE) is denied. LUN masking and mapping restricts server access to specific LUNs. If LUN masking is enabled on a storage subsystem, and if you want to perform additional LUN zoning in a Cisco MDS 9000 family switch, the LUN number for each host bus adapter (HBA) from the storage subsystem should be configured with the LUN-based zone procedure.

About Enhanced Zoning

The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Both standards support the basic zoning functionalities explained in Chapter 6 and the enhanced zoning functionalities described in this section. The flow of enhanced zoning differs from that of basic zoning. For one thing, all zone and zone set modifications for enhanced zoning include activation. Second, at the time enhanced zoning is enabled, a VSAN-wide lock, as per standards requirements, is implicitly obtained for enhanced zoning. Last, changes are either committed or aborted with enhanced zoning. Enhanced zoning uses the same techniques and tools as basic zoning, with a few added commands.

By default, the enhanced zoning feature is disabled in all switches in the Cisco MDS 9000 family; the default zoning is basic mode. The default zoning mode can be selected during the initial setup. Enhanced zoning can be turned on per VSAN as long as each switch within that VSAN is enhanced zoning capable. Enhanced zoning needs to be enabled on only one switch within the VSAN in the existing SAN topology; the command will be propagated to the other switches within the VSAN automatically. To enable enhanced zoning in a VSAN, follow these steps:

Step 1. switch# config t (enter the configuration mode)

Step 2. switch(config)# zone mode enhanced vsan 222
Zoning mode is changed to enhanced for VSAN 222.

Step 3. Check the zone status to verify the change. The zone status can be verified with the show zone status command.

To disable the enhanced zoning mode for a specific VSAN:

switch(config)# no zone mode enhanced vsan 222

The zone mode will display enhanced for enhanced mode, and the default zone distribution policy is full for

enhanced mode. In basic zone mode, the default zone distribution policy is active.

switch# show zone status vsan 222
VSAN: 222 default-zone: deny distribute: full Interop: default
    mode: enhanced merge-control: allow
    session: none
    hard-zoning: enabled broadcast: enabled
    smart-zoning: disabled
    rscn-format: fabric-address
Default zone:
    qos: none broadcast: disabled ronly: disabled
Full Zoning Database :
    DB size: 44 bytes
    Zonesets:0 Zones:0 Aliases: 0 Attribute-groups: 1
Active Zoning Database :
    Database Not Available

The rules for enabling enhanced zoning are as follows:

Enhanced zoning needs to be enabled on only one switch in the VSAN of an existing converged SAN fabric. Enabling it on multiple switches within the same VSAN can result in failure to activate properly.

The switch that is chosen to initiate the migration to enhanced zoning will distribute its full zone database to the other switches in the VSAN, thereby overwriting the destination switches’ full zone set databases.

Enabling enhanced zoning does not perform a zone set activation.

In Table 9-8, we discuss the advantages of using enhanced zoning.

Table 9-8 Advantages of Enhanced Zoning
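One practical consequence of enhanced zoning's session model is the explicit commit: edits are staged under the VSAN-wide lock until you commit them. A minimal sketch (the zone name zoneB is hypothetical; the pWWN and VSAN number are reused from the earlier examples):

switch# configure terminal
switch(config)# zone name zoneB vsan 222
switch(config-zone)# member pwwn 21:00:00:e0:8b:0a:f0:54
switch(config-zone)# exit
switch(config)# zone commit vsan 222

Until the commit, the other switches in the VSAN still hold the unmodified database; aborting the session instead discards the pending changes.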

Let’s return to our sample SAN topology. So far we have managed to build only separate SAN islands (see Figure 9-9).

Figure 9-9 SAN Topology with CLI Configuration

On MDS 9222i we configured two VSANs: VSAN 47 and VSAN 48. Both VSANs are active, and their operational states are up. (See Example 9-12.)

Example 9-12 Verifying Initiator and Target Logins

MDS-9222i(config-if)# show interface fc1/1, fc1/3, fc1/4, fc1/12 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk                Mode  Speed   Channel
                        Mode                       (Gbps)
-------------------------------------------------------------------------------
fc1/1      47    FX     on     up      swl  F     2       --
fc1/3      47    FX     on     up      swl  FL    4       --
fc1/4      48    FX     on     up      swl  FL    4       --

MDS-9222i(config-if)# show fcns database
VSAN 47:
--------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x4100e8  NL    c5:9a:00:06:2b:04:fa:ce            scsi-fcp:target
0x4100ef  NL    c5:9a:00:06:2b:01:01:04            scsi-fcp:target
0x410100  N     21:00:00:e0:8b:0a:f0:54  (Qlogic)  scsi-fcp:init
Total number of entries = 3
VSAN 48:

--------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x4100e8  NL    c5:9b:00:06:2b:14:fa:ce            scsi-fcp:target
0x4100ef  NL    c5:9b:00:06:2b:02:01:04            scsi-fcp:target
Total number of entries = 2

MDS-9148(config-if)# show flogi database
--------------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc1/1      48    0x420000  21:01:00:e0:8b:2a:f0:54  20:01:00:e0:8b:2a:f0:54
Total number of flogi = 1.

MDS-9148(config-if)# show fcns database
VSAN 48:
--------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x420000  N     21:01:00:e0:8b:2a:f0:54  (Qlogic)  scsi-fcp:init
Total number of entries = 1

The zoning features that require the Enterprise package license are LUN zoning, read-only zones, zone-based traffic prioritizing, and zone-based FC QoS. Table 9-9 summarizes the NX-OS CLI commands for zone configuration and verification.


. Trunking is supported on E ports and F ports. and HBAs. over the same physical link. third-party core switches. NPV switches.Table 9-9 Summary of NX-OS CLI Commands for Zone Configuration and Verification VSAN Trunking and Setting Up ISL Port VSAN trunking is a feature specific to switches in the Cisco MDS 9000 Family. Trunking enables interconnect ports to transmit and receive frames in more than one VSAN. Figure 9-10 represents the possible trunking scenarios in a SAN with MDS core switches.

Figure 9-10 Trunking F Ports

The trunking feature includes the following key concepts:

TE port: If trunk mode is enabled in an E port and that port becomes operational as a trunking E port, it is referred to as a TE port.

TF port: If trunk mode is enabled in an F port (see the link 2 in Figure 9-10) and that port becomes operational as a trunking F port, it is referred to as a TF port.

TN port: If trunk mode is enabled (not currently supported) in an N port (see the link 1b in Figure 9-10) and that port becomes operational as a trunking N port, it is referred to as a TN port.

TNP port: If trunk mode is enabled in an NP port (see the link 2 in Figure 9-10) and that port becomes operational as a trunking NP port, it is referred to as a TNP port.

TF PortChannel: If trunk mode is enabled in an F port channel (see the link 4 in Figure 9-10) and that port channel becomes operational as a trunking F port channel, it is referred to as a TF port channel. Cisco Port Trunking Protocol (PTP) is used to carry tagged frames.

TF-TN port link: A single link can be established to connect an F port to an HBA to carry tagged frames (see the links 1a and 1b in Figure 9-10) using Exchange Virtual Fabrics Protocol (EVFP).

TF-TNP port link: A single link can be established to connect a TF port to a TNP port using the PTP protocol to carry tagged frames (see the link 2 in Figure 9-10). PTP is used because PTP also supports trunking port channels.

A Fibre Channel VSAN is called Virtual Fabric and uses a VF_ID in place of the VSAN ID. By default, the VF_ID is 1 for all ports. When an N port supports trunking, a pWWN is defined for each VSAN and called a logical pWWN. In the case of MDS core switches, the pWWNs for which the N port requests additional FC_IDs are called virtual pWWNs. A server can reach multiple VSANs through a TF port without inter-VSAN routing (IVR).

By default, the trunking protocol is enabled on E ports and disabled on F ports. If the trunking protocol is disabled on a switch, no port on that switch can apply new trunk configurations. Existing trunk configurations are not affected: the TE port continues to function in trunk mode, but only supports traffic in VSANs that it negotiated with previously (when the trunking protocol was enabled). Also, other switches that are directly connected to this switch are similarly affected on the connected interfaces. In some cases, you may need to merge traffic from different port VSANs across a nontrunking ISL. If so, disable the trunking protocol. In Table 9-10, we summarize the supported trunking protocols.

Table 9-10 Supported Trunking Protocols

Trunk Modes

By default, trunk mode is enabled on all Fibre Channel interfaces (Mode: E, F, FL, Fx, ST, and SD) on non-NPV switches. On NPV switches, trunk mode is disabled by default. You can configure trunk mode as on (enabled), off (disabled), or auto (automatic). The trunk mode configuration at the two ends of an ISL, between two switches, determines the trunking state of the link and the port modes at both ends. The preferred configuration on the Cisco MDS 9000 family switches is one side of the trunk set to auto and the other side set to on. When connected to a third-party switch, the trunk mode configuration on E ports has no effect; the ISL is always in a trunking disabled state. In the case of F ports, if the third-party core switch ACCs the physical FLOGI with the EVFP bit configured, then the EVFP protocol enables trunking on the link.

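As a brief, illustrative sketch of the behavior just described (the interface number and VSAN range here are assumptions, not taken from the book's figures), trunking is managed on MDS NX-OS along these lines:

switch(config)# trunk protocol enable            ! enabled by default for E ports
switch(config)# interface fc1/5
switch(config-if)# switchport trunk mode auto    ! on | off | auto
switch(config-if)# switchport trunk allowed vsan 47-48
switch(config-if)# end
switch# show interface fc1/5 brief               ! verify trunk mode and oper state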
In Table 9-11, we summarize the trunk mode status between switches.

Table 9-11 Trunk Mode Status Between Switches

Each Fibre Channel interface has an associated trunk-allowed VSAN list. In TE-port mode, frames are transmitted and received in one or more VSANs specified in this list. By default, the VSAN range (1 through 4093) is included in the trunk-allowed list. The trunk-allowed VSANs configured for TE, TF, and TNP links are used by the trunking protocol to determine the allowed active VSANs in which frames can be received or transmitted.

There are a couple of important guidelines and limitations for the trunking feature: F ports support trunking in Fx mode; MDS does not enforce the uniqueness of logical pWWNs across VSANs; and if a trunking enabled E port is connected to a third-party switch, the trunking protocol ensures seamless operation as an E port.

Port Channel Modes

Port channels aggregate multiple physical ISLs into one logical link with higher bandwidth and port resiliency for both Fibre Channel and FICON traffic. With this feature, up to 16 expansion ports (E ports) or trunking E ports (TE ports) can be bundled into a PortChannel. ISL ports can reside on any switching module, and they do not need a designated master port. If a port or a switching module fails, the port channel continues to function properly without requiring fabric reconfiguration.

You can configure each port channel with a channel group mode parameter to determine the port channel protocol behavior for all member ports in this channel group. The possible values for a channel group mode are as follows:

ON (default): The member ports only operate as part of a port channel or remain inactive. In this mode, the port channel protocol is not initiated. However, if a port channel protocol frame is received from a peer port, the software indicates its nonnegotiable status.

ACTIVE: The member ports initiate port channel protocol negotiation with the peer port(s) regardless of the channel group mode of the peer port. If the peer port, while configured in a channel group, does not support the PortChannel protocol, or responds with a nonnegotiable status, it will default to the ON mode behavior. The ACTIVE port channel mode allows automatic recovery without explicitly enabling and disabling the port channel member ports at either end.

On our sample SAN topology, we will connect the two SAN islands by creating a port channel using ports 5 and 6 on both MDS switches. Figure 9-11 portrays the minimum required configuration steps to set up an ISL port channel with mode active between two MDS switches. In Example 9-13, we show the verification and configuration steps on the NX-OS CLI for port channel configuration on our sample SAN topology.

Figure 9-11 Setting Up ISL Port on Sample SAN Topology

On the 9222i, ports need to be put into dedicated rate mode to be ISLs (or be set to mode auto). Ports in auto mode will automatically come up as ISLs (E ports) after they are connected to another switch. Ports are also in trunk on mode, so when the ISL comes up it will automatically carry all VSANs in common between the switches. On the MDS 9222i there are three port-groups available, and each port-group consists of six Fibre Channel ports. Groups of six ports have a total of 12.8 Gbps of bandwidth. When we set these two ports to dedicated rate mode, they will each reserve a dedicated 4 Gbps of bandwidth (leaving 4.8 Gbps shared between the other group members). The show port-resources module command shows the default and configured interface rate mode.

Example 9-13 Verifying Port Channel on Our Sample SAN Topology

Click here to view code image

! Before changing the ports to dedicated mode, the interface Admin Mode is FX
MDS-9222i# show interface fc1/5, fc1/6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     FX     on     down    swl  --    --      --
fc1/6      1     FX     on     down    swl  --    --      --
! On 9222i ports need to be put into dedicated rate mode in order to be
! configured as ISL (or be set to mode auto).
MDS-9222i(config)# show port-resources module 1
Module 1
  Available dedicated buffers are 3987
 Port-Group 1
  Total bandwidth is 12.8 Gbps

  Total shared bandwidth is 12.8 Gbps
  Allocated dedicated bandwidth is 0.0 Gbps
  --------------------------------------------------------------------
  Interfaces in the Port-Group       B2B Credit    Bandwidth    Rate Mode
                                     Buffers       (Gbps)
  --------------------------------------------------------------------
  fc1/1                              16            4.0          shared
  fc1/2                              16            4.0          shared
  fc1/3                              16            4.0          shared
  fc1/4                              16            4.0          shared
  fc1/5                              16            4.0          shared
  fc1/6                              16            4.0          shared
<snip>
! Changing the rate-mode to dedicated
MDS-9222i(config)# inte fc1/5-6
MDS-9222i(config-if)# switchport rate-mode dedicated
MDS-9222i(config-if)# switchport mode auto
MDS-9222i(config-if)# sh interface fc1/5-6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     auto   on     down    swl  --    --      --
fc1/6      1     auto   on     down    swl  --    --      --
! Interface fc 1/5-6 are configured for dedicated rate mode
MDS-9222i(config-if)# show port-resources module 1
Module 1
  Available dedicated buffers are 3543
 Port-Group 1
  Total bandwidth is 12.8 Gbps
  Total shared bandwidth is 4.8 Gbps
  Allocated dedicated bandwidth is 8.0 Gbps
  --------------------------------------------------------------------
  Interfaces in the Port-Group       B2B Credit    Bandwidth    Rate Mode
                                     Buffers       (Gbps)
  --------------------------------------------------------------------
  fc1/1                              16            4.0          shared
  fc1/2                              16            4.0          shared
  fc1/3                              16            4.0          shared
  fc1/4                              16            4.0          shared
  fc1/5                              250           4.0          dedicated
  fc1/6                              250           4.0          dedicated
<snip>
! Configuring the port-channel interface
MDS-9222i(config-if)# channel-group 56
fc1/5 fc1/6 added to port-channel 56 and disabled
please do the same operation on the switch at the other end of the port-channel,
then do "no shutdown" at both ends to bring it up
MDS-9222i(config-if)# no shut
MDS-9222i(config-if)# int port-channel 56
MDS-9222i(config-if)# channel mode active
MDS-9222i(config-if)# sh interface fc1/5-6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status    SFP  Oper  Oper    Port
                 Mode   Trunk                 Mode  Speed   Channel
                        Mode                        (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     auto   on     trunking  swl  TE    4       56
fc1/6      1     auto   on     trunking  swl  TE    4       56

! Verifying the port-channel interface
MDS-9222i(config-if)# show int port-channel 56
port-channel 56 is trunking
    Hardware is Fibre Channel
    Port WWN is 24:38:00:05:9b:77:0d:c0
    Admin port mode is auto, trunk mode is on
    snmp link state traps are enabled
    Port mode is TE
    Port vsan is 1
    Speed is 8 Gbps
    Trunk vsans (admin allowed and active) (1,47-48)
    Trunk vsans (up)                       (1,48)
    Trunk vsans (isolated)                 (47)
    Trunk vsans (initializing)             ()
    5 minutes input rate 1024 bits/sec, 128 bytes/sec, 1 frames/sec
    5 minutes output rate 856 bits/sec, 107 bytes/sec, 1 frames/sec
      363 frames input, 36000 bytes
        0 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      361 frames output, 29776 bytes
        0 discards, 0 errors
      2 input OLS, 4 LRR, 2 NOS, 22 loop inits
      2 output OLS, 0 LRR, 0 NOS, 2 loop inits
    Member[1] : fc1/5
    Member[2] : fc1/6
! VSAN 1 and VSAN 48 are up, but VSAN 47 is in isolated status because VSAN 47
! is not configured on the MDS 9148 switch; MDS 9148 is only carrying SAN-B
! traffic.
! During ISL bring-up there will be a principal switch election; for VSAN 1,
! the switch with the lowest priority won the election, which is MDS 9222i in
! our sample SAN topology.
MDS-9222i# show fcdomain vsan 1
The local switch is the Principal Switch.
Local switch run time information:
        State: Stable
        Local switch WWN:    20:01:00:05:9b:77:0d:c1
        Running fabric name: 20:01:00:05:9b:77:0d:c1
        Running priority: 2
        Current domain ID: 0x51(81)
Local switch configuration information:
        State: Enabled
        FCID persistence: Enabled
        Auto-reconfiguration: Disabled
        Contiguous-allocation: Disabled
        Configured fabric name: 20:01:00:05:30:00:28:df
        Optimize Mode: Disabled
        Configured priority: 128
        Configured domain ID: 0x00(0) (preferred)
Principal switch run time information:
        Running priority: 2
Interface           Role          RCF-reject
----------------    ------------  ------------
port-channel 56     Downstream    Disabled
----------------    ------------  ------------
MDS-9148(config-if)# show fcdomain vsan 1
The local switch is a Subordinated Switch.
Local switch run time information:
        State: Stable
        Local switch WWN:    20:01:54:7f:ee:05:07:89
        Running fabric name: 20:01:00:05:9b:77:0d:c1
        Running priority: 128

        Current domain ID: 0x52(82)
Local switch configuration information:
        State: Enabled
        FCID persistence: Enabled
        Auto-reconfiguration: Disabled
        Contiguous-allocation: Disabled
        Configured fabric name: 20:01:00:05:30:00:28:df
        Optimize Mode: Disabled
        Configured priority: 128
        Configured domain ID: 0x00(0) (preferred)
Principal switch run time information:
        Running priority: 2
Interface           Role          RCF-reject
----------------    ------------  ------------
port-channel 56     Upstream      Disabled
----------------    ------------  ------------
MDS-9148(config-if)# show port-channel database
port-channel 56
    Administrative channel mode is active
    Operational channel mode is active
    Last membership update succeeded
    First operational port is fc1/6
    2 ports in total, 2 ports up
    Ports:   fc1/5    [up]
             fc1/6    [up] *
! The asterisk (*) indicates which interface became online first.
MDS-9222i# show port-channel database
port-channel 56
    Administrative channel mode is active
    Operational channel mode is active
    Last membership update succeeded
    First operational port is fc1/6
    2 ports in total, 2 ports up
    Ports:   fc1/5    [up]
             fc1/6    [up] *

To bring up the rest of our sample SAN topology, we need to provide access for our initiator, which is connected to fc 1/1 of MDS 9148 via VSAN 48, to a LUN that resides on a storage array connected to fc 1/14 of MDS 9222i (VSAN 48). Zoning through the switches allows traffic on the switches to connect only chosen sets of server and storage end devices on a particular VSAN. (Zoning on separate VSANs is completely independent.) Usually, members of a zone are identified using the end device's World Wide Port Name (WWPN), although they could be identified using switch physical port numbers. Figure 9-12 portrays the necessary zoning configuration.
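The CLI in Figure 9-12 is rendered as an image; as a rough sketch of what such a zoning configuration typically looks like on MDS NX-OS (the zone and zone set names here are invented, and the pWWNs are taken from the earlier show flogi and show fcns output), the SAN-B side might be configured as follows:

MDS-9222i(config)# zone name Z_HOST_STORAGE vsan 48
MDS-9222i(config-zone)# member pwwn 21:01:00:e0:8b:2a:f0:54   ! initiator
MDS-9222i(config-zone)# member pwwn c5:9b:00:06:2b:14:fa:ce   ! target
MDS-9222i(config-zone)# exit
MDS-9222i(config)# zoneset name ZS_SAN_B vsan 48
MDS-9222i(config-zoneset)# member Z_HOST_STORAGE
MDS-9222i(config-zoneset)# exit
MDS-9222i(config)# zoneset activate name ZS_SAN_B vsan 48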

Figure 9-12 Zoning Configuration on Our Sample SAN Topology

Our sample SAN topology is up with the preceding zone configuration. The FC initiator, via its dual host bus adapters (HBA), can access the same LUNs from SAN A (VSAN 47) and SAN B (VSAN 48). In Table 9-12, we summarize the NX-OS CLI commands for trunking and port channels.


Table 9-12 Summary of NX-OS CLI Commands for Trunking and Port Channels

Reference List

Cisco MDS 9700 Series Hardware installation guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/hw/9700/mds_9700_hig/overview

Cisco MDS 9148s Multilayer Switch Hardware installation guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/hw/9148S/install/guide/9148S_h

Cisco MDS 9250i Multiservice Fabric Switch Hardware installation guide is available at http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/hw/9250i/installation/guide/connect.html

Cisco MDS NX-OS install and upgrade guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-nx-os-san-os-software/products-installation-guides-list.html

Cisco MDS Install and upgrade guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-series-multilayer-switches/products-installation-guides-list.html

Cisco MDS 9000 NX-OS Software upgrade and downgrade guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/6_2/upgrade/guides/nxos/upgrade.html#pgfId-419386

Cisco MDS 9000 NX-OS and SAN OS software configuration guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-nx-os-san-os-software/products-installation-and-configuration-guides-list.html

. noted with the key topic icon in the outer margin of the page.software/products-installation-and-configuration-guides-list. Table 9-13 lists a reference for these key topics and the page numbers on which each is found.html Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter.

Table 9-13 Key Topics for Chapter 9

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

virtual storage-area networks (VSAN)
zoning
fport-channel-trunk
enhanced small form-factor pluggable (SFP+)
small form-factor pluggable (SFP)
10 gigabit small form factor pluggable (XFP)
host bus adapter (HBA)
Exchange Peer Parameters (EPP)
Exchange Virtual Fabrics Protocol (EVFP)
Extensible Markup Language (XML)
common information model (CIM)
Power On Auto Provisioning (POAP)
switch worldwide name (sWWN)
World Wide Port Name (pWWN)
slow drain
basic input/output system (BIOS)

random-access memory (RAM)
Port Trunking Protocol (PTP)
fan-out ratio
fan-in ratio

Part II Review

Keep track of your part review progress with the checklist shown in Table PII-1. Details on each task follow the table.

Table PII-1 Part II Review Checklist

Repeat All DIKTA Questions

For this task, answer the "Do I Know This Already?" questions again for the chapters in this part of the book, using the PCPT software. Refer to "How to View Only DIKTA Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you DIKTA questions for this part only.

Answer Part Review Questions

For this task, answer the Part Review questions for this part of the book, using the PCPT software. Refer to "How to View Only Part Review Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you Part Review questions for this part only.

Review Key Topics

Browse back through the chapters and look for the Key Topic icons. If you do not remember some details, take the time to reread those topics, which will help you remember the concepts and technologies of these key topics as you progress with the book.

Self-Assessment Questionnaire

Each chapter of this book introduces several key learning items. This might seem overwhelming to you initially, but as you read through each new chapter you will become more familiar and comfortable with them. This exercise lists a set of key self-assessment questions, which are relevant to the particular chapter, shown in the following list. For this self-assessment exercise in this book, you should try to answer these self-assessment questionnaires to the best of your ability, without looking back at the chapters or your notes. There will be no written answers to these questions in this book; if you're not certain, you should go back and refresh on the key topics.

Think of each of these open self-assessment questions as being about the chapters you've read. For example, number 6 is about Data Center Storage Architecture, number 7 is about the Cisco MDS Product Family, and so on. You might or might not find the answer explicitly, but the chapter is structured to give you the knowledge and understanding to be able to discuss and explain these self-assessment questions. Note your answer and try to reference key words in the answer from the relevant chapter. For example, Block Virtualization would apply to Chapter 8.

Part III: Data Center Unified Fabric

Exam topics covered in Part III:

Define Challenges in Today's Data Center Networks
Unified Fabric Principles
Inter-Data Center Unified Fabric
Components and Operation of the IEEE Data Center Bridging Standards
FCoE Principles
FCoE Connectivity Options
FCoE and Fabric Extenders
FCoE and Server Adapter Virtualization
Network Topologies and Deployment Scenarios for Multihop Unified Fabric
FCoE and Cisco FabricPath

Chapter 10: Unified Fabric Overview
Chapter 11: Data Center Bridging and FCoE
Chapter 12: Multihop Unified Fabric

Chapter 10. Unified Fabric Overview

This chapter covers the following exam topics:

Define Challenges in Today's Data Center Networks
Unified Fabric Principles
Inter-Data Center Unified Fabric

Unified Fabric is a holistic network architecture comprising switching, storage, security, and services infrastructure components, which are designed to operate in physical, virtual, and cloud environments. It uniquely integrates with servers, storage, and orchestration platforms for more efficient operations and greater scalability. By unifying and consolidating infrastructure components, Cisco Unified Fabric delivers flexibility and consistent architecture across a diverse set of environments. Unified Fabric supports new concepts, such as IEEE Data Center Bridging enhancements, which improve the robustness of Ethernet and its responsiveness. In this chapter we discuss how Unified Fabric promises to be the architecture that fulfills the requirements for the next generation of Ethernet networks in the data center.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. Table 10-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 10-1 "Do I Know This Already?" Section-to-Question Mapping

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter.

1. What is Unified Fabric?

a. It consolidates multiple types of traffic.
b. It leverages the Ethernet network.
c. It leverages IEEE data center bridging.
d. It supports virtual, physical, and cloud environments.

2. Which of the following is one challenge of today's data center networks?
a. Administrators have a hard time doing unified cabling.
b. Multilayer design is not optimal for east-west network traffic flows.
c. Too low power consumption is not good for the electric companies.
d. Today's networks as they are can accommodate any application demand for network and storage.

3. What is one possible way to solve the problem of different application demands in today's data centers?
a. Deploy separate application-specific dedicated environments.
b. Rewrite applications to conform to the network on which they are running.
c. Allow the network to discover application requirements.

4. What are the two types of traffic that are most commonly consolidated with Unified Fabric?
a. Voice traffic
b. SAN storage traffic
c. Infiniband traffic
d. Network traffic

5. What is vPath?
a. It is a feature of Cisco Nexus 7000 series switches to steer the traffic toward physical service nodes.
b. It is a feature that allows virtual switches to get their basic configuration during boot.
c. It is a virtual path selected between two virtual machines in the data center.
d. It is a feature of Cisco Nexus 1000v virtual switches to steer the traffic toward virtual service nodes.

6. What Cisco virtual product is used to secure VM-to-VM communication within the same tenant space?
a. Virtual Supervisor Module
b. Virtual Adaptive Security Appliance
c. Virtual Security Gateway
d. Virtual Application Security

7. How does vPC differ from traditional spanning tree topologies?
a. vPC blocks redundant paths toward the STP root bridge.
b. vPC leverages IS-IS protocol to construct network topology.
c. vPC topology causes loss of 50% of bandwidth because of blocked ports.
d. Spanning tree is automatically disabled when you enable vPC.
e. vPC leverages port-channels and has no blocked ports.

8. Which parts of the infrastructure can DCNM products manage?
a. LAN
b. SAN
c. Wireless
d. vPC

9. What happens when OTV receives a unicast frame and the destination MAC address is not in the MAC address table?
a. The frame is dropped.
b. The frame is flooded through the overlay to all remote sites.
c. The frame is dropped and the ICMP message is sent back to the source.
d. The frame is turned into multicast and sent only to some of the remote sites.

10. True or false? In LISP, ITR stands for Ingress Transformation Router.
a. True
b. False

Foundation Topics

Challenges of Today's Data Center Networks

Ethernet is by far the predominant choice for a single, converged fabric, which can simultaneously support multiple traffic types. Over the years it has withstood the test of time against other technologies trying to displace it as the popular option for data center network environments. It is well understood by network engineers and developers worldwide. Ethernet enjoys a broad base of engineering and operational expertise resulting from its ubiquitous presence in the data center environments for many years.

In the converged infrastructure of the Unified Fabric. Now application performance is measured along with performance of the network. which is a predominant use case behind the Unified Fabric deployment. One method to solve this is to deploy multiple. Packet drops can have severely adverse effects on overall application performance. All these require the network infrastructure to be flexible and intelligent to discover and accommodate the changes in network traffic dynamics. which means that network available bandwidth and latency numbers are both important. Client-to-server and server-to-server transactions usually involve short and bursty in nature transmissions. growing application demands require additional capabilities from the underlying networking infrastructure. It is not uncommon for the data centers of today to deploy an Ethernet network for IP traffic and a Fibre Channel storage-area network (SAN) for block-based storage connectivity. This fundamental shift in traffic volumes and traffic patterns has resulted in greater reliance on the network. Different protocols respond differently to packet drops. as well as services such as Remote Direct Memory Access (RDMA). The demands for high performance and cluster computing. even though this is not a popular technology in the majority of the modern . The best example is the FCoE protocol. applicationspecific dedicated environments. Networks deployed in high-frequency trading environments have to accommodate ultra-low latency packet forwarding where traditional rules of typical Enterprise application deployments do not quite suffice. differences also exist in the capability of applications to handle packet drops. Ultimately. both of the behaviors must be accommodated. FCoE protocol requires a lossless underlying fabric where packet loss is not tolerated. are at times addressed by leveraging the low latency InfiniBand network. some accept packet drops as part of a normal network behavior.Figure 10-1 Multilayer and Spine-Leaf Clos Architecture Increased volumes of data led to significant network and storage traffic growth across the infrastructure. whereas others absolutely must have guaranteed no-drop packet delivery. Different types of traffic carry different characteristics. Although bandwidth and latency matter. whereas most of the server-to-storage traffic consists of long-lasting and steady flows. separate.

This highly segmented deployment results in high capital expenditures (CAPEX). and network services onto consistent networking infrastructure across physical. Convergence of Network and Storage Convergence properties of the Cisco Unified Fabric architecture are the melding of the storage-area network (SAN) with a local-area network (LAN) delivered on top of enhanced Ethernet fabric. and lower operating costs.data centers. and high demands for management. Figure 10-2 Typical Distinct Data Center Networks When these types of separate networks are evaluated against the capabilities of the Ethernet technology. we can see that Ethernet has a promise of meeting the majority of the requirements for all the network types. Some organizations even build separate Ethernet networks for IP-based storage. This is where Cisco Unified Fabric serves as a key building block for ubiquitous infrastructure to deploy applications on top. Figure 10-2 depicts a traditional concept of distinct networks purposely built for specific requirements. high operational expenses (OPEX). virtual. and so on. data networking. All these combined create an opportunity for consolidation. which breaks organizational silos while reducing technological complexity. and cloud environments. It enables optimized resource utilization. . It provides foundational connectivity service. faster application rollout. scalability. high availability. Cisco Unified Fabric Principles Cisco Unified Fabric is a multifaceted approach where architectures. unifying storage. greater application performance. intelligence. resulting in fewer management points with shorter bring up time. features. while greatly increasing network business value. Cisco Unified Fabric creates a platform of a true multiprotocol environment on a single unified network infrastructure. Such Ethernet fabric would be operationally simpler to provision and operate. and capabilities are combined with concepts of convergence. The following sections examine in more detail the key properties of the Unified Fabric network. security.

Although end-to-end converged Unified Fabric provides the utmost advantages to the organizations, it also allows the flexibility of integrating into existing non-converged infrastructure, and by that provides investment protection for the current network equipment and technologies. Both the Cisco MDS 9000 family of storage switches and the Cisco Nexus 5000/7000 family of Unified Fabric network switches have features that facilitate network convergence. At the time of writing this book, Cisco Nexus 5000 and Cisco MDS 9000 family switches can provide full, bidirectional bridging between FCoE and traditional Fibre Channel fabric. For instance, servers attached through traditional Fibre Channel HBAs can access newer storage arrays connected through FCoE ports. Similarly, FCoE servers can connect to older Fibre Channel-only storage arrays. Figure 10-3 depicts bridging between FCoE and traditional Fibre Channel fabric, where FCoE servers connect to Fibre Channel storage arrays leveraging the principle behind bidirectional bridging on Cisco Nexus 5000 series switches.

Figure 10-3 FCoE and Fibre Channel Bidirectional Bridging

The Cisco Nexus 5000 series switches can leverage either native Fibre Channel ports or the unified ports to bridge between FCoE and the traditional Fibre Channel environment, allowing FCoE servers to connect to older Fibre Channel-only storage arrays. At the time of writing this book, unified ports can be software configured to operate in 1G/10G Ethernet, 10G FCoE, or 1/2/4/8G Fibre Channel mode. Figure 10-4 depicts unified port operation.

Figure 10-4 Unified Port Operation

Note
Unified port functionality is not supported on Cisco Nexus 5010 and 5020 switches. At the time of writing this book, Cisco Nexus 7000 series switches do not support unified port functionality.

With such great flexibility, you can deploy Cisco Nexus Unified Fabric switches initially connected to traditional systems with HBAs and convert to FCoE or other Ethernet- and IP-based storage protocols as the time progresses. Example 10-1 shows a configuration example for defining unified ports as port type Fibre Channel on Cisco Nexus 5500 switches.

Example 10-1 Configuring Unified Port on the Cisco Nexus 5000 Series Switch

Click here to view code image

nexus5500# configure terminal
nexus5500 (config)# slot 1
nexus5500 (config-slot)# port 32 type fc
nexus5500 (config-slot)# copy running-config startup-config
nexus5500 (config-slot)# reload

Consolidation can also translate to very significant savings. For example, a typical server requires at least five connections: two redundant LAN connections, two redundant SAN connections, and an out-of-band management connection. If you add on top separate connections for backup, clustering, and so on, for each server in the data center, you can quickly see how it translates into a significant investment.

With consolidation in place, all these connections can be replaced by the converged network adapters. From the perspective of a large size data center, this cabling reduction translates to fewer ports, fewer switches, and fewer network layers, which contribute to lower over-subscription and eventually high application performance, proving at least a 2:1 saving in cabling. Fewer cables have implications on cooling by improving the airflow in the data center, and fewer switches and adapters also mean less power consumption. Figure 10-5 depicts an example of how network adapter consolidation can greatly reduce the number of server adapters used in the Unified Fabric topology.

Figure 10-5 Server Adapter Consolidation Using Converged Network Adapters

If network adapter virtualization in the form of Cisco Adapter-FEX or Cisco VM-FEX technologies is leveraged, it can yield even further significant savings. Along with reduction in cabling, consolidation also carries operational improvements. A converged data center network provides all the functionality of existing disparate network and storage environments. For Ethernet, this includes support for multicast and broadcast traffic, VLANs, link aggregation, equal cost multipathing, and so on. For Fibre Channel, this includes provisioning of all the Fibre Channel services, such as zoning and name servers, inter-VSAN routing (IVR), and support for virtual SANs (VSAN). All switches composing the Unified Fabric environment run the same Cisco NX-OS switch operating system. Such consistency allows IT staff to leverage operational practices of the isolated environments and apply them to the converged environment of the Unified Fabric.

One possible concern for delivering converged infrastructure for network and storage traffic is that the Ethernet protocol does not provide guaranteed packet delivery. With FCoE, Fibre Channel frames are encapsulated in outer Ethernet headers for hop-by-hop delivery between initiators and targets where lossless behavior is expected. To enforce Fibre Channel characteristics at each hop along the path between initiators and targets, intermediate Unified Fabric Ethernet switches operate in either FCoE-NPV or FCoE switching mode. This sometimes is referred to as FCoE Dense Mode. Figure 10-6 depicts the logical concept behind having all FCoE intermediate switches operating in either FCoE-NPV or FCoE switching mode.

Figure 10-6 FCoE Dense Mode

FCoE switching mode implies running the FCoE Fibre Channel Forwarder, the FCF. Running FCoE requires additional capabilities added to the Ethernet Unified Fabric network. IEEE defines a set of standards augmenting standard Ethernet protocol behavior. This set of standards is collectively called Data Center Bridging (DCB). These additional standards make sure that Unified Ethernet fabric provides lossless, no-drop, in-order delivery of packets end-to-end, and they are required for successful FCoE operation. DCB increases overall reliability of the data center networks, which is increasingly necessary not only for the storage traffic, but also for other types of multiprotocol communication across the Unified Fabric. Cisco Nexus 5000/7000 series Unified Fabric switches, as well as the MDS 9500/9700 series storage switches with FCoE line cards, support these standards for enhanced Ethernet.

At the time of writing this book, Cisco does not support FCoE topologies where intermediate Ethernet switches do not have FCoE awareness, sometimes referred to as FCoE Sparse Mode. The only partial exception is Dynamic FCoE, where FCoE traffic is forwarded over Cisco FabricPath fabric core links. With the Dynamic FCoE feature, Cisco FabricPath Spine Nodes do not act as FCoE Fibre Channel Forwarders; however, they still perform certain FCoE functions—for example, making sure frame hashing does not result in out-of-order packets.

Convergence with Cisco Unified Fabric also extends to management. Cisco Data Center Network Manager (DCNM), Cisco's primary management tool, is optimized to work with both the Cisco MDS 9000 series storage switches and the Cisco Nexus 5000/7000 series Unified Fabric network switches. Cisco DCNM allows managing network and storage environments as a single entity, as well as monitoring and automating common network administration tasks. Cisco DCNM provides a secure environment, which can manage a fully converged network. For smaller size deployments, the Cisco MDS 9250i Multiservice Fabric Switch offers support for the Unified Fabric needs of running Fibre Channel, FCoE, iSCSI, FICON, and Fibre Channel over IP. SAN and LAN convergence does not have to happen overnight, and Cisco Unified Fabric architecture is flexible to allow gradual, and many times non-disruptive, migration from disparate isolated environments to unified network infrastructure.
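To make the FCF discussion more concrete, the following is a minimal, hypothetical sketch of enabling FCoE on a Cisco Nexus 5500 switch and binding a virtual Fibre Channel interface to an Ethernet port; the VLAN, VSAN, and interface numbers are illustrative assumptions:

nexus5500(config)# feature fcoe
nexus5500(config)# vsan database
nexus5500(config-vsan-db)# vsan 48
nexus5500(config)# vlan 1048
nexus5500(config-vlan)# fcoe vsan 48            ! map the FCoE VLAN to the VSAN
nexus5500(config)# interface vfc11
nexus5500(config-if)# bind interface ethernet 1/11
nexus5500(config-if)# no shutdown
nexus5500(config)# vsan database
nexus5500(config-vsan-db)# vsan 48 interface vfc11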

In environments where storage connectivity has less stringent requirements, Small Computer System Interface over IP (iSCSI) can be considered as an alternative to FCoE. iSCSI allows encapsulating original Fibre Channel payload in the TCP/IP packets. Contrary to FCoE, which relies on IEEE DCB standards for efficient operation, iSCSI leverages the TCP protocol to deliver lossless, no-drop, in-order delivery. In this case, the underlying network infrastructure cannot differentiate between iSCSI and any other TCP/IP application, and it simply delivers IP packets between source and destination. iSCSI does not support native Fibre Channel services and tools, such as zoning and name servers. iSCSI can also introduce increased CPU cycles resulting from TCP/IP overhead on the server. TCP/IP offload can be leveraged on the network adapter to help with higher CPU utilization, but that increases the cost of network adapters, which are used for this solution.

For environments where storage arrays lack iSCSI functionality, organizations can leverage an iSCSI gateway device, such as a Cisco MDS 9250i storage switch, to terminate the iSCSI TCP/IP session, extract the payload, and reencapsulate it in the FCoE protocol for transmission over the Unified Fabric. Figure 10-7 depicts iSCSI deployment options with and without the gateway service.

Figure 10-7 iSCSI Deployment Options

Traditionally, server network interface cards (NIC) and host bus adapters (HBA) are used to provide connectivity into isolated LAN and SAN environments. With convergence of the two, NICs and HBAs make way for a new type of adapter called the converged network adapter (CNA). Migration from NICs and HBAs to the CNA can occur either during the regular server hardware refresh cycles, or it can be carried out by proactively swapping them during planned maintenance windows. Even though DCB is not mandatory for iSCSI operation, some NICs and CNAs connected to Cisco Nexus 5000 series switches can be configured to accept configuration values sent by the switches leveraging the Data Center Bridging Exchange (DCBX) protocol, part of DCB.

In that case, DCBX negotiates configuration and settings between the switch and the network adapter by leveraging type-length-value (TLV) and sub-TLV fields. This allows distributing configuration values, such as Class of Service (CoS) markings, to the connected network adapters in a scalable manner. Potential use of other DCB standards coded in TLV format adds even further flexibility to the iSCSI deployment by allowing the underlying Unified Fabric infrastructure to separate iSCSI storage traffic from the rest of IP traffic. Example 10-2 shows the relevant command-line interface commands to enable a no-drop class for iSCSI traffic on Nexus 5000 series switches for guaranteed traffic delivery.

Example 10-2 Configuring No-Drop Behavior for iSCSI Traffic

Click here to view code image

nexus5000# configure terminal
nexus5000(config)# class-map type qos match-all c1
nexus5000(config-cmap-qos)# match protocol iscsi
nexus5000(config-cmap-qos)# match cos 6
nexus5000(config-cmap-qos)# exit
nexus5000(config)# policy-map type qos c1
nexus5000(config-pmap-qos)# class c1
nexus5000(config-pmap-c-qos)# set qos-group 2
nexus5000(config-pmap-c-qos)# exit
nexus5000(config)# class-map type network-qos c1
nexus5000(config-cmap-nq)# match qos-group 2
nexus5000(config-cmap-nq)# exit
nexus5000(config)# policy-map type network-qos p1
nexus5000(config-pmap-nq)# class type network-qos c1
nexus5000(config-pmap-c-nq)# pause no-drop

Although iSCSI continues to be a popular option in many storage applications, particularly with small and medium-sized businesses (SMB), it is not quite replacing the wide adoption of the Fibre Channel protocol, either traditional or FCoE, for critical storage environments, which are built around traditional Fibre Channel fabrics.

Scalability and Growth

Scalability is defined as the capability to grow the environment based on changing needs. Unified Fabric is often characterized as having multidimensional scalability, where individual device performance, as well as the overall fabric and system scalability, is achieved without compromising manageability and cost. The capability to grow the network in a manner that causes the least amount of disruption to the ongoing operation is a crucial aspect of scalability.

Scalability parameters can also stretch beyond the boundary of a single data center, allowing a true geographic span. Unified Fabric leveraging connectivity speeds of 10G, 40G, and 100G offers substantially higher useable bandwidth for the server and storage connectivity, which contributes to fewer ports, fewer switches, and subsequently fewer network tiers and fewer management points, allowing overall larger scale. Each network deployment has unique characteristics, which cannot be addressed by a rigid architecture.

Unified Fabric caters to the needs of ever expanding scalability by providing flexible architecture. Cisco's entire data center switching portfolio adheres to the principles of supporting incremental growth and demands for scalability, while also following the investment protection philosophy.

For example, Cisco Nexus 7000 series switches offer expandable pay-as-you-grow scalable switch fabric architecture and non-oversubscribed behavior for high-speed Ethernet fabrics. Through an evolution of the Fabric Modules and the I/O Modules, Cisco Nexus 7000 series switches offer highly scalable M-series I/O modules for large capacities and feature richness, and F-series I/O modules for lower latency, higher port capacity, and several differentiating features. Figure 10-8 depicts the Cisco Nexus 7000 series switch fabric architecture.

Figure 10-8 Cisco Nexus 7000 Series Switch Fabric Architecture

Note
Cisco Nexus 7004 switch has no fabric modules, and the line cards are directly interconnected through the switch backplane.

Cisco Nexus 5000 series switches offer a highly scalable access layer platform with 10G and 40G Ethernet capabilities and FCoE support for Unified Fabric convergence. Figure 10-9 depicts Cisco Nexus 5000 series switch fabric architecture.

Figure 10-9 Cisco Nexus 5000 Series Switch Fabric Architecture

The most recent addition to the Cisco Nexus platforms is the Cisco Nexus 9000 family of switches. These switches provide fixed and modular switch architecture with high port density and the capability to run Application Centric Infrastructure. Cisco Nexus 9000 series switches are based on a combination of Cisco's own ASICs and Broadcom ASICs. Cisco Nexus 9500 series modular switches have no backplane; the architecture does not include fabric modules, but instead uses a unified port controller and Unified Fabric crossbar. Figure 10-10 depicts Cisco Nexus 9500 series modular switch architecture.

Figure 10-10 Cisco Nexus 9500 Series Modular Switch Architecture

On the storage switch side, Cisco MDS switches offer a highly available platform for storage-area networks with expandable and resilient architecture supporting increased Fibre Channel speeds and services.

The Cisco NX-OS operating system, which powers data center network and storage switching platforms, runs modular software architecture based on an underlying Linux kernel. Figure 10-11 depicts NX-OS software architecture components.

Figure 10-11 NX-OS Software Architecture Components

Cisco NX-OS leverages an innovative approach in software development by offering stateful process restart, component patching, persistent storage space (PSS) for process runtime information, in-service software upgrade (ISSU), and so on, coupled with Role-Based Access Control (RBAC) and a rich set of remote APIs.

Cisco NX-OS software enjoys frequent updates to keep up with the latest feature demands.

In the past, traditional growth implied adding capacity in the form of switch ports or uplink bandwidth; this is no longer the case. Modern data centers observe different trends in packet forwarding, where traditional client/server applications driving primarily north-south traffic patterns are replaced by a new breed of applications with predominantly east-west traffic patterns. This shift has significant implications on the data center switching fabric designs. In addition to the changing network traffic patterns, many applications reside on virtual machines with inherent characteristics of workload mobility and any workload anywhere. This generates additional requirements on the data center switching fabric side around extensibility of Layer 2 connectivity, fabric scale, fabric capacities, and service insertion methodologies.

Legacy spanning tree–driven topologies lack the fast convergence required for the stringent demands of modern applications. Spanning tree Layer 2 loop-free topologies do not allow backup links and, as such, cause "waste" of 50% of provisioned uplink bandwidth on access switches by blocking redundant uplinks. Figure 10-12 depicts spanning tree Layer 2 operational logic.

Figure 10-12 Spanning Tree Layer 2 Operational Logic

The evolution of spanning tree topologies led to the use of multichassis link-aggregation solutions, where Etherchannels or port channels enable all-forwarding uplink topologies. Cisco implements virtual port channel (vPC) technology to help alleviate deficiencies of traditional spanning tree–driven topologies, allowing faster convergence and full utilization of the access switch provisioned uplink bandwidth. vPC still runs Spanning Tree Protocol underneath as a safeguard mechanism to break Layer 2 bridging loops should they occur. Figure 10-13 depicts vPC operational logic.

Figure 10-13 vPC Operational Logic
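As a hedged, minimal sketch of the vPC concept (the domain ID, addresses, and port channel numbers here are invented), a pair of Nexus switches would be configured roughly as follows on each peer:

nexus(config)# feature vpc
nexus(config)# vpc domain 10
nexus(config-vpc-domain)# peer-keepalive destination 192.0.2.2 source 192.0.2.1
nexus(config)# interface port-channel 1
nexus(config-if)# switchport mode trunk
nexus(config-if)# vpc peer-link                 ! inter-switch peer link
nexus(config)# interface port-channel 20
nexus(config-if)# switchport mode trunk
nexus(config-if)# vpc 20                        ! same vPC number on both peers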

Can we do better? Yes, by using IS-IS routing protocol instead of Spanning Tree Protocol. Even with vPC topologies, the scalability of solutions requiring large Layer 2 domains in support of virtual machine mobility can be limited. Cisco FabricPath takes a step further by completely eliminating the dependency on the Spanning Tree Protocol in constructing loop-free Layer 2 topology. Cisco FabricPath combines the simplicity of Layer 2 with the proven scalability of Layer 3. It leverages Spine-Leaf Clos architecture to construct data center switching fabric with predictable end-to-end latency and vast east-west bisectional bandwidth. It employs the principle of conversational learning to further scale switch hardware forwarding resources. It also allows equal cost multipathing for both unicast and multicast traffic to maximize network resource utilization. Figure 10-14 depicts Cisco FabricPath topology and forwarding.

Figure 10-14 Cisco FabricPath Topology and Forwarding
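A minimal, illustrative FabricPath configuration on NX-OS might look like the following; the switch ID, VLAN, and interface are assumptions, not taken from the text:

nexus(config)# install feature-set fabricpath
nexus(config)# feature-set fabricpath
nexus(config)# fabricpath switch-id 11          ! unique per switch
nexus(config)# vlan 100
nexus(config-vlan)# mode fabricpath             ! carry this VLAN over the fabric
nexus(config)# interface ethernet 1/1
nexus(config-if)# switchport mode fabricpath    ! core-facing fabric link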

By taking data center switching from PODs to fabric, Cisco FabricPath breaks the boundaries and enables large-scale virtual machine mobility domains. Use of Cisco Nexus 2000 series Fabric Extenders in conjunction with Cisco FabricPath architecture enables us to build even larger-scale switching fabrics without increasing the administrative domain.

Principles of Spine-Leaf topologies can also be applied with other encapsulation technologies, such as virtual extensible LANs, or VXLAN. Similar to Cisco FabricPath, VXLAN leverages the underlying Layer 3 topology to extend Layer 2 services on top (MAC-in-IP), providing similar advantages to Cisco FabricPath while enjoying broader switching platform support. Figure 10-15 depicts VXLAN topology and forwarding.

Figure 10-15 VXLAN Topology and Forwarding

VXLAN also assists in breaking through the boundary of ~4,000 VLANs supported by the IEEE 802.1Q standard widely deployed between switches. Figure 10-16 depicts the VXLAN packet format, where you can see the 24-bit VXLAN ID field, sometimes also known as the Virtual Network Identifier (VNID) field. The 24-bit field potentially enables you to create more than 16 million virtual segments. This is an important consideration with multitenant Unified Fabric, which requires a large number of virtual segments.

Figure 10-16 VXLAN Packet Fields
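As a sketch of how a VLAN is mapped to a VXLAN segment on NX-OS (flood-and-learn mode with a multicast group; all numbers are illustrative assumptions):

nexus(config)# feature nv overlay
nexus(config)# feature vn-segment-vlan-based
nexus(config)# vlan 100
nexus(config-vlan)# vn-segment 10100            ! VLAN 100 -> VNID 10100
nexus(config)# interface nve1
nexus(config-if-nve)# source-interface loopback0
nexus(config-if-nve)# member vni 10100 mcast-group 239.1.1.1
nexus(config-if-nve)# no shutdown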

VXLAN fabric allows building large-span virtual machine mobility domains with significant scale and bandwidth.

Security and Intelligence

Network intelligence coupled with security is an essential component of Cisco Unified Fabric. As multiple types of traffic are consolidated on top of the Unified Fabric infrastructure, it raises legitimate security concerns.

The use of consistent network policies is essential for successful Unified Fabric operation. For the Cisco Nexus and Cisco MDS family of switches, the intelligence comes from a common switch operating system, Cisco NX-OS, a common feature set, and intelligent services delivered to applications residing on physical or virtual network infrastructure alike. Cisco Unified Fabric policies around security, performance, differentiated service, high availability, and the like can be consistently set across the environment, allowing you to tune those for special application needs if necessary. Policies are applied across the infrastructure, and most importantly in the virtualized environment, where workloads tend to be rapidly created, removed, and moved around. Policies can be templatized and applied in a consistent, repeatable manner, so that creation, removal, or migration of individual workloads has no impact on the overall environment operation. Policies also greatly reduce the risk of human error. In the past, deployment of applications required a considerable effort of sequentially configuring various physical infrastructure components. Consistent use of policies also enables faster application deployment cycles.

Cisco Unified Fabric contains a complete portfolio of security products that are virtualization aware. These products run in the hypervisor layer to secure virtual workloads inside a single tenant or across tenants. Cisco Unified Fabric also contains a foundation for traffic switching in the hypervisor layer in the form of the Cisco Nexus 1000v product. Cisco Nexus 1000v is a cross-hypervisor virtual software switch that allows delivering consistent network, security, and Layer 4 through Layer 7 policies for the virtual workloads. It operates in the manner similar to a modular chassis, where switch supervisor modules, called Virtual Supervisor Modules (VSM), control the switch line cards, called Virtual Ethernet Modules (VEM). VSMs should be deployed in pairs to provide the needed redundancy. Figure 10-17 depicts Cisco Nexus 1000v components.

Figure 10-17 Cisco Nexus 1000v Components
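For illustration, a basic Cisco Nexus 1000v port profile that the VSM publishes to the Virtual Machine Manager looks roughly like this; the profile name and VLAN are hypothetical:

nexus1000v(config)# port-profile type vethernet WEB-SERVERS
nexus1000v(config-port-prof)# switchport mode access
nexus1000v(config-port-prof)# switchport access vlan 100
nexus1000v(config-port-prof)# vmware port-group   ! expose as a vCenter port group
nexus1000v(config-port-prof)# no shutdown
nexus1000v(config-port-prof)# state enabled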

Cisco Nexus 1000v runs NX-OS software on the VSMs and communicates with Virtual Machine Managers, such as VMware vCenter, for easy single pane of glass operation.

Virtual services deployed as part of the Cisco Unified Fabric solution can secure intra-tenant communication between the virtual machines belonging to the same tenant, or they can secure inter-tenant communication between the virtual machines belonging to different tenants. Cisco Virtual Security Gateway (VSG) is deployed on top of the Cisco Nexus 1000v virtual switch, catering to the case of intra-tenant security. It provides logical isolation and policy enforcement based on both simple networking constructs and more elaborate virtual machine attributes. Figure 10-18 depicts an example of Cisco VSG deployment to secure a three-tier application within a single tenant space.

Figure 10-18 Three-Tier Application Secured by VSG

The VSG product offers a very scalable mode of operation leveraging Cisco vPath technology for traffic steering. With Cisco vPath, the first packet of each flow is redirected to VSG for inspection and policy enforcement. Following inspection, a policy decision is made and programmed into the Cisco Nexus 1000v switch residing in hypervisor kernel space. All the subsequent packets of the flow are switched in a fast-path in the hypervisor kernel, which results in high traffic throughput. Figure 10-19 depicts vPath operation with VSG.

Figure 10-19 vPath Operation with VSG

VSG security policies can be deployed in tandem with switching policies enforced by the Cisco Nexus 1000v. This model contributes to consistent policies across the environment. Example 10-3 shows how VSG is inserted into the Cisco Nexus 1000v port-profile for specific tenant use.

Example 10-3 VSG Port-Profile Configuration

Click here to view code image

nexus1000v# configure terminal
nexus1000v(config)# port-profile host-profile
nexus1000v(config-port-prof)# org root/Tenant-A
nexus1000v(config-port-prof)# vn-service ip 10.10.10.1 vlan 10 profile vnsp-1

Inter-tenant space can be secured by either a virtual firewall, in the form of the virtual Cisco ASA product deployed in the hypervisor layer, or by the physical appliances deployed in the upstream physical network. Physical ASA appliances are many times positioned at the Unified Fabric aggregation layer. In the Spine-Leaf Clos architecture, physical ASAs are connected to the services leaf nodes. It is also common to use two layers of security, with virtual firewalls securing east-west traffic between the virtual machines and physical firewalls securing the north-south traffic in and out of the data center. Figure 10-20 depicts typical Cisco ASA topology in traditional multilayer and Spine-Leaf architectures.

Figure 10-20 Typical Cisco ASA Deployment Topologies

Both physical and virtual ASA together provide a comprehensive set of security services for Cisco Unified Fabric, and they can also provide security services for the physical non-virtualized workloads to create consistent policy across virtual and physical infrastructures.

Another component of intelligent service delivery comes in the form of application acceleration. This is where Cisco Wide Area Application Services (WAAS) offer a solution for application acceleration across WAN. Cisco WAAS offers the following main advantages:

Transparent and standards-based secure application delivery
LAN-like application performance with application-specific protocol acceleration

IT agility through reduction of branch and data center footprint
Enterprisewide deployment scalability and design flexibility
Enhanced end-user experience

Geographically dispersed organizations leverage significant WAN capacities and incur significant operational expenses for maintaining circuits and service levels. Delivering applications over WAN is often accompanied by jitter, packet loss, and bandwidth starvation. If left unaddressed, these factors can adversely impact application performance, influence user productivity, and grow to be business inhibitors. Cisco WAAS reduces consumption of WAN bandwidth, subsequently optimizing application performance across long-haul circuits.

Cisco WAAS technology has two main deployment scenarios, leveraging deployments of WAAS components in various parts of the topology. First, it can be deployed either at the edge of the environment in the case of a branch office, or at the aggregation layer in the case of a data center, where it can be delivered in the form of a physical appliance, a router module, a virtual appliance, or a Cisco IOS software feature. Second, it can be deployed in the hypervisor leveraging Cisco vPath redirection mechanisms. The hypervisor deployment model brings the advantages of application acceleration close to the virtual workloads. In both cases, Cisco WAAS leverages generic network traffic optimization, as well as application-specific acceleration techniques, to counter the adverse impact of delivering applications across WAN.

Cisco WAAS employs TCP flow optimization (TFO) to better manage TCP window size and maximize network bandwidth use in a way superior to common TCP/IP stack behavior on the host. Cisco WAAS leverages context-aware data redundancy elimination (DRE) to reduce traffic load on WAN circuits. Some of the traffic transmitted across the WAN is repetitive and can be locally cached. With DRE, both ends of the WAN optimized connection can signal each other with very small signatures, rather than transmitting the entire data, which causes the content to be served out of the locally maintained cache. Cisco WAAS also leverages adaptive persistent session-based Lempel-Ziv (LZ) compression to shrink the size of network data traveling across.

Cisco WAAS commonly leverages WCCP protocol traffic redirection to steer the traffic off path toward the WAAS appliance cluster. Figure 10-21 depicts use of WAN acceleration between the branch office and the data center.

Figure 10-21 WAN Acceleration Between Data Center and Branch Office
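As a hedged sketch of the WCCP redirection just mentioned (the interfaces are illustrative; service groups 61 and 62 are the ones conventionally used for WAAS TCP promiscuous mode), a branch router running Cisco IOS might be configured as follows:

router(config)# ip wccp 61
router(config)# ip wccp 62
router(config)# interface GigabitEthernet0/0
router(config-if)# ip wccp 61 redirect in       ! LAN-facing interface
router(config-if)# exit
router(config)# interface Serial0/0/0
router(config-if)# ip wccp 62 redirect in       ! WAN-facing interface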

Figure 10-21 depicts the use of WAN acceleration between the branch office and the data center.

Figure 10-21 WAN Acceleration Between Data Center and Branch Office

Cisco WAAS technology deployment requires positioning of a relevant application acceleration engine on either side of the WAN connection. The selection of an engine is based on performance characteristics, types of workloads, and deployment model. Refer to Chapter 20, "Cisco WAAS and Application Acceleration," for a more detailed discussion about Cisco WAAS technology.

Management of the network is at the core of its intelligence. Cisco Prime Data Center Network Manager (DCNM) manages the Cisco Nexus and Cisco MDS families of switches through a single pane of glass. It provides visibility and control of the unified data center to uphold stringent service-level agreements. DCNM provides a robust set of features by streamlining the provisioning aspects of the Unified Fabric components. DCNM can set and automatically provision a policy on converged LAN and SAN environments. It dynamically monitors and takes actions when needed on both physical and virtual infrastructures, thereby improving business continuity. These factors contribute to the overall data center infrastructure reliability and uptime. Figure 10-22 depicts the Cisco Prime DCNM dashboard.

Figure 10-22 Cisco Prime DCNM Dashboard

Software features provided by Cisco NX-OS can be deployed with Cisco DCNM, which leverages multiple dashboards for ease of use and representation of the overall network topology. Cisco DCNM provides automated discovery of the network components, keeping track of all physical and logical network device information. This discovery data can be imported into change management systems for easier operations. DCNM can also handle provisioning and monitoring aspects of FCoE deployment, including an end-to-end path containing a mix of traditional Fibre Channel and FCoE. Cisco DCNM has extensive reporting features that allow building custom reports specific to the operating environment. Organizations can also leverage reports available through the preconfigured templates.

Inter-Data Center Unified Fabric

Geographically dispersed data centers provide added application resiliency and enhanced application reliability. The network foundation to enable a multi-data-center deployment must extend the network properties across the data centers participating in the distributed setup. Such properties are the extension of Layer 2, Layer 3, and storage connectivity between the data centers powered by the end-to-end Unified Fabric approach. At the same time, each data center must remain an autonomous unit of operation. Figure 10-23 depicts data center interconnect technology solutions.

Figure 10-23 Data Center Interconnect Methods

Multiple technologies, which lay the foundation for inter-data center Unified Fabric extension, are in use today. Layer 1 technologies in the form of DWDM or CWDM allow running any data, storage, or converged protocol on top. Layer 3 networks allow extending storage connectivity between disparate data center environments by leveraging the Fibre Channel over IP protocol (FCIP), while Layer 2 bridged connectivity is achieved by leveraging Overlay Transport Virtualization (OTV).

Fibre Channel over IP

Disparate storage-area networks are not uncommon. These could be artifacts of building isolated environments to address local needs, deploying storage environments with different administrative boundaries, or outcomes of acquisitions. Disparate storage environments are most often located in topologies segregated from each other by Layer 3 routed networks. These could be long-haul point-to-point links, as well as private or public MPLS networks. As organizations leverage ubiquitous storage services, a need for interconnecting isolated storage networks while maintaining characteristics of performance, availability, and fault tolerance becomes increasingly important.

Fibre Channel over IP extends the reach of the Fibre Channel protocol across geographic distances by encapsulating Fibre Channel frames in outer IP headers for transport across interconnecting Layer 3 routed networks. Figure 10-24 depicts the principle of transporting Fibre Channel frames across IP transport.

Figure 10-24 FCIP Operation Principle

Leveraging the TCP/IP protocol provides FCIP the guaranteed traffic delivery required for healthy storage network operation. Should packet drops occur, any packets carrying Fibre Channel payload are retransmitted following standard TCP/IP protocol operation. This lossless storage traffic behavior allows FCIP to extend native Fibre Channel connectivity across greater distances. This, however, cannot come at the expense of storage network performance.

TCP/IP protocol traditionally achieves its reliability for traffic delivery by leveraging the acknowledgement mechanism for the received data segments. Acknowledgments are used to signal from the receiver back to the sender the receipt of contiguous in-sequence TCP segments. In case of packet loss, segments received after the packet loss occurs are not acknowledged and must be retransmitted. This causes unnecessary traffic load, network capacity "waste," and increased delays, which in turn have a detrimental effect on FCIP storage traffic performance. Cisco implements a selective acknowledgement (SACK) mechanism where the receiver can selectively acknowledge TCP segments received after packet loss occurs. In this case the sender only needs to retransmit the TCP segments lost in transit, rather than all packets beyond the acknowledgement. The SACK mechanism is most effective in environments with long and large transmissions. SACK is not enabled by default on Cisco devices; it must be manually enabled if required by issuing the sack-enable CLI command under the FCIP profile configuration.

Cisco FCIP-capable platforms allow several features to maximize utilization of the intermediate Layer 3 links, such as leveraging TCP extensions for high performance (IETF RFC 1323) and scaling TCP window size up to 1 GB. As TCP window size is scaled up, the sustained network throughput increases, but so do the chances of packet drops. Intelligent control over TCP window size is needed to make sure long-haul links are fully utilized while preventing excessive traffic loss and subsequent retransmission.

Large Maximum Transmission Unit, or MTU, size in the transit IP network can also greatly influence the performance characteristics of the storage traffic carried by FCIP. Support for jumbo frames can allow transmission of full-size 2148-byte Fibre Channel frames without requiring fragmentation and reassembly at the IP level into numerous TCP/IP segments carrying a single Fibre Channel frame. Jumbo frame support must be enabled and supported on both ends of the FCIP circuit. The Path MTU Discovery (PMTUD) mechanism can be invoked on both sides of the FCIP connection to make sure the effective maximum supported path MTU is automatically discovered. FCIP PMTUD is not automatically enabled on Cisco devices; it must be manually enabled if required by issuing the pmtu-enable CLI command under the FCIP profile configuration.
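To ground these features, the following minimal sketch (not from the text; all addresses, interface numbers, and profile numbers are hypothetical) shows how an FCIP profile and tunnel interface are typically tied together on a Cisco MDS platform, including TCP options of the kind discussed above:

mds-a(config)# feature fcip
mds-a(config)# interface GigabitEthernet2/1
mds-a(config-if)# ip address 192.0.2.1 255.255.255.0
mds-a(config-if)# no shutdown
mds-a(config)# fcip profile 10
mds-a(config-profile)# ip address 192.0.2.1
mds-a(config-profile)# tcp sack-enable
mds-a(config-profile)# tcp pmtu-enable
mds-a(config)# interface fcip 10
mds-a(config-if)# use-profile 10
mds-a(config-if)# peer-info ipaddr 198.51.100.1
mds-a(config-if)# no shutdown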

Deployment of Quality of Service features in the transit Layer 3 routed network is not a requirement for FCIP operation. Some of the storage traffic types encapsulated inside the FCIP packets, however, can be very sensitive to degraded network conditions, and even though FCIP can handle such conditions, such features are highly advised to make sure that FCIP traffic is subjected to the absolute minimum influence from transient network delays and congestion. Utmost attention should be paid to make sure the transit Layer 3 routed network between the sites interconnected by FCIP circuits conforms to a high degree of availability and reliability, while proper capacity management helps alleviate performance bottlenecks.

Along with performance characteristics, FCIP must also be designed following high availability and redundancy principles. FCIP allows provisioning numerous tunnels interconnecting disparate storage networks. These tunnels contribute to connectivity resiliency and allow failover, and leveraging device, circuit, and path redundancy eliminates the single point of failure. Just like regular inter-switch links, these redundant tunnels are represented as virtual E_port or virtual TE_port types, which are considered by the Fabric Shortest Path First (FSPF) protocol for equal-cost multipathing purposes. FCIP tunnels can also be bound into port channels for a simpler FSPF topology and efficient traffic load-balancing operation. Figure 10-25 depicts the principle behind provisioning multiple redundant FCIP tunnels between disparate storage network environments leveraging port channeled connections.

Figure 10-25 Redundant FCIP Tunnels Leveraging Port Channeled Connections

FCIP protocol allows maintaining VSAN traffic segregation by building logically isolated storage environments on top of a shared physical storage infrastructure. VSAN segregation is traditionally achieved through data-plane frame tagging. This frame tagging is transparently carried over FCIP tunnels to extend the isolation. Each VSAN, either local or extended over the FCIP tunnel, exhibits the same characteristics in regard to routing, zoning, fabric services, and fabric management for a ubiquitous deployment methodology. At the time of writing this book, FCIP protocol is available on Cisco MDS 9500 Series modular switches by leveraging the IP Storage Services Module (IPS) and on the Cisco MDS 9250i Multiservice Fabric Switch.
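A rough sketch of bundling two FCIP interfaces into a port channel on a Cisco MDS switch follows; this is illustrative only, with hypothetical interface and channel numbers:

mds-a(config)# interface port-channel 20
mds-a(config)# interface fcip 10
mds-a(config-if)# channel-group 20 force
mds-a(config-if)# no shutdown
mds-a(config)# interface fcip 11
mds-a(config-if)# channel-group 20 force
mds-a(config-if)# no shutdown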

Overlay Transport Virtualization

Certain mechanisms for Layer 2 extension across data centers introduce challenges that can inhibit their practical use. Spanning Tree Protocol–based extension topologies do not scale well and can cause cascading failures that can propagate beyond the boundaries of a single availability domain. They fall short of making use of multipathing topologies by logically blocking redundant links. A more efficient protocol is needed to leverage all available paths for traffic forwarding, as well as to provide intelligent traffic load balancing across the data center interconnect links. Other Layer 2 data center interconnect technologies offer a more efficient model than Spanning Tree Protocol–based topologies; however, they depend on specific transport network characteristics. The best example is MPLS-based technologies that require label switching between the data center networks for Layer 2 extension. Such a solution can be challenging for organizations not willing to develop operational knowledge around those technologies. Let's now look at a possible alternative.

Cisco Overlay Transport Virtualization (OTV) technology provides a nondisruptive, transport-agnostic, multihomed, and multipathed Layer 2 data center interconnect technology that preserves isolation and fault domain properties between the disparate data centers, which greatly reduces the risk often associated with large-scale Layer 2 extensions. OTV leverages a MAC-in-IP forwarding paradigm to extend Layer 2 connectivity over any IP transport. Figure 10-26 depicts the basic OTV traffic forwarding principle, where Layer 2 frames are forwarded between the sites attached to the OTV overlay.

Figure 10-26 OTV Packet Forwarding Principle

OTV simulates the behavior of traditional bridged environments while introducing improvements. OTV leverages a control plane protocol in the form of the IS-IS routing protocol to disseminate MAC address reachability across the OTV edge devices participating in the overlay. Example 10-4 shows IS-IS neighbor relations established between the OTV edge devices through the overlay.

Example 10-4 IS-IS Neighbor Relationships Between OTV Edge Devices

6e02. multipathing.020b dynamic 0 F F Overlay1 O 110 0000. which are difficult to set up.6e02.43c2 static F F sup-eth1(R) * 110 0000.OTV_EDGE1_SITEA# show otv isis adjacency OTV-IS-IS process: default VPN: OTV-IS-IS adjacency database: System ID SNPA NEXUS7K-EDGE2 001b. Such features stretch across loop-prevention. G . which allow maintaining traffic segregation across the data center boundaries. The use of overlay in OTV creates point-to-cloud type connectivity rather than a collection of fully meshed point-to-point links.43c1 OTV_EDGE2_SITE 001b.Gateway MAC.Routed MAC.+ . OTV edge devices leverage the underlying IP transport to carry original Ethernet frames encapsulated in IP protocol between the data centers. and multihoming mechanisms. As mentioned earlier.43c4 Overlay1 Level 1 1 1 State UP UP UP Hold Time 00:00:25 00:00:27 00:00:07 Interface Overlay1 Overlay1 Overlay1 This approach is different from the traditional bridged environments.6e02. O .016c dynamic 0 F F Po1 * 110 0000. manage.6e02. Example 10-5 shows a MAC address table on an OTV edge device. Control plane protocol offers intelligence above and beyond the rudimentary behavior of flood-and-learn techniques and thus is instrumental in implementing OTV value-added features to enhance and safeguard the inter-datacenter Layer 2 environment. The actual hosts on both sides of the connection are unaware of OTV. and scale. which utilize data-plane learning or flood-and-learn behavior.0c07.6e01.010a dynamic 0 F F Po1 * 110 0000.primary entry.43c3 OTV_EDGE_SITE3 001b. After MAC addresses are learned.016d dynamic 0 F F Po1 O 110 0000.020a dynamic 0 F F Overlay1 O 110 0000.54c2. and enable First Hop Routing Protocol localization and Address Resolution Protocol (ARP) proxying without creating an additional operational overhead. such as VLAN ID.seconds since last seen.54c2. Let’s now look . Example 10-5 MAC Address Table Showing Local and OTV Learned Entries Click here to view code image OTV_EDGE1_SITEA# show mac address-table Legend: * .Overlay MAC age .026d dynamic 0 F F Overlay1 Control plane protocol leveraged by OTV includes other Layer 2 semantics. OTV can safeguard the Layer 2 environment across the data centers or across IP-separated PODs of the same data center by employing proven Layer 3 techniques. Local OTV edge devices hold the mapping between remote host MAC addresses and the IP address of the remote OTV edge device. where you can see locally learned MAC addresses as well as MAC addresses available through the overlay.6e01. (R) .54c2. because the transport network is transparently forwarding their traffic.primary entry using vPC Peer-Link VLAN MAC Address Type age Secure NTFY Ports ---------+-----------------+--------+---------+------+----+-----------------G 001b.54c2.026c dynamic 0 F F Overlay1 O 110 0000.6e01.ac6e dynamic 0 F F Po1 * 110 0000. An efficient network core is essential to deliver optimal application performance across the OTV overlay.

Little to no effect on existing network design: OTV leverages the IP network to transport original MAC encapsulated frames. Because OTV is transport agnostic, it can be seamlessly deployed on top of any existing IP infrastructure; any network capable of carrying IP packets can be used for the OTV core. OTV can operate over a unicast-only or multicast-enabled core, and in the case of OTV operating over a multicast-enabled core, there is a requirement to run multicast routing in the underlying transit network. OTV is also fully transparent to the local Layer 2 domains, where hosts and switches not participating in OTV do not have any knowledge of OTV. Local spanning tree topologies are unaffected too.

Fault isolation and site independence: Even though local spanning tree topologies are unaware of OTV, OTV does introduce a boundary for the Spanning Tree Protocol. Spanning Tree Bridge Protocol Data Units (BPDU) are not forwarded over the OTV overlay, so spanning tree domains in different data centers keep operating independently of each other. These behaviors create site containment similar to the one available with Layer 3 routing. OTV does not allow unknown unicast traffic across the overlay, unless specifically allowed by the administrator, and ARP traffic is forwarded only in a controlled manner by offering local ARP response behavior. OTV's built-in loop prevention mechanisms avoid loops in the overlay and limit the scope of a loop, should one be formed in the locally attached Layer 2 environments.

Optimal bandwidth utilization and scalability: OTV can make use of multiple edge devices, better utilizing all available ingress and egress points in the data center through the built-in multipathing behavior. OTV employs the active-active mode of operation at the edge of the overlay, where all ingress and egress points forward traffic simultaneously. Use of multiple ingress and egress points does not require a complex deployment and operational model. Multipathing also applies to the traffic as it crosses the IP transit network, where ECMP behavior can utilize multiple paths and links between the data centers. Multipathing is an important characteristic of OTV to maximize the utilization of available bandwidth. Because the control plane protocol is used to disseminate MAC address reachability information, it is the control plane, rather than data-plane flooding, that dictates remote MAC address reachability. OTV also leverages dynamic encapsulation, rather than tunneling, so OTV edge devices do not need to maintain the state of tunnels interconnecting data centers.

Fast failover and flexibility: One of the flexibility attributes of OTV comes from its deployment model at the data center aggregation layer, which provides a very convenient topological placement to address the needs of the Layer 2 extension across the data centers. Another flexibility attribute of OTV comes from its incremental deployment characteristics. OTV can even be deployed over existing Layer 2 data center interconnect solutions, such as vPC, where it can bring the advantages of fault containment and improved scale. If deployed over vPC, OTV can, in time, be migrated from vPC to routed links for a more streamlined infrastructure. Link failure detection in the core is based on routing protocols, which can rapidly reroute around failed sections of the network.
The OTV control plane protocol, IS-IS, can leverage its own hello/timeout timers, or it can leverage BFD for very rapid failure detection.

Optimized operations: The OTV control plane protocol allows the simple addition and removal of OTV sites from the overlay. Finally, the convenience of operations comes from the greatly simplified command-line interface needed to enable and configure OTV service on the device.
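To illustrate that simplicity, here is a minimal sketch of a multicast-mode OTV overlay configuration on a Nexus 7000 edge device; it is not an example from the text, and the site identifier, VLAN numbers, group addresses, and interface numbers are hypothetical:

OTV_EDGE1_SITEA(config)# feature otv
OTV_EDGE1_SITEA(config)# otv site-identifier 0x1
OTV_EDGE1_SITEA(config)# otv site-vlan 99
OTV_EDGE1_SITEA(config)# interface Overlay1
OTV_EDGE1_SITEA(config-if-overlay)# otv join-interface ethernet 1/1
OTV_EDGE1_SITEA(config-if-overlay)# otv control-group 239.1.1.1
OTV_EDGE1_SITEA(config-if-overlay)# otv data-group 232.1.1.0/28
OTV_EDGE1_SITEA(config-if-overlay)# otv extend-vlan 110
OTV_EDGE1_SITEA(config-if-overlay)# no shutdown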

This great simplicity does not come at the expense of the deep-level troubleshooting that is still available on the platforms supporting OTV functionality. OTV edge devices express their desire to join the overlay by sending IS-IS hello messages to the multicast control group, which are replicated by the transit network and delivered to all other OTV edge devices that are part of the same overlay (listening to the same multicast group). This behavior automatically establishes IS-IS neighbor relations across the shared medium without the need to define all remote OTV edge devices. At the time of writing this book, OTV technology is available on Cisco Nexus 7000 switches, Cisco ASR 1000 routers, and Cisco CSR 1000v software router platforms.
Locator ID Separation Protocol

Connectivity requirements between data center locations depend on the nature of the applications running in these data centers. Most commonly deployed applications fall into one of the following categories: routable applications, nonroutable applications, clustered applications, distributed applications, and virtualized applications. For routable applications, Layer 2 data center interconnect is not a requirement; nothing more than an IP network is required to make sure application components can communicate with each other. Nonroutable applications typically expect to be placed on a shared transmission medium, such as Ethernet, and as such require the data center interconnect to exhibit Layer 2 connectivity properties. Clustered applications are a "mixed bag," with some being able to operate over an IP-routed network and some requiring Layer 2 adjacency, so data center interconnect requirements can differ based on specific clustered application needs. Distributed applications can typically make use of the IP network, with one example being Hadoop for Big Data distributed computation tasks. Finally, virtualized applications reside on virtual hypervisors, such as Xen, KVM, Hyper-V, or ESXi. These applications make use of server virtualization technologies and leverage server virtualization properties, such as virtual machine mobility. Cross-data center workload mobility, dynamic resourcing, distributed application and server clustering, and simplified disaster recovery are some of the main use cases behind Layer 2 extension. As modern data centers become increasingly virtualized, the need to cater to virtualized application needs also increases, even though at the time of writing the book only about 25% of total servers are estimated to be running a hypervisor virtualization layer.

As virtual machines move around the network due to administrative actions, various failure conditions, or resource constraints, the network must provide ubiquitous and optimized connectivity services across the entire mobility domain. The traditional network forwarding paradigm does not distinguish between the identity of an endpoint (virtual machine) and its location in the network topology. Endpoint mobility can cause inefficient traffic-forwarding patterns as a traditional network struggles to deliver packets across the shortest path toward an endpoint that changes its location. Figure 10-27 depicts a traffic tromboning case where a virtual machine migrates from Site A to Site B, yet ingress traffic is still delivered to Site A because this is where the virtual machine's IP subnet is still advertised from.

Figure 10-27 Traffic Tromboning Case

A better mechanism is needed to leverage network intelligence, which is based on a true endpoint location in the topology. LISP, part of the Cisco Unified Fabric holistic architecture, provides a scalable method for network traffic routing. By decoupling endpoint identifiers (EID) from the Routing Locators (RLOC), LISP accommodates virtual machine mobility while maintaining shortest path forwarding through the infrastructure. EIDs, as the name suggests, are the identifiers for the endpoints in the form of either a MAC or IP address, while RLOCs are the identifiers of the edge devices on both sides of a LISP-optimized environment (most likely a loopback interface IP address on the edge router or data center aggregation switch). These edge devices are called tunnel routers. They perform LISP encapsulation, sending the traffic from the Ingress Tunnel Router (ITR) to the Egress Tunnel Router (ETR) on its way toward the destination endpoint. LISP routers providing both ITR and ETR functionality are called xTRs. To integrate a non-LISP environment into the LISP-enabled network, proxy tunnel routers are used (PITR and PETR). LISP routers providing both PITR and PETR functionality are called PxTRs. Figure 10-28 shows the ITR, ETR, and proxy tunnel router topology.

Figure 10-28 LISP Tunnel Routers

To derive the true location of an endpoint, a mapping is needed between the EID that represents the endpoint and the RLOC that represents the tunnel router that this endpoint is "behind." This mapping is called an EID-to-RLOC mapping database. Unlike the approach in conventional databases, LISP mapping entries are distributed rather than centralized. The full mapping database is distributed among all tunnel routers in the LISP environment, with each tunnel router only maintaining the entries it needs at a given time for communication. These entries are derived from the LISP query process and are cached for a certain period of time before being flushed, unless refreshed.

Catering to the virtual machine mobility case, LISP enables a virtual machine to change its network location while keeping its assigned IP addresses. Endpoint mobility events can occur within the same subnet, known as LISP Extended Subnet Mode (ESM), or across subnets, known as LISP Across Subnet Mode (ASM). ESM leverages Layer 2 data center interconnect to extend VLANs, and subsequently subnets, across multiple data centers. This is where OTV can be leveraged. ASM assumes that there is no VLAN and subnet extension between the data centers. This mode is best suited for cold migration and disaster recovery scenarios where there is no Layer 2 data center interconnect between the sites. The endpoint that moves to another subnet, or across the same subnet extended between the data centers leveraging a technology such as OTV, is called a roaming device.

When endpoints roam, the LISP first-hop router must detect such an event. It compares the source IP address of the received packet to the range of EIDs defined for the interface on which this data packet was received. If matched, the first-hop router updates the LISP map server with a new RLOC for the EID. It also updates the tunnel and proxy tunnel routers that have cached this information.

LISP provides an exciting new mechanism for organizations to improve routing system scalability and introduce new capabilities to their networks.
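For orientation only, a LISP xTR configuration on a Cisco IOS platform typically registers its EID prefix with a map server and points at a map resolver, roughly as sketched below; the prefixes, locator address, and key are hypothetical:

router lisp
 database-mapping 10.1.1.0/24 192.0.2.1 priority 1 weight 100
 ipv4 itr map-resolver 203.0.113.10
 ipv4 itr
 ipv4 etr map-server 203.0.113.10 key example-key
 ipv4 etr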

cisco.html Overlay Transport Virtualization: http://www. including no renumbering when changing providers or adding multihoming IP address (endhost) mobility. and CSR 1000v software routers. . including incremental deployment of IPv6 using existing IPv4 infrastructure (or IPv4 over IPv6) Simplified multitenancy and large-scale VPNs Operation and network simplification At the time of writing this book.com/c/en/us/tech/storage-networking/fiber-channelover-ip-fcip/index. noted with the key topics icon in the outer margin of the page.cisco.com/go/lisp Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter. ISR routers.IP address portability. Table 10-2 lists a reference of these key topics and the page numbers on which each is found.com/go/otv Locator ID Separation Protocol: http://www.cisco. including session persistence across mobility events IPv6 transition simplification. LISP is supported on Nexus 7000 series switches with F3 and N7KM132XP-12 line cards. ASR 1000 routers. Reference List Unified Fabric: http://www.html Fibre Channel over IP: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/unifiedfabric/unified_fabric.

Table 10-2 Key Topics for Chapter 10

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

Nexus
Multilayer Director Switch (MDS)
Unified Fabric
Clos
Spine-Leaf
Fibre Channel over Ethernet (FCoE)
local-area network (LAN)
storage-area network (SAN)
Fibre Channel
Unified Port
converged network adapter (CNA)
Internet Small Computer System Interface (iSCSI)
NX-OS
Spanning Tree

virtual port channel (vPC)
FabricPath
Virtual Extensible LAN (VXLAN)
Virtual Network Identifier (VNID)
Virtual Supervisor Module (VSM)
Virtual Ethernet Module (VEM)
Virtual Security Gateway (VSG)
virtual ASA
vPath
Wide Area Application Services (WAAS)
TCP flow optimization (TFO)
data redundancy elimination (DRE)
Web Cache Communication Protocol (WCCP)
Data Center Network Manager (DCNM)
Overlay Transport Virtualization (OTV)
Locator ID Separation Protocol (LISP)

Chapter 11. Data Center Bridging and FCoE

This chapter covers the following exam topics:

Components and Operation of the IEEE Data Center Bridging Standards
FCoE Principles
FCoE Connectivity Options
FCoE and Fabric Extenders
FCoE and Server Adapter Virtualization

Running Fibre Channel protocol on top of Data Center Bridging enhanced Ethernet is one of the key principles of the data center Unified Fabric architecture. It has a profound impact on data center economics in the form of lower capital expenditure (CAPEX) and operational expense (OPEX) by consolidating traditionally disparate physical environments of network and storage into a single, highly performing and robust fabric. FCoE preserves current operational models where particular technology domain expertise is leveraged to achieve deployments conforming to best practices for both network and storage environments. In this chapter we also observe how coupling FCoE with Cisco Fabric Extender technologies allows building even larger fabrics for ever-increasing scale demands.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes." Table 11-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions.

Table 11-1 "Do I Know This Already?" Section-to-Question Mapping

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which two of the following are part of the Data Center Bridging standards? (Choose two.)
a. ETS
b. LDP
c. FCF
d. PFC
e. SCSI

2. What function does Priority Flow Control provide?
a. It provides lossless Ethernet service by pausing traffic based on Class of Service value.
b. It provides lossless Ethernet service by pausing traffic based on MTU value.
c. It provides lossless Ethernet service by pausing traffic based on DSCP value.
d. It is a configuration exchange protocol to negotiate Class of Service value for the FCoE traffic.
e. None of the above.

3. What is a VN_Port?
a. It is a virtual port on the FCoE endpoint connected to the VF_Port on the switch.
b. It is a virtual port used to internally attach the FCoE Fibre Channel Forwarder to the Ethernet switch.
c. It is a port on the FCoE Fabric Extenders toward the FCoE server.
d. It is an invalid port type.

4. What is FPMA?
a. It is a MAC address assigned to the Converged Network Adapter by the manufacturer.
b. It is a MAC address dynamically assigned to the Converged Network Adapter by the FCoE Fibre Channel Forwarder during the fabric login.
c. It is a MAC address of the FCoE Fibre Channel Forwarder.
d. It is the First Priority Multicast Algorithm used by the FCoE endpoints to select the FCoE Fibre Channel Forwarder for a fabric login.

5. Which of the following allows 40 Gb Ethernet connectivity by leveraging a single pair of multimode fibers?
a. QSFP-40G-SR4
b. QSFP-40G-SR-BD
c. FET-40G
d. QSFP-H40G-CU

6. What is an FET?
a. It is a Fabric Extender Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches.
b. It is a Fabric Extender Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to either the upstream parent switches or the servers.

c. It is a Fully Enabled Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches.
d. It is a Fault-Tolerant Extended Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches in the most reliable manner.

7. Which two of the following Cisco Nexus 2000 Series Fabric Extenders support FCoE? (Choose two.)
a. 2248TP
b. 2224TP
c. 2232PP
d. 2232TM-E

8. Is FCoE supported in Enhanced vPC topology?
a. Yes
b. No
c. Yes, but only when FCoE traffic on Cisco Nexus 2000 Series FCoE Fabric Extender is pinned to a single Cisco Nexus 5000 Series parent switch
d. Yes, but only when FIP is not used

9. What type of port profile is used to dynamically instantiate a vEth interface on the Cisco Nexus 5000 Series switch?
a. dynamic-interface
b. vethernet
c. vethinterface
d. profile-veth

10. What is Adapter-FEX?
a. It is a feature for virtualizing the network adapter on the server.
b. It is required to run FCoE on the Fabric Extender.
c. It is the name of the transceiver used to connect Fabric Extender to the parent switch.
d. It is the name of the Cisco next-generation Fabric Extender.

Foundation Topics

Data Center Bridging

Due to the lack of Ethernet protocol support for transport characteristics required for successful storage-area network implementation, consolidation of network and storage traffic over the same Ethernet network requires special considerations. Specifically, it requires the capability to provide lossless traffic delivery, as well as necessary bandwidth and traffic priority services. The lack of those characteristics poses no concerns for the network traffic; however, to make Ethernet a viable transport for storage traffic, the Ethernet protocol must be enhanced. Enhancements required for a successful Unified Fabric implementation, where network and storage traffic are carried over the same Ethernet links, are defined under the Data Center Bridging standards defined by the IEEE. These are Priority Flow Control (PFC) defined by IEEE 802.1Qbb, Enhanced Transmission Selection (ETS) defined by IEEE 802.1Qaz, Quantized Congestion Notification (QCN) defined by IEEE 802.1Qau, and Data Center Bridging Exchange (DCBX) defined by IEEE 802.1AB (LLDP) with TLV extensions.

Priority Flow Control

Ethernet protocol provides no guarantees for traffic delivery. If any Ethernet frame carrying data is dropped, it is up to the higher-level protocols, such as TCP, to retransmit the lost portions. Ethernet defines a flow control mechanism under the IEEE 802.3X standard, in which flow control operates on the entire physical link level; it cannot differentiate between different types of traffic forwarded through the link, pausing either all or none of them. When such per-physical-link PAUSE frames are triggered, they indiscriminately cause buffering for types of traffic that otherwise would rely on the TCP protocol for congestion management and might be better off dropped.

PFC enables the Ethernet flow control mechanism to operate on the per-802.1p Class of Service level, thus selectively pausing at Layer 2 only traffic requiring guaranteed delivery, such as FCoE. PAUSE frames are transmitted when the receive buffer allocated to a no-drop queue on the current DCB-capable switch is exhausted, that is, when the queue becomes full and lossless behavior can no longer be guaranteed without pausing the sender. PAUSE frames are signaled back to the previous hop DCB-capable Ethernet switch for the negotiated FCoE Class of Service (CoS) value (Class of Service value of 3 by default). Upon receipt of a PAUSE frame, the previous hop DCB-capable switch pauses its transmission of the FCoE frames, enabling the current DCB-capable switch to "drain" its receive buffer. This process is known as back pressure, and it can be triggered hop-by-hop from the point of congestion toward the source of the transmission (FCoE initiator or target). All other traffic is subject to regular forwarding and queuing rules. PFC thus provides lossless Ethernet transport services for FCoE traffic, which are essential because Fibre Channel frames encapsulated within FCoE packets expect lossless fabric behavior. Figure 11-1 demonstrates the principle of how PFC leverages PAUSE frames to pause transmission of FCoE traffic over a DCB-enabled Ethernet link.

Figure 11-1 Priority Flow Control Operation

Enhanced Transmission Selection

Enhanced Transmission Selection (ETS) enables an optimal QoS strategy for links between DCB-capable switches and bandwidth management of virtual links. ETS creates priority groups by allowing differentiated treatment for traffic belonging to the same Priority Flow Control class of service, based on bandwidth allocation, low latency, or best effort. For example, such bandwidth allocation allows assigning 8 Gb to the storage traffic and 2 Gb to the network traffic on the 10 Gb shared Ethernet link.

Figure 11-2 demonstrates traffic bandwidth allocation carried out by the ETS feature.

Figure 11-2 Enhanced Transmission Selection Operation

The default behavior of Cisco switches supporting DCB standards, such as Cisco Nexus 5000 Series switches, is to assign 50% of the interface bandwidth to the FCoE no-drop class. Bandwidth allocations made by ETS are not hard allocations, meaning that traffic classes are allowed to use bandwidth allocated to a different traffic class, until that traffic class demands the bandwidth. This approach allows supporting bursty traffic patterns while still maintaining bandwidth guarantees. At times, traffic conditions called incasts can be encountered. With incast, traffic arriving on multiple ingress interfaces on a given switch is attempted to be sent out through a single switch egress interface. If the amount of ingress traffic exceeds the capacity of the switch egress interface, special care is needed to maintain lossless link characteristics. ETS leverages advanced traffic scheduling features by providing guaranteed minimum bandwidth to the FCoE and high-performance computing traffic.
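As a hypothetical sketch of adjusting the default 50/50 split described above, ETS scheduling weights on a Nexus 5000 Series switch can be changed with a queuing policy similar to the following; class and policy names may vary by platform and NX-OS release:

nexus5000(config)# policy-map type queuing custom-ets-out
nexus5000(config-pmap-que)# class type queuing class-fcoe
nexus5000(config-pmap-c-que)# bandwidth percent 80
nexus5000(config-pmap-c-que)# class type queuing class-default
nexus5000(config-pmap-c-que)# bandwidth percent 20
nexus5000(config)# system qos
nexus5000(config-sys-qos)# service-policy type queuing output custom-ets-out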

Data Center Bridging Exchange

Data Center Bridging Exchange (DCBX) is a negotiation and exchange protocol between an FCoE initiator/target and a DCB-capable switch, or between two DCB-capable switches. It is an extension of the Link Layer Discovery Protocol (LLDP) defined under the IEEE 802.1AB standard. Parameters exchanged using the DCBX protocol are encoded in Type-Length-Value (TLV) format. Figure 11-3 demonstrates how DCBX messages are exchanged hop-by-hop across the DCB-enabled links.

Figure 11-3 Data Center Bridging Exchange Operation

DCBX performs the discovery of adjacent DCB-capable switches, detects configuration mismatches, and negotiates DCB link parameters. DCBX plays a pivotal role in making sure behaviors such as Priority Flow Control, Enhanced Transmission Selection, and logical link up/down are successfully negotiated between two adjacent DCB-capable switches, as well as across multihop FCoE and Network Interface Virtualization topologies (multihop FCoE behavior is discussed in Chapter 12, "Multihop Unified Fabric").

Quantized Congestion Notification

As discussed earlier, PFC enables you to throttle down transmission for no-drop traffic classes, such as FCoE. As PFC is invoked, back-pressure is applied from the point of congestion toward the source of transmission (FCoE initiator or target) in a hop-by-hop fashion. The Quantized Congestion Notification mechanism, defined in the IEEE 802.1Qau standard, takes a different approach of applying back-pressure from the point of congestion directly toward the source of transmission without involving intermediate switches; notification messages are sent from the point of congestion in the FCoE fabric directly to the source of transmission. This approach maintains feature isolation and affects only the parts of the network causing the congestion. Figure 11-4 demonstrates the principle behind the QCN feature operation.

Figure 11-4 Quantized Congestion Notification Operation

FCoE protocol operates at Layer 2, as Ethernet encapsulated Fibre Channel frames are forwarded between initiators and targets across the FCoE Fibre Channel Forwarders along the path. Even though Layer 2 operation implies bridging, the MAC address swapping behavior occurring at FCoE Fibre Channel Forwarders makes its operation resemble routing more than bridging. When the FCoE endpoint sends FCoE frames to the network, these frames include the source MAC address of the endpoint and the destination MAC address of the FCoE Fibre Channel Forwarder. When these frames reach the FCoE FCF, frames are deencapsulated; Ethernet headers are removed, and a Fibre Channel forwarding decision is taken on the exposed Fibre Channel frame. Subsequently, the Fibre Channel frames are reencapsulated with Ethernet headers, while the source and destination MAC addresses assigned are the MAC addresses of the current and next-hop FCoE FCF or the frame target, respectively. Figure 11-5 demonstrates the swap of outer Ethernet encapsulation MAC addresses as the FCoE frame is forwarded between initiator and target.

Figure 11-5 FCoE Forwarding MAC Address Swap

This MAC address swap occurs for all FCoE frames passing through the FCoE Fibre Channel Forwarders along the path. Because those FCoE frames no longer contain the original MAC address of the initiator, it is impossible for the intermediate FCoE Fibre Channel Forwarders to generate QCN messages directly to the source of transmission to throttle down its FCoE transmission into the network, unless all FCoE Fibre Channel Forwarders along the path from the point of congestion toward the source of transmission participate in the QCN process. Participation of all FCoE Fibre Channel Forwarders in the QCN process brings little value beyond Priority Flow Control, and as such, Cisco Nexus 5000 Series DCB-capable switches do not implement the QCN feature.

Fibre Channel Over Ethernet

Over the years, Fibre Channel protocol has established itself as the most reliable and best performing protocol for storage traffic transport. The idea behind FCoE is simple: run Fibre Channel protocol frames encapsulated in Ethernet headers. It leverages the same fundamental characteristics of the traditional Fibre Channel protocol, while allowing the flexibility of running it on top of the ever-increasing high-speed Ethernet fabric. Figure 11-6 demonstrates the principle of encapsulating the original Fibre Channel frame with Ethernet headers for transmission across the Ethernet network.

Figure 11-6 Fibre Channel Encapsulated in Ethernet

The FC-BB-5 working group of INCITS Technical Committee T11 developed the FCoE standard. FCoE is one of the options available for storage traffic transport.

Figure 11-7 depicts the breadth of storage transport technologies.

Figure 11-7 Storage Transport Technologies Landscape

FCoE protocol is the essential building block of the Cisco Unified Fabric architecture for converging network and storage traffic over the same Ethernet network. The capability to converge LAN and SAN traffic over a single Ethernet network is not trivial, because network and storage environments are governed by different principles and have different characteristics, which must be preserved.

FCoE Logical Endpoint

The FCoE Logical Endpoint (FCoE_LEP) is responsible for the encapsulation and deencapsulation functions of the FCoE traffic. FCoE_LEP has the standard Fibre Channel layers, starting with FC-2 and continuing up the Fibre Channel Protocol stack. Figure 11-8 depicts the mapping between the layers of a traditional Fibre Channel protocol and the FCoE.

Figure 11-8 Fibre Channel and FCoE Layers Mapping

As you can see in Figure 11-8, FCoE does not alter higher levels and sublevels of the Fibre Channel protocol stack, which enables it to be ubiquitously deployed while maintaining the operational, administrative, and management models of the traditional Fibre Channel protocol. This compatibility enables all the tools used in the native Fibre Channel environment to be used in the FCoE environment. To higher-level system functions, the FCoE network appears as a standard Fibre Channel network.

FCoE Port Types

Similar to the traditional Fibre Channel protocol, FCoE operation defines several types of ports. Each port type plays a specific role in the overall FCoE solution. Figure 11-9 depicts FCoE and traditional Fibre Channel port designations for the single hop storage topology.

Figure 11-9 FCoE and Fibre Channel Port Types in Single Hop Storage Topology

FCoE ENodes

FCoE ENodes are the combination of FCoE Logical Endpoints (FCoE_LEP) and the Fibre Channel protocol stack on the Converged Network Adapter. You can think of them as equivalent to Host Bus Adapters (HBA) in the traditional Fibre Channel world. FCoE ENodes could be either initiators in the form of servers or targets in the form of storage arrays. Each one presents a VN_Port port type connected to the VF_Port port type on the first hop upstream FCoE switch, such as the Cisco Nexus 5000 Series switch. These first hop switches can operate in either full Fibre Channel switching mode or the FCoE-NPV mode (more on that in Chapter 12). This compares closely with the traditional Fibre Channel environment, where initiators and targets present the N_Port port type connected to the F_Port port type on the first-hop upstream Fibre Channel switch, such as the Cisco MDS 9000 Series switch.

An FCoE ENode creates at least one VN_Port toward the first-hop FCoE access switch. Each VN_Port has a unique ENode MAC address, which is dynamically assigned to it by the FCoE Fibre Channel Forwarder on the upstream FCoE switch during the fabric login process. An ENode also has a globally unique burnt-in MAC address, which is assigned to it by the CNA vendor. Remember, although Ethernet is used as a transport for the Fibre Channel traffic, it does not change the Fibre Channel characteristics, including Fibre Channel addressing.

The dynamically assigned MAC address is also known as the fabric-provided MAC address (FPMA). FPMA is 48 bits in length and consists of two concatenated 24-bit fields. The first field is called the FCoE MAC address prefix, or FC-MAP, and it is defined by the FC-BB-5 standard as a range of 256 prefixes. The second field is called FC_ID, and it is basically a Fibre Channel ID assigned to the CNA during the fabric login process. Figure 11-10 depicts the structure of the FPMA MAC address.

Figure 11-10 Fabric-Provided MAC Address
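To make the concatenation concrete, consider a worked example with illustrative values that are not from the text: with the default FC-MAP of 0E-FC-00 and a hypothetical FC_ID of 01.0A.00 assigned at fabric login, the resulting FPMA is 0E:FC:00:01:0A:00. The upper 24 bits carry the fabric's FC-MAP prefix, and the lower 24 bits are the endpoint's Fibre Channel ID.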

Cisco has established simple best practices that make manipulation of FC-MAP values beyond the default value of 0E-FC-00 unnecessary. For example, if two Fibre Channel fabrics are mapped to the same Ethernet FCoE VLAN, a challenging situation can occur if each fabric allocates the same FC_ID to its respective FCoE ENodes. With only the default FC-MAP, FCoE ENodes across the two fabrics will end up with identical FPMA values. In essence, it's like having two servers with the same MAC address in the same VLAN. To resolve this situation, you can leverage two different FC-MAP values to maintain FPMA uniqueness. However, Cisco strongly recommends not mapping multiple Fibre Channel fabrics into the same Ethernet FCoE VLAN. This will ensure the uniqueness of an FPMA address without the need for multiple FC-MAP values. The 256 FC-MAP values are still available in case of a nonunique FC_ID address.

Fibre Channel Forwarder

Simply put, the FCoE Fibre Channel Forwarder (FCF) is a logical Fibre Channel switch entity residing in the FCoE-capable Ethernet switch. The FCoE Fibre Channel Forwarder is responsible for the host fabric login, as well as the encapsulation and deencapsulation of the FCoE traffic. One or more FCoE_LEPs are used to attach the FCoE Fibre Channel Forwarder internally to the Ethernet switch. Figure 11-11 depicts the relationship between the FCoE Fibre Channel Forwarder, the FCoE_LEP, and the Ethernet switch.

Figure 11-11 Fibre Channel Forwarder, FCoE_LEP, and Ethernet Switch

FCoE Encapsulation

As fundamentally Ethernet traffic, FCoE packets must be exchanged in the context of a VLAN. When Fibre Channel frames are encapsulated and transported over the Ethernet network in a converged fashion, FCoE-capable Ethernet switches must have the proper intelligence to enforce Fibre Channel protocol characteristics along the forwarding path. Just as virtual LANs create logical separation for the network traffic, so do virtual SANs by creating logical separation for the storage traffic.

Cisco Nexus 5000 Series switches enable mapping between FCoE VLANs and Fibre Channel VSANs, where all FCoE packets tagged with a specific VLAN ID are assumed to belong to the corresponding Fibre Channel VSAN. Both ends of the FCoE link must agree on the same FCoE VLAN ID, and incorrectly tagged frames are simply discarded. The Cisco Nexus 5000 Series switches implementation also expects all VLANs carrying FCoE traffic to be exclusively used for this purpose; no other types of traffic, such as IP network traffic, should be carried over these VLANs. Figure 11-12 depicts the FCoE frame format.

Figure 11-12 FCoE Frame Format

Let's take a closer look at some of the frame header fields.

Source and Destination MAC Address fields define Layer 2 connectivity addressing for FCoE frames. Each field is 48 bits in length.

The IEEE 802.1Q tag serves the same purpose as in a traditional Ethernet network; that is, it enables creation of multiple virtual network topologies across a single physical network infrastructure. The IEEE 802.1Q tag is 4 bytes in length. IEEE 802.1Q tags are used to segregate between network and FCoE VLANs on the wire as FCoE frames traverse the Ethernet links. Inside the IEEE 802.1Q tag, the priority field is used to indicate the Class of Service marking to provide an adequate Quality of Service.

FCoE leverages the EtherType value of 0x8906, specially assigned by IEEE for the protocol to indicate Fibre Channel payload.

Start of Frame (SOF) and End of Frame (EOF) fields indicate the start and end of the encapsulated Fibre Channel frame. Each field is 8 bits in length.

Reserved fields have been inserted to allow future protocol extensibility. The IEEE 802.3 standard defines the minimum-length Ethernet frame as 64 bytes (including Ethernet headers). If the Fibre Channel payload and all the encapsulating headers result in an Ethernet frame size smaller than 64 bytes, reserved fields can also be leveraged for padding.
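Returning to the VLAN-to-VSAN mapping described above, here is a minimal configuration sketch, with hypothetical VLAN and VSAN numbers, of dedicating an FCoE VLAN and mapping it to a VSAN on a Nexus 5000 Series switch:

nexus5000(config)# feature fcoe
nexus5000(config)# vsan database
nexus5000(config-vsan-db)# vsan 100
nexus5000(config)# vlan 100
nexus5000(config-vlan)# fcoe vsan 100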

The encapsulated Fibre Channel frame in its entirety, including the Fibre Channel cyclic redundancy check (CRC), is included in the payload.

Ethernet Frame Check Sequence (FCS) represents the last 4 bytes of the Ethernet frame. This is a standard FCS field in any Ethernet frame.

Figure 11-13 depicts the overall length of the FCoE frame.

Figure 11-13 FCoE Frame Length

As you can see, the total size of an Ethernet frame carrying a Fibre Channel payload can be up to 2180 bytes. Because of this, Jumbo Frame support must be enabled to allow successful transmission of all FCoE frame sizes. Cisco Nexus 5000 Series switches define by default a Maximum Transmission Unit (MTU) of 2240 bytes for FCoE traffic in class-fcoe. Non-FCoE traffic is set up by default for the Ethernet standard 1500-byte MTU (excluding Ethernet headers). These MTU settings are observed for FCoE traffic arriving directly on the physical switch interfaces or through the host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extenders attached to the parent Cisco Nexus 5000 Series switches.

On the Cisco Nexus 5010 and 5020 switches, the class-fcoe default system class is automatically created at system startup and cannot be deleted. Before enabling FCoE on the Cisco Nexus 5500 Series switches, you must manually create the class-fcoe default system class. After FCoE features are enabled, FCoE traffic is assigned to the class based on the FCoE EtherType value (0x8906). This class provides no-drop service to facilitate guaranteed delivery of FCoE frames end-to-end. Example 11-1 depicts the MTU assigned to FCoE traffic under class class-fcoe.

Example 11-1 Displaying FCoE MTU

Nexus5000# show policy-map

  Type qos policy-maps
  ====================

  policy-map type qos default-in-policy
    class type qos class-fcoe
      set qos-group 1

  Type network-qos policy-maps
  ===============================

  policy-map type network-qos default-nq-policy
    class type network-qos class-fcoe
      pause no-drop
      mtu 2240
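On the Nexus 5500 platform, one common way to create the class-fcoe system class is to apply the predefined FCoE default policies under system qos, sketched below; the policy names mirror the NX-OS defaults but should be verified for the release in use:

nexus5500(config)# system qos
nexus5500(config-sys-qos)# service-policy type qos input fcoe-default-in-policy
nexus5500(config-sys-qos)# service-policy type queuing input fcoe-default-in-policy
nexus5500(config-sys-qos)# service-policy type queuing output fcoe-default-out-policy
nexus5500(config-sys-qos)# service-policy type network-qos fcoe-default-nq-policy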

FCoE Initialization Protocol

FCoE Initialization Protocol (FIP) is the FCoE control plane protocol responsible for establishing and maintaining Fibre Channel virtual links between pairs of adjacent FCoE devices (FCoE ENodes or FCoE Fibre Channel Forwarders), potentially across multiple Ethernet segments. Establishment of the virtual link carries three main steps: discovering the FCoE VLAN, discovering the MAC address of an upstream FCoE Fibre Channel Forwarder, and ultimately performing the fabric login/discovery operation (FLOGI or FDISC). After the virtual link has been established, an exchange of Fibre Channel payload encapsulated with the Ethernet headers can begin. FIP continues operating in the background performing virtual link maintenance functions; specifically, it continuously verifies reachability between the two virtual Fibre Channel (vFC) interfaces on the Ethernet network and offers primitives to delete the virtual link in response to administrative actions to that effect. FIP also differentiates itself from the FCoE protocol messages by using EtherType value 0x8914 in the encapsulating Ethernet header fields. Figure 11-14 depicts the phases of FIP operation.

Figure 11-14 FCoE Initialization Protocol Operation

FIP VLAN discovery relies on the use of the native VLAN leveraged by the initiator (for example, server) or the target (for example, storage array). The FCoE ENode discovers the VLAN to which it needs to send the FCoE traffic by sending an FIP VLAN discovery request to the All-FCF-MAC multicast MAC address.
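For context, the switch side of the virtual link that FIP establishes is a virtual Fibre Channel interface bound to a physical Ethernet port. The sketch below uses hypothetical interface and VSAN numbers, and a successful FLOGI can afterward be confirmed with the show fcoe database command:

nexus5000(config)# interface vfc 1
nexus5000(config-if)# bind interface ethernet 1/1
nexus5000(config-if)# no shutdown
nexus5000(config)# vsan database
nexus5000(config-vsan-db)# vsan 100 interface vfc 1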

All FCoE Fibre Channel Forwarders listen to the All-FCF-MAC MAC address on the native VLAN. Cisco Nexus 5000 Series switches support FIP VLAN discovery functionality and will respond to the querying FCoE ENode. The FCoE ENode can also be manually configured for the FCoE VLAN it should use, in which case FIP VLAN discovery functionality is not invoked. VLAN ID 1002 is commonly assumed in such a case.

Note
FIP VLAN discovery is not a mandatory component of the FC-BB-5 standard.

After the FCoE VLAN has been discovered, FIP performs the function of discovering the upstream FCoE Fibre Channel Forwarder. FCoE Fibre Channel Forwarder discovery messages are sent to the All-ENode-MAC multicast MAC address over the FCoE VLAN discovered earlier by the FIP protocol. Discovery messages are also used by the FCoE Fibre Channel Forwarder to inform any potential FCoE ENode attached through the FCoE VLAN that its VF_Ports are available for virtual link establishment with the ENode's VN_Ports. These messages are sent out periodically, which could delay FCoE Fibre Channel Forwarder discovery for new FCoE ENodes joining the fabric during the send interval. The FC-BB-5 standard therefore enables FCoE ENodes to send out advertisements to the All-FCF-MAC multicast MAC address to solicit a unicast discovery advertisement back from the FCoE Fibre Channel Forwarder to the requesting ENode.

Following the discovery of the FCoE VLAN and the FCoE Fibre Channel Forwarder, FIP performs fabric login or discovery by sending FLOGI or FDISC unicast frames from VN_Port to VF_Port. This is the phase when the FCoE ENode MAC address used for FCoE traffic is assigned and virtual link establishment is complete.

Converged Network Adapter

Converged Network Adapter (CNA) is in essence an FCoE ENode, which contains the functions of a traditional Host Bus Adapter (HBA) and a network interface card (NIC) all in one. CNA is usually a PCI Express (PCIe) type adapter. Converged Network Adapters are also available in the mezzanine form factor for use in server blade systems. The converged nature of CNAs requires them to operate at 10 Gb speeds or higher. From the operating system perspective, CNAs are represented as two separate devices; just like in the case of physically segregated adapters, they still require two sets of drivers, one for Ethernet and the other for Fibre Channel. Figure 11-15 depicts the fundamental principle behind leveraging a separate set of operating system drivers when converging NIC and HBA with a Converged Network Adapter.

Figure 11-15 Separate Ethernet and Fibre Channel Drivers for NICs, HBAs, and CNAs

With CNAs, both network and storage traffic is sent over a single unified wire connection from the server to the upstream first hop FCoE access layer switch, such as the Cisco Nexus 5000 Series switch.

FCoE Speed

Rapid progress in the speed increase of Ethernet networks, versus a slower pace of speed increase in the traditional Fibre Channel networks, makes FCoE a compelling technology for high throughput storage-area networks. During the time of writing this book, rarely do Fibre Channel connections from the endpoints to the first hop FCoE switches exceed the speed of 8 Gb. At the same time, server CNAs and FCoE storage arrays offer at least 10 Gb FCoE connectivity speeds. Table 11-2 depicts the throughputs and encoding for the 8 Gb Fibre Channel and 10 Gb FCoE traffic.

Table 11-2 8 Gb Fibre Channel and 10 Gb FCoE

The difference might not seem significant; however, despite appearing to be only 2 Gb faster, due to different encoding schemes, 10 Gb FCoE achieves roughly 50% more throughput than 8 Gb Fibre Channel. When comparing higher speeds, we discover even greater differences. Although 16 Gb Fibre Channel is only 33% faster than 10 Gb FCoE, 40 Gb FCoE is four times faster than 10 Gb FCoE! Table 11-3 depicts the throughputs and encoding for the 16 Gb Fibre Channel and 40 Gb FCoE traffic.

Table 11-3 16 Gb Fibre Channel and 40 Gb FCoE
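The encoding arithmetic behind these comparisons can be checked directly; the line rates below are the commonly published figures rather than values taken from the tables in this book. 8 Gb Fibre Channel uses 8b/10b encoding at a line rate of 8.5 GBaud, so its usable throughput is about 8.5 x 8/10 = 6.8 Gbps. 10 Gb FCoE uses the more efficient 64b/66b encoding at 10.3125 GBaud, yielding 10.3125 x 64/66 = 10 Gbps, which is about 47 percent, or roughly 50 percent, more than 6.8 Gbps. Similarly, 16 Gb Fibre Channel (64b/66b at 14.025 GBaud) delivers about 13.6 Gbps, while 40 Gb FCoE delivers 40 Gbps, four times the 10 Gb FCoE figure.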

Speed is definitely not the only factor in storage-area networks; however, given the similar characteristics between Fibre Channel and FCoE, you can clearly see the very distinct throughput advantages in favor of FCoE.

FCoE Cabling Options for the Cisco Nexus 5000 Series Switches
This section reviews the options for physically connecting Converged Network Adapters (CNA) to Cisco Nexus 5000 Series switches, and for connecting the switches to upstream LAN and Fibre Channel SAN environments.

Optical Cabling
An optical fiber is a flexible filament of very clear glass capable of carrying information in the form of light. Optical fibers are hair-thin structures created by forming pre-forms, which are glass rods drawn into fine threads of glass protected by a plastic coating. Fiber manufacturers use various vapor deposition processes to make the pre-forms. The fibers drawn from these pre-forms are then typically packaged into cable configurations, which are then placed into an operating environment for decades of reliable performance. Figure 11-16 depicts the fiber optic core and coating.

Figure 11-16 Fiber Optic Core and Coating

Multimode fiber has a much larger core diameter than single-mode fiber, allowing for a larger number of modes. Multimode refers to the fact that numerous modes, or light rays, are simultaneously carried through the fiber. Such modes result from the fact that light will propagate in the fiber core only at discrete angles within the cone of acceptance. Current multimode cabling standards include 50 microns and 62.5 microns core diameter. Step-Index and Graded Index offer two different fiber designs, with Graded Index being more popular because it offers hundreds of times more bandwidth than Step-Index fiber does. Figure 11-17 depicts the Step-Index multimode fiber design.

Figure 11-17 Step-Index Multimode Fiber Design

Figure 11-18 depicts the Graded Index multimode fiber design.

Figure 11-18 Graded Index Multimode Fiber Design

Single-mode (or monomode) fiber enjoys lower fiber attenuation than multimode fiber and retains better fidelity of each light pulse because it exhibits no dispersion caused by multiple modes. Consequently, information can be transmitted over longer distances. Current single-mode cabling standards include 9 microns core diameter. Figure 11-19 depicts a single-mode fiber design.

Figure 11-19 Single-Mode Fiber

There are many fiber optic connector types. LC is the most popular one for both multimode and single-mode fiber optic connections. Figure 11-20 depicts a fiber optic SFP module LC connector.

Figure 11-20 Fiber Optic SFP Module LC Connector

Transceiver Modules
Cisco Transceiver Modules support Ethernet, Sonet/SDH, and Fibre Channel applications across all Cisco switching and routing platforms. Let's now review various transceiver module options available at the time of writing this book for deploying 1 Gigabit Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet connectivity.

SFP Transceiver Modules
A Small Form-Factor Pluggable (SFP) is a compact, hot swappable, input/output device that is used to link the ports on different network devices together. SFP is responsible for converting between the electrical and optical signals, while allowing both receive and transmit operations to take place. SFP is also sometimes referred to as mini Gigabit Interface Converter (GBIC) because both provide similar functionality. Due to its small form factor, SFP type transceivers can be leveraged in a variety of high-speed Ethernet and Fibre Channel deployments. Following are the different types of SFP transceivers existing at the time of writing this book:

1000BASE-T SFP 1 Gigabit Ethernet copper transceiver for copper networks operates on standard Category 5 unshielded twisted-pair copper cabling for links of up to 100 meters (~330 feet) in length. These modules support 10 Megabit, 100 Megabit, and 1000 Megabit (1 Gigabit) auto-negotiation and auto-MDI/MDIX. MDI is Medium Dependent Interface; MDI-X is Medium Dependent Interface Crossover.

1000BASE-SX SFP 1 Gigabit Ethernet fiber transceiver is used only for multimode fiber networks. It is compatible with the IEEE 802.3z 1000BASE-SX standard, and it operates over either 50 microns multimode fiber links for a distance of up to 550 meters (~1800 feet) or over 62.5 microns Fiber Distributed Data Interface (FDDI)-grade multimode fiber for a distance of up to 220 meters (~720 feet). It can also support distances of up to 1 kilometer (~0.62 miles) over laser-optimized 50 microns multimode fiber cable.

1000BASE-LX/LH SFP 1 Gigabit Ethernet long-range fiber transceiver is compatible with the IEEE 802.3z 1000BASE-LX standard and operates over a standard single-mode fiber-optic link spanning distances of up to 10 kilometers (~6.2 miles), and up to 550 meters (~1800 feet) over any multimode fiber. 1000BASE-LX/LH transceivers transmitting in the 1300 nanometers wavelength over multimode FDDI-grade OM1/OM2 fiber require mode conditioning patch cables.

1000BASE-EX SFP 1 Gigabit Ethernet extended-range fiber transceiver operates over a standard single-mode fiber-optic link spanning long-reaching distances of up to 40 kilometers (~25 miles). A 5dB inline optical attenuator should be inserted between the fiber-optic cable and the receiving port on the SFP at each end of the link for back-to-back connectivity.

1000BASE-ZX SFP 1 Gigabit Ethernet extra long-range fiber transceiver operates over a standard single-mode fiber-optic link spanning extra long-reaching distances of up to approximately 70 kilometers (~43.5 miles). This SFP provides an optical link budget of 21 dB, but the precise link span length depends on multiple factors, such as fiber quality, number of splices, and connectors.

1000BASE-BX-D and 1000BASE-BX-U 1 Gigabit Ethernet transceivers, compatible with the IEEE 802.3ah 1000BASE-BX10-D and 1000BASE-BX10-U standards, operate over a single strand of standard single-mode fiber for a transmission range of up to 10 kilometers (~6.2 miles). 1000BASE-BX10-D and 1000BASE-BX10-U transceivers represent two sides of the same fiber link. The communication over a single strand of fiber is achieved by separating the transmission wavelength of the two devices: 1000BASE-BX10-D transmits a 1490 nanometer wavelength and receives a 1310 nanometer signal, whereas 1000BASE-BX10-U transmits at a 1310 nanometer wavelength and receives a 1490 nanometer signal. Note the presence of a wavelength-division multiplexing (WDM) splitter integrated into the SFP to split the 1310 nanometer and 1490 nanometer light paths. Figure 11-21 depicts the principle of wavelength multiplexing to achieve transmission over a single strand of fiber.

Figure 11-21 Bidirectional Transmission over a Single Strand, Single-Mode Fiber

SFP+ Transceiver Modules
The SFP+ 10 Gigabit Ethernet transceiver module is a bidirectional device with a transmitter and receiver in the same physical package. This module has a 20-pin connector on the electrical interface and a duplex Lucent Connector (LC) on the optical interface. The following options exist for 10 Gigabit Ethernet SFP+ transceivers during the time of writing this book:

SFP-10G-SR SFP 10 Gigabit Ethernet short-range transceiver supports a link length of 26 meters (~85 feet) over standard FDDI-grade multimode fiber with a core size of 62.5 microns. Connectivity up to 300 meters (~1000 feet) is possible when leveraging 2,000MHz/km OM3 multimode fiber with a core size of 50 microns. Connectivity up to 400 meters (~0.25 miles) is possible when leveraging 4,700MHz/km OM4 multimode fiber with a core size of 50 microns. It operates on 850 nanometer wavelength, leveraging an LC type connector. Both Cisco Nexus 5000 Series switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-SR SFP.

SFP-10G-LRM SFP 10 Gigabit Ethernet short-range transceiver supports link lengths of 220 meters (~720 feet) over standard FDDI-grade multimode fiber with a core size of 62.5 microns. To ensure that specifications are met over FDDI-grade OM1 and OM2 fibers, the transmitter should be coupled through a mode conditioning patch cord. No mode conditioning patch cord is required for applications over OM3 or OM4. The SFP-10G-LRM module also supports link lengths of 300 meters (~1000 feet) over standard single-mode fiber (G.652). It operates on 1310 nanometer wavelength, leveraging an LC type connector. At the time of writing this book, neither Cisco Nexus 5000 Series switches nor Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-LRM optics.

SFP-10G-LR SFP 10 Gigabit Ethernet long-range transceiver supports a link length of 10 kilometers (~6.2 miles) over standard G.652 single-mode fiber. It operates on 1310 nanometer wavelength, leveraging an LC type connector. Both Cisco Nexus 5000 Series switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-LR SFP.

SFP-10G-ER SFP 10 Gigabit Ethernet extended range transceiver supports a link length of up to 40 kilometers (~25 miles) over standard G.652 single-mode fiber. It operates on 1550 nanometer wavelength, leveraging an LC type connector. Cisco Nexus 5500 switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-ER SFP; Cisco Nexus 5000 switches do not support SFP-10G-ER SFP.

SFP-10G-ZR 10 Gigabit Ethernet very long-range transceiver supports link lengths of up to about 80 kilometers (~50 miles) over standard G.652 single-mode fiber. It operates on 1550 nanometer wavelength. A 5dB 1550 nanometer fixed loss attenuator is required for distances greater than 20 kilometers (~12.5 miles). At the time of writing this book, the SFP-10G-ZR SFP transceiver is supported on neither Cisco Nexus 5000 Series switches nor Cisco Nexus 2000 Series FCoE Fabric Extenders.

FET-10G Fabric Extender Transceiver (FET) 10 Gigabit Ethernet cost-effective short-range transceiver supports link lengths of 25 meters (~80 feet) over OM2 multimode fiber and link lengths of up to 100 meters (~330 feet) over laser-optimized OM3 or OM4 multimode fiber. It operates over a core size of 50 microns and at 850 nanometers wavelength. This interface is not specified as part of the 10 Gigabit Ethernet standard and is instead built according to Cisco specifications. FET-10G is supported only on fabric uplinks between the Cisco Nexus 2000 Series FCoE Fabric Extender and a Cisco parent switch, such as Cisco Nexus 5000 Series switches.

SFP+ Copper Twinax direct-attach 10 Gigabit Ethernet pre-terminated cables are suitable for very short distances and offer a cost-effective way to connect within racks and across adjacent racks. Connectivity is typically between the server and a Top-of-Rack switch, or between a server and a Cisco Nexus 2000 Series FCoE Fabric Extender. These cables can also be used for fabric uplinks between the Cisco Nexus 2000 Series FCoE Fabric Extender and a Cisco parent switch.

QSFP Transceiver Modules
The Cisco 40GBASE QSFP (Quad Small Form-Factor Pluggable) hot-swappable input/output device portfolio (see Figure 11-22) offers a wide variety of high-density and low-power 40 Gigabit Ethernet connectivity options for the data center.

Figure 11-22 Cisco 40GBASE QSFP Transceiver

It offers flexibility of interface choice for different distance requirements and fiber types. The following options exist for 40 Gigabit Ethernet QSFP transceivers during the time of writing this book:

QSFP 40Gbps Bidirectional (BiDi) 40 Gigabit Ethernet short-range cost-effective transceiver is a pluggable optical transceiver with a duplex LC connector interface over 50 micron core size multimode fiber. Each QSFP 40 Gigabit Ethernet BiDi transceiver consists of two 20 Gigabit Ethernet transmit and receive channels, enabling an aggregated 40 Gigabit Ethernet link over a two-strand multimode fiber connection. This enables reuse of existing 10 Gigabit Ethernet duplex multimode cabling infrastructure for migration to 40 Gigabit Ethernet connectivity, which makes BiDi transceivers a compelling solution. It supports link lengths of 100 meters (~330 feet) and 150 meters (~500 feet) over laser-optimized OM3 and OM4 multimode fibers, respectively. The BiDi transceiver complies with the QSFP MSA specification, enabling its use on all QSFP 40 Gigabit Ethernet platforms, and its high-speed electrical interface is compliant with the IEEE 802.3ba standard. Figure 11-23 depicts the use of wavelength multiplexing technology allowing 40 Gigabit Ethernet connectivity across just a pair of multimode fiber strands.

Figure 11-23 Cisco QSFP BiDi Wavelength Multiplexing

QSFP-40G-SR4 QSFP 40 Gigabit Ethernet short-range transceiver operates over 850 nanometer and 50 micron core size multimode fiber. It supports link lengths of 100 meters (~330 feet) and 150 meters (~500 feet) over laser-optimized OM3 and OM4 multimode fibers, respectively. It primarily enables high-bandwidth 40 Gigabit Ethernet optical links over 12-fiber parallel fiber terminated with MPO/MTP multifiber connectors, while being interoperable with other IEEE-compliant 40GBASE interfaces. It can also be used in a 4×10 Gigabit Ethernet mode for interoperability with 10GBASE-SR interfaces up to 100 meters (~330 feet) and 150 meters (~500 feet) over OM3 and OM4 fibers, respectively. The 4×10 Gigabit Ethernet connectivity is achieved using an external 12-fiber parallel to 2-fiber duplex breakout cable, which connects the 40GBASE-SR4 module to four 10GBASE-SR

optical interfaces.

QSFP-40G-CSR4 QSFP 40 Gigabit Ethernet short-range transceiver operates over 850 nanometer and 50 micron core size multimode fiber. It extends the reach of the IEEE 40GBASE-SR4 interface to 300 meters (~1000 feet) and 400 meters (~0.25 miles) over laser-optimized OM3 and OM4 multimode parallel fiber, respectively. Each 10 Gigabit lane of this module is compliant with IEEE 10GBASE-SR specifications. This module can be used for native 40 Gigabit Ethernet optical links over 12-fiber parallel cables with MPO/MTP connectors, or in a 4×10 Gigabit Ethernet mode with parallel to duplex fiber breakout cables for connectivity to four 10GBASE-SR interfaces.

40GBASE-CR4 QSFP to QSFP direct-attach 40 Gigabit Ethernet short-range pre-terminated cables offer a highly cost-effective way to establish a 40 Gigabit Ethernet link between QSFP ports of Cisco switches within the rack and across adjacent racks. Cisco offers both active and passive copper cables. Figure 11-24 depicts a short range pre-terminated 40 Gigabit Ethernet QSFP to QSFP copper cable.

Figure 11-24 Cisco 40GBASE-CR4 QSFP Direct-Attach Copper Cables

Active optical cables (AOC) are much thinner and lighter than copper cables, which makes cabling easier; this is critical in high-density racks. AOCs enable efficient system airflow and have no EMI issues. Figure 11-25 depicts a short range pre-terminated 40Gb QSFP to QSFP active optical cable.

Figure 11-25 Cisco 40GBASE-CR4 QSFP Active Optical Cables

QSFP-40G-LR4 QSFP 40 Gigabit Ethernet long-range transceiver module supports link lengths of up to 10 kilometers (~6.2 miles) over a standard pair of G.652 single-mode fiber with duplex LC connectors. The 40 Gigabit Ethernet signal is carried over four wavelengths, and multiplexing and de-multiplexing of the four wavelengths are managed within the device. Both QSFP-40G-LR4 and QSFP-40GE-LR4 operate over 1310 nanometer single-mode fiber.

QSFP-40GE-LR4 QSFP 40 Gigabit Ethernet long-range transceiver supports 40 Gigabit

Ethernet rate only, whereas the QSFP-40G-LR4 supports the OTU3 data rate in addition to the 40 Gigabit Ethernet rate.

WSP-Q40GLR4L QSFP 40 Gigabit Ethernet long-range transceiver supports link lengths of up to 2 kilometers (~1.25 miles) over a standard pair of G.652 single-mode fiber with duplex LC connectors. It is interoperable with QSFP-40G-LR4 for distances up to 2 kilometers (~1.25 miles). The 40 Gigabit Ethernet signal is carried over four wavelengths, and multiplexing and de-multiplexing of the four wavelengths are managed within the device.

FET-40G QSFP 40 Gigabit Ethernet cost-effective short-range transceiver connects links between the Cisco Nexus 2000 Series FCoE Fabric Extender and the Cisco parent switch, such as Cisco Nexus 5000 Series switches. The interconnect works over parallel 850 nanometer, 50 micron core size laser-optimized OM3 and OM4 multimode fiber across distances of up to 100 meters (~330 feet) and 150 meters (~500 feet), respectively. This module can be used for native 40 Gigabit Ethernet optical links over 12-fiber ribbon cables with MPO/MTP connectors, or in 4×10Gb mode with parallel to duplex fiber breakout cables for connectivity to four FET-10G interfaces.

QSFP to four SFP+ direct-attach 40 Gigabit Ethernet pre-terminated breakout cables are suitable for very short distances and offer a highly cost-effective way to connect within the rack and across adjacent racks. Such cables can be active copper, passive copper, or active optical cables. These breakout cables connect to a 40 Gigabit Ethernet QSFP port of a Cisco switch on one end and to four 10 Gigabit Ethernet SFP+ ports of a Cisco switch on the other end. Figure 11-26 depicts the 40 Gigabit Ethernet copper breakout cable for connecting four 10 Gigabit Ethernet capable nodes (servers) into a single 40 Gigabit Ethernet QSFP switch interface.

Figure 11-26 Cisco 40Gb QSFP to Four SFP+ Copper Breakout Cable

Active optical breakout cables are much thinner and lighter than copper breakout cables, which makes cabling easier; this is critical in high-density racks. They enable efficient system airflow and have no EMI issues. These active optical breakout cables connect to a 40 Gigabit Ethernet QSFP port of a Cisco switch on one end and to four 10 Gigabit Ethernet SFP+ ports of a Cisco switch on the other end. Figure 11-27 depicts the 40 Gigabit Ethernet active optical breakout cable for connecting four 10 Gigabit Ethernet capable nodes (servers) into a single 40 Gigabit Ethernet QSFP switch interface.

Figure 11-27 Cisco 40Gb QSFP to Four SFP+ Active Optical Breakout Cable
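Whichever module or cable type is chosen, it is worth verifying what the switch actually detected after insertion. As a hedged illustration (the interface number is an assumption, and the output is abridged), the NX-OS show interface transceiver command reports the detected module type:

nexus5000# show interface ethernet 1/1 transceiver
Ethernet1/1
    transceiver is present
    type is 10Gbase-SR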

Unsupported Transceiver Modules
Cisco highly discourages the use of non-officially certified and approved transceivers inside Cisco network devices. When the switch detects a non-Cisco certified transceiver, the following warning message will be displayed on the terminal:

Caution
When Cisco determines that a fault or defect can be traced to the use of third-party transceivers installed by a customer or reseller, then, at Cisco's discretion, Cisco may withhold support under warranty or a Cisco support program. In the course of providing support for a Cisco networking product, Cisco may require that the end user install Cisco transceivers if Cisco determines that removing third-party parts will assist Cisco in diagnosing the cause of a support issue. When a product fault or defect occurs in the network, and Cisco concludes that the fault or defect is not attributable to the use of third-party transceivers (or other non-Cisco components) installed inside Cisco network devices, Cisco will continue to provide support for the affected product under warranty or covered by a Cisco support program, such as SMARTnet service. If a product fault or defect is detected and Cisco believes the fault or defect can be traced to the use of third-party transceivers (or other non-Cisco components), then, at Cisco's discretion, Cisco may withhold support under warranty or a Cisco support program.

Delivering FCoE Using Cisco Fabric Extender Architecture
Cisco Fabric Extender (FEX) technology delivers an extensible and scalable fabric solution, which complements the Cisco Unified Fabric strategy. It is based on the IEEE 802.1BR (Bridge Port Extension) standard and enables the following main benefits:

Common, scalable, and adaptive server and server-adapter agnostic architecture across data center racks and points of delivery (PoD)
Simplified operations with investment protection by delivering new features and functionality on the parent switches with automatic inheritance for physical and virtual server-attached infrastructure
Single point of management and policy enforcement across thousands of network access ports
Scalable 1 Gigabit Ethernet and 10 Gigabit Ethernet server access, with no reliance on

Spanning Tree Protocol for logical loop-free topology
Low and predictable latency at scale for any-to-any traffic patterns
Extended and expanded switching fabric access layer all the way to the server hypervisor for virtualized servers, or to the server converged network interface card in the case of nonvirtualized servers
Reduced power and cooling needs
Overall reduced operating expenses and capital expenditures

Cisco Fabric Extender technology is delivered in the four following architectural choices depicted in Figure 11-28:

1. Cisco Nexus 2000 Series Fabric Extenders (FEX): Extends the fabric to the top of rack
2. Cisco Adapter Fabric Extender (Adapter-FEX): Extends the fabric into the server
3. Cisco Virtual Machine Fabric Extender (VM-FEX): Extends the fabric into the hypervisor for virtual machines
4. Cisco UCS 2000 Series Fabric Extender (I/O module): Extends the fabric into the Cisco UCS Blade Chassis

Figure 11-28 Cisco Fabric Extender Architectural Choices

Cisco Fabric Extender architectural choices are not mutually exclusive, and it is possible to have cascading Fabric Extender topologies deployed. For example, leverage Adapter-FEX on the server to

connect to a Cisco Nexus 2000 Series FCoE Fabric Extender. More on that later in this chapter.

FCoE and Cisco Nexus 2000 Series Fabric Extenders
Extending FCoE to the top of rack by leveraging Cisco Nexus 2000 Series Fabric Extenders creates a scale-out Unified Fabric solution. At the time of writing this book, the following Cisco Nexus 2000 Series Fabric Extender models support FCoE: 2232PP, 2232TM, 2232TM-E, 2248PQ, 2348UPQ, and 2348TQ. Collectively, they are referred to as Cisco Nexus 2000 Series FCoE Fabric Extenders, or FCoE Fabric Extenders for simplicity.

Cisco Nexus 2000 Series FCoE Fabric Extenders operate as remote line cards to the Cisco Nexus 5000 Series parent switch. All operation, administration, and management functions are performed on the parent switch. Physical host interfaces on a Cisco Nexus 2000 Series FCoE Fabric Extender are represented as logical interfaces on the Cisco Nexus 5000 Series parent switch. FCoE Fabric Extenders have no local switching capabilities, so any traffic, even between two ports on the same FCoE Fabric Extender, must be forwarded to the parent switch for forwarding.

Note
To reduce the load on the control plane of the parent device, Cisco NX-OS provides the capability to offload link-level protocol processing to the FCoE Fabric Extender CPU. The following protocols are supported: Link Layer Discovery Protocol (LLDP), Data Center Bridging Exchange (DCBX), Cisco Discovery Protocol (CDP), and Link Aggregation Control Protocol (LACP).

Cisco Nexus 2000 Series FCoE Fabric Extenders support an optimal cabling strategy that simplifies network operations:

Short intra-rack server cable runs: Twinax or fiber cables connect servers to top-of-rack Cisco Nexus 2000 Series FCoE Fabric Extenders.
Longer inter-rack horizontal runs of fiber: Cisco Nexus 2000 Series FCoE Fabric Extenders in each rack are connected to parent switches that are placed at the end or middle of the row.

This model allows simple cabling and easier rollout of racks and PODs. Figure 11-29 depicts the optimal cabling strategy made available by leveraging Cisco Fabric Extender architecture for Top-of-Rack deployment.

Figure 11-29 FCoE Fabric Extenders Cabling Strategy

Each rack could have two FCoE Fabric Extenders to provide in-rack server connectivity redundancy. If the rack server density is low enough, FCoE Fabric Extenders can also be deployed every few racks to accommodate the required switch interface capacity. In that case, server cabling will be extended beyond a single rack (for the racks where FCoE Fabric Extenders are absent), which can influence power consumption and cooling, and as such should be carefully considered.

Two attachment topologies exist when connecting Cisco Nexus 2000 Series FCoE Fabric Extenders to a Cisco Nexus 5000 Series parent switch: straight-through and vPC. The following sections examine each one in the context of FCoE.

FCoE and Straight-Through Fabric Extender Attachment Topology
In the straight-through Fabric Extender attachment topology, each FCoE Fabric Extender is connected to a single Cisco Nexus parent switch. This parent switch exclusively manages the ports on the FCoE Fabric Extender and provides its features. The connection could be in a form of a single link or a port channel, up to the maximum number of uplinks available on the FCoE Fabric Extender. For example, the Cisco Nexus 2232 FCoE Fabric Extender has eight 10 Gigabit Ethernet fabric uplinks, also known as network interfaces (NIF). Figure 11-30 depicts the straight-through FCoE Fabric Extender attachment topology.

Figure 11-30 Straight-Through FCoE Fabric Extender Attachment Topology
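As a minimal sketch of straight-through attachment (the FEX number and uplink ports are illustrative assumptions), the parent switch enables the Fabric Extender feature and associates its fabric uplinks with a FEX identifier:

nexus5000(config)# feature fex
nexus5000(config)# fex 100
nexus5000(config-fex)# description Rack7-FEX100
nexus5000(config)# interface ethernet 2/1-4
nexus5000(config-if-range)# switchport mode fex-fabric
nexus5000(config-if-range)# fex associate 100
nexus5000(config-if-range)# channel-group 100

Bundling the uplinks into a port channel, as on the last line, corresponds to the dynamic pinning mode discussed later in this section; for static pinning, the uplinks remain individual links.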

As long as a single uplink exists between the FCoE Fabric Extender and the parent switch, the FCoE Fabric Extender will keep being registered, and its front panel interfaces, also known as host interfaces (HIF), will keep being up on the parent switch. Example 11-2 shows FCoE Fabric Extender registration information as displayed on the Cisco Nexus 5000 Series parent switch.

Example 11-2 Displaying FCoE Fabric Extender Registration Info on the Parent Switch
Click here to view code image

nexus5000# show fex 100
FEX: 100  Description: FEX0100  state: Online
  FEX version: 6.0(2)N1(1) [Switch version: 6.0(2)N1(1)]
  Extender Serial: FOC1616R00Q
  Extender Model: N2K-C2248PQ-10GE, Part No: 73-14775-02
  Pinning-mode: static    Max-links: 1
  Fabric port for control traffic: Eth2/3
  FCoE Admin: false
  FCoE Oper: true
  FCoE FEX AA Configured: false
  Fabric interface state:
    Po100 - Interface Up. State: Active
    Eth2/1 - Interface Up. State: Active
    Eth2/2 - Interface Up. State: Active
    Eth2/3 - Interface Up. State: Active
    Eth2/4 - Interface Up. State: Active

The number of uplinks also determines the oversubscription levels introduced by the FCoE Fabric Extender into server traffic. For example, if all host interfaces on the Cisco Nexus 2232 FCoE Fabric Extender are connected to 1 Gb servers, Table 11-4 shows the oversubscription levels that each server will experience as its traffic passes through the Fabric Extender.

Table 11-4 Number of Fabric Extender Uplinks and Oversubscription Levels

It is recommended that you provision as many fabric uplinks as possible to limit the potential adverse effects of the oversubscription. For FCoE traffic, higher oversubscription levels will translate into more PAUSE frames and overall slower storage connectivity.

Note
It is recommended to use a power of 2 to determine the amount of FCoE Fabric Extender fabric uplinks. This facilitates better port channel hashing and more even traffic distribution across the FCoE Fabric Extender fabric uplinks.

With the straight-through FCoE Fabric Extender attachment topology, you can choose to leverage either static or dynamic pinning. With static pinning mode, a range of host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extender toward the servers is statically pinned to a given fabric uplink port. In the case of Cisco Nexus 2232 FCoE Fabric Extenders, static-pinning mapping looks like what's shown in Table 11-5.

Table 11-5 Static Pinning Mapping on the Cisco Nexus 2232 FCoE Fabric Extender

It is also possible to display the static pinning mapping by issuing the show interface ethernet X/Y fex-intf command on the Cisco Nexus 5000 Series parent switch command-line interface, where X/Y indicates the interface toward the Cisco Nexus 2000 Series FCoE Fabric Extender, as shown in Example 11-3.

Example 11-3 Display Static Pinning Mapping on the Cisco Nexus 2232 FCoE Fabric Extender
Click here to view code image

nexus5000# show interface ethernet 1/1 fex-intf
Fabric          FEX
Interface       Interfaces
---------------------------------------------------
Eth1/1          Eth100/1/4    Eth100/1/3    Eth100/1/2    Eth100/1/1

nexus5000# show interface ethernet 1/8 fex-intf
Fabric          FEX
Interface       Interfaces
---------------------------------------------------
Eth1/8          Eth100/1/32   Eth100/1/31   Eth100/1/30   Eth100/1/29

The essence of the static pinning operation is that if a Cisco Nexus 2000 Series FCoE Fabric Extender fabric uplink interface toward the Cisco Nexus 5000 Series parent switch fails, the FCoE Fabric Extender will disable the server-facing host interfaces that are pinned to that specific fabric uplink interface. For example, if the second fabric uplink interface on the Cisco Nexus 2232 FCoE Fabric Extender fails, host interfaces 5 to 8 pinned to that fabric uplink interface will be brought down, and the servers connected to these host interfaces will see their Ethernet link go down. If a server is singly connected, it will lose its network connectivity until the FCoE Fabric Extender uplink is restored or until the server is physically replugged into an operational Ethernet port. If the servers are dual-homed to two different Cisco Nexus 2000 Series FCoE Fabric Extenders, or if the servers are dual-homed to different pinning groups on the same Cisco Nexus 2000 Series FCoE Fabric Extender, a failover event will be triggered on the server, in which case the server will fail over to the redundant network connection. Figure 11-31 depicts server connectivity failover in a Cisco Nexus 2232 FCoE Fabric Extender static pinning deployment.

Figure 11-31 FCoE Fabric Extender Static Pinning and Server NIC Failover
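Pinning behavior is driven by the pinning max-links setting on the parent switch. As an illustrative sketch only (the FEX number follows the earlier examples), static pinning across four individual uplinks would be configured as follows:

nexus5000(config)# fex 100
nexus5000(config-fex)# pinning max-links 4

Note that changing max-links redistributes the host-interface-to-uplink pinning and is disruptive to server traffic on that Fabric Extender, so it should be planned into a maintenance window.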

This connection could be in a form of a single link or a port . each FCoE Fabric Extender is connected to both Cisco Nexus parent switches. even in the case of FCoE Fabric Extender uplink interface failure. where each group of four 1 Gb FCoE Fabric Extender host interfaces shares the bandwidth of one 10 GB FCoE Fabric Extender uplink.Static pinning configuration on the Cisco Nexus 2232 FCoE Fabric Extender results in a 4:1 oversubscription ratio. while in case of dynamic pinning. which translates to higher oversubscription levels for the server traffic. a port channeled fabric uplink interface is leveraged to pass the traffic between the Cisco Nexus 2000 Series FCoE Fabric Extender and the Cisco Nexus 5000 Series parent switch. Figure 11-32 depicts dynamic pinning behavior on the Cisco Nexus 2232 FCoE Fabric Extender. server traffic crossing the FCoE Fabric Extender is hashed onto the remaining member interfaces of the uplink port channel. This results in decreased uplink bandwidth. a failure of a single port channel member interface does not bring down any host facing interfaces on the FCoE Fabric Extender. If an individual port channel fabric uplink member interface failure occurs. FCoE and vPC Fabric Extender Attachment Topology In the vPC FCoE Fabric Extender attachment topology. Figure 11-32 FCoE Fabric Extender Dynamic Pinning and Server NIC Failover The most significant difference between static and dynamic pinning modes is that static pinning mode allows maintaining the same oversubscription levels for the traversing server traffic. With dynamic pinning mode. and as such. The possible downside of static pinning is that it relies on healthy server network connectivity failover operation. which should be regularly verified. the oversubscription levels will raise with any failed individual FCoE Fabric Extender uplink interface.

channel, up to the maximum number of uplinks available on the FCoE Fabric Extender. Both parent switches manage the ports on that FCoE Fabric Extender and provide its features. Figure 11-33 depicts the FCoE Fabric Extender attachment topology leveraging vPC with either single or port-channeled fabric uplink interfaces toward each of the Cisco Nexus 5000 Series parent switches.

Figure 11-33 vPC FCoE Fabric Extender Attachment Topology

As in the case of straight-through FCoE Fabric Extender attachment, as long as a single uplink exists between the FCoE Fabric Extender and any one of its two parent switches, the FCoE Fabric Extender will keep being registered, and its front panel interfaces will keep being up on the parent switches. Likewise, the number of uplinks determines the oversubscription levels introduced by the FCoE Fabric Extender for server traffic.

Because in the vPC FCoE Fabric Extender attachment topology both Cisco Nexus 5000 Series parent switches control the FCoE Fabric Extender, they must be placed in a vPC domain. Example 11-4 shows an example of the Cisco Nexus 5000 Series parent switch configuration of the FCoE Fabric Extender uplink port channel for a vPC attached FCoE Fabric Extender.

Example 11-4 Fabric Extender Uplink Port Channel for vPC Attached FCoE Fabric Extender
Click here to view code image

nexus5000-1(config)# interface e1/1-4
nexus5000-1(config-if-range)# channel-group 100
nexus5000-1(config)# interface po100
nexus5000-1(config-if)# vpc 100
nexus5000-1(config-if)# switchport mode fex-fabric
nexus5000-1(config-if)# fex associate 100

Relevant configuration parameters must be identical on both parent switches. Cisco Nexus 5000 Series parent switches automatically check for compatibility of some of these parameters on the vPC

interfaces. The per-interface parameters must be consistent per interface, and the global parameters must be consistent globally:

Port channel mode (on, off, or active)
Link speed per port channel
Duplex mode per port channel
Trunk mode per port channel (native VLAN)
Spanning Tree Protocol mode
Spanning Tree Protocol region configuration for Multiple Spanning Tree (MST) Protocol
Enable or disable state per VLAN
Spanning Tree Protocol global settings
Spanning Tree Protocol interface settings
Quality of service (QoS) configuration and parameters (PFC, DWRR, MTU)

Note
The two parent switches are configured independently; therefore, configuration changes that are made to host interfaces on a vPC attached FCoE Fabric Extender must be manually synchronized between the two parent switches. To simplify the configuration tasks, Cisco Nexus 5000 Series parent switches support a configuration-synchronization feature, which allows the configuration for host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extender to be synchronized between the two parent switches, as sketched next.
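The following is a minimal, illustrative sketch of that configuration-synchronization (switch profile) feature; the profile name, peer address, and interface are assumptions. Commands entered under a switch profile are verified and applied to both vPC peers upon commit:

nexus5500-1# config sync
nexus5500-1(config-sync)# switch-profile fex-hosts
nexus5500-1(config-sync-sp)# sync-peers destination 10.0.0.2
nexus5500-1(config-sync-sp)# interface ethernet 101/1/10
nexus5500-1(config-sync-sp-if)# switchport mode trunk
nexus5500-1(config-sync-sp-if)# exit
nexus5500-1(config-sync-sp)# commit

The same profile must also be created on the peer switch for the commit to verify and apply the configuration on both peers.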

High availability is an essential requirement in most data center designs, whether it is accomplished at the port level, the supervisor level, or even at the physical network level. To achieve connectivity redundancy, FCoE servers can use a port channeled connection toward a pair of upstream Cisco Nexus 2000 Series FCoE Fabric Extenders attached to two different Cisco Nexus parent switches, such as Cisco Nexus 5500 switches, put in a vPC domain. This setup creates a dual-layer vPC between servers, FCoE Fabric Extenders, and Cisco Nexus parent switches. It is referred to as Enhanced vPC.

Note
Cisco Nexus 5000 (5010 and 5020) switches do not support Enhanced vPC.

In this case, even though network traffic is dual homed between the FCoE Fabric Extender and a parent switch pair, the FCoE traffic must be single homed to maintain SAN Side A / Side B isolation. vPC FCoE Fabric Extender attachment topology does not support static pinning, so dynamic pinning is automatically leveraged instead. Figure 11-34 depicts an Enhanced vPC topology where each FCoE Fabric Extender is pinned to only a single Cisco Nexus 5500 parent switch for the FCoE traffic, while network traffic is allowed to use both parent switches.

Figure 11-34 FCoE Traffic Pinning in Enhanced vPC Topology

With Enhanced vPC topology, restricting FCoE traffic to a single Cisco Nexus parent switch is called Fabric Extender pinning. Example 11-5 shows the command-line interface configuration required to pin the FCoE Fabric Extender to only a single Cisco Nexus 5500 parent switch for the FCoE traffic in Enhanced vPC topology.

Example 11-5 FCoE Traffic Pinning in Enhanced vPC Topology Configuration
Click here to view code image

nexus5500-1(config)# fex 101
nexus5500-1(config-fex)# fcoe

nexus5500-2(config)# fex 102
nexus5500-2(config-fex)# fcoe

In this configuration, although FEX 101 and FEX 102 are connected and registered to both N5500-1 and N5500-2, the FCoE traffic from FEX 101 is sent only to N5500-1, and the FCoE traffic from FEX 102 is sent only to N5500-2. This is also true for the return FCoE traffic from the Cisco Nexus 5500 parent switches toward the Cisco Nexus 2000 Series FCoE Fabric Extenders where FCoE servers are connected. As a result, SAN A and SAN B separation is achieved.

In the Enhanced vPC topology, traffic load on the FCoE Fabric Extender uplink ports is not evenly distributed, because FCoE traffic from a Fabric Extender is pinned to only a single Cisco Nexus

Straight-through FCoE Fabric Extender topology also allows easier control of how the bandwidth of FCoE Fabric Extender uplink is shared between the network and FCoE traffic by configuring an ingress and egress queuing policy. Figure 11-35 depicts server connectivity topologies with and without Cisco Nexus 2000 Series FCoE Fabric Extenders supporting Adapter-FEX solution deployment. so network and FCoE traffic each get half of the guaranteed bandwidth in case of congestion. as discussed earlier in this chapter. as well as the server physical adapters. each type of traffic can take all available bandwidth. you can leverage straight-through topology for FCoE deployments. FCoE and Server Fabric Extenders Previous sections describe how Cisco Fabric Extender architecture leveraging Cisco Nexus 2000 Series FCoE Fabric Extenders provides substantial advantages for FCoE deployment. including the ones going to the second Cisco Nexus parent switch. We are now going to further extend those advantages into the servers. Adapter-FEX Cisco Adapter-FEX technology virtualizes PCIe bus.parent switch for SAN side separation. The default QoS template allocates half of the bandwidth to each one. To prevent such an imbalance of traffic distribution across FCoE Fabric Extender uplinks. This results in a reduced number of network and storage switch ports. while the network traffic is hashed across all FCoE Fabric Extender uplinks. however. Figure 11-35 Adapter-FEX Server Topologies The Cisco Adapter-FEX technology addresses these main challenges: Organizations are deploying virtualized workloads to meet the strong need for saving costs and reducing physical equipment. Enough FCoE Fabric Extender uplink bandwidth should be provisioned to prevent the undesirable oversubscription of links that carry both FCoE and network traffic. leveraging Adapter-FEX technology. Virtualization technologies and the increased number of CPU cores. presenting a server operating system with a set of virtualized adapters for network and storage connectivity. require servers to be equipped with a large number of network connections. When there is no congestion. .

FCoE and Server Fabric Extenders
Previous sections describe how Cisco Fabric Extender architecture leveraging Cisco Nexus 2000 Series FCoE Fabric Extenders provides substantial advantages for FCoE deployment. We are now going to further extend those advantages into the servers, leveraging Adapter-FEX technology.

Adapter-FEX
Cisco Adapter-FEX technology virtualizes the PCIe bus, presenting a server operating system with a set of virtualized adapters for network and storage connectivity. This results in a reduced number of network and storage switch ports, as well as server physical adapters. Figure 11-35 depicts server connectivity topologies with and without Cisco Nexus 2000 Series FCoE Fabric Extenders supporting Adapter-FEX solution deployment.

Figure 11-35 Adapter-FEX Server Topologies

The Cisco Adapter-FEX technology addresses these main challenges:

Organizations are deploying virtualized workloads to meet the strong need for saving costs and reducing physical equipment. Virtualization technologies and the increased number of CPU cores require servers to be equipped with a large number of network connections. This increase has a tremendous impact on CapEx and OpEx because of the large number of adapters, cables, and switch ports, which directly affects power, cooling, management, and cable management costs. Organizations also often encounter a dramatic increase in the number of management points, disparate provisioning and operational models, and inconsistency between the physical and virtual data center access layers.

In virtualized environments, network administrators experience lack of visibility into the traffic that is exchanged among virtual machines residing on the same host. Network administrators also face challenges in establishing policies, enforcing policies, and maintaining policy consistency across mobility events.

With a consolidated infrastructure, it is challenging to provide guaranteed bandwidth, latency, QoS, and isolation of traffic across multiple cores and virtual machines. Network administrators struggle to link the virtualized network infrastructure and virtualized server platforms to the physical network infrastructure without degrading performance, and with a consistent feature set and operational model.

The server can either be directly connected to the Cisco Nexus 5500 switch, or it can be connected through the Cisco Nexus 2000 Series FCoE Fabric Extender, which in turn is connected and registered with the Cisco Nexus 5500 parent switch. No licensing requirement exists on the Cisco Nexus 5500 switch to run Adapter-FEX; however, it does require feature installation and activation, as shown in Example 11-6.

Example 11-6 Installing and Activating the Cisco Adapter-FEX Feature on Cisco Nexus 5500 Switches
Click here to view code image

nexus5500(config)# install feature-set virtualization
nexus5500(config)# feature-set virtualization

Note
Cisco Nexus 5000 (5010 and 5020) switches do not support Adapter-FEX.

Virtual Ethernet Interfaces
By virtualizing the server Converged Network Adapter, Adapter-FEX allows instantiating a virtual network interface card (vNIC) server interface mapped to a virtual Ethernet (vEth) interface on the upstream Cisco Nexus 5500 switch. Features available on the Cisco Nexus 5500 switches, such as ACLs, QoS, private VLANs, and so on, can be equally applied on the vEth interfaces representing vNICs.

Before vEth interfaces can be created, the physical Cisco Nexus 5500 switch interface the server is connected to must be configured with VN-Tag mode. The Data Center Bridging Exchange (DCBX) protocol must also run between the Cisco Nexus 5500 switch interface and the connected server. Before VN-Tag mode is activated, the network adapter on the server operates in Classical Ethernet mode, in which case it has full network connectivity via its physical (nonvirtualized) interface to the upstream Cisco Nexus 5500 switch or Cisco Nexus 2000 Series FCoE Fabric Extender. Example 11-7 depicts the command-line interface configuration required to enable VN-Tag mode on the switch interface or the FCoE Fabric Extender host interface.
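As a minimal illustration of such a configuration (the interface number is an assumption), VN-Tag mode is enabled on the server-facing interface as follows:

nexus5500(config)# interface ethernet 1/10
nexus5500(config-if)# switchport mode vntag

For a Fabric Extender attached server, the same command would be applied to the corresponding host interface instead.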

With VN-Tag mode enabled, the server starts to include Network Interface Virtualization (NIV) capability type-length-value (TLV) fields in its Data Center Bridging Exchange (DCBX) advertisements.

When creating a static vEth interface, it is possible to associate configuration to the interface before it is even brought up, and before the server administrator completes all the necessary configuration tasks on the server side to bring up the vEth-to-vNIC relationship. In Example 11-8, the server is connected to a Cisco Nexus 2000 Series FCoE Fabric Extender host interface Ethernet 101/1/1, which is bound to vEth interface 10 on the upstream Cisco Nexus 5500 switch, leveraging channel 50. The vNIC on the Converged Network Adapter also must be configured to use channel 50 for successful binding, as depicted in Figure 11-36. Example 11-8 shows the vEth configuration and the channel binding.

Example 11-8 Static vEth Interface Provisioning
Click here to view code image

nexus5500(config)# interface veth10
nexus5500(config-if)# bind interface ethernet 101/1/1 channel 50

The UCS Virtual Interface Card, supporting Con