
Building HP FlexFabric Data Centers

Learner guide - book 1 of 3

HP ExpertOne
Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
© Copyright 2014 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP products and
services are set forth in the express warranty statements accompanying such products and services. Nothing
herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial
errors or omissions contained herein.
This is an HP copyrighted work that may not be reproduced without the written permission of HP. You may not
use these materials to deliver training to any person outside of your organization without the written permission
of HP.

This course is an HP ExpertOne authorized course designed to prepare you for
the associated certification exam. All material to be used and studied in
preparation to pass the certification exam is included in this training.

HP ExpertOne provides training and certification for the most sought-after IT
disciplines, including convergence, cloud computing, software-defined
networking, and security. You get the hands-on experience you need to hit the
ground running. And you learn how to design solutions that deliver business
value.

HP ExpertOne gives you:


• A full range of skill levels, from foundational to master
• Personalized learning plans and resources through My ExpertOne
• Certifications that command some of the highest pay premiums in the industry
• A focus on end-to-end integration, open standards, and emerging technologies
• Maximum credit for certifications you already hold
• A supportive global community of IT professionals
• A curriculum of unprecedented breadth from HP, the world’s most complete technology company

Visit hp.com/go/ExpertOne to learn more about HP certifications and find the
training you need to adopt new technologies that will further enhance your IT
expertise and career.

Contents
Module 1 - Datacenter Products and Technologies Overview....................................................1-1
Objectives........................................................................................................................1-1
HP FlexFabric Overview..................................................................................................1-2
The World is Moving to a New Style of IT.......................................................................1-3
Apps Are Changing - Networks Must Change.................................................................1-4
Multi-tier Legacy Architecture in the Data Center (DC)...................................................1-5
HPN FlexFabric Value Proposition..................................................................................1-6
HP FlexFabric Product Overview....................................................................................1-7
HP FlexFabric Core Switches..........................................................................................1-8
HP FlexFabric 12900 Switch Series................................................................................1-9
HP 12500E Switch Series.............................................................................................1-10
HP 12500 Switch Series Overview................................................................................1-11
HP FlexFabric 11908 Switch Series..............................................................................1-12
HP FlexFabric 7900 Switch Series................................................................................1-13
HP FlexFabric Access Switches....................................................................................1-14
HP FlexFabric 5930 Switch Series................................................................................1-16
HP FlexFabric 5900CP Converged Switch....................................................................1-17
FlexFabric 5700 Datacenter ToR Switch.......................................................................1-18
HP HSR6800 Router Series..........................................................................................1-19
Virtual Services Router..................................................................................................1-20
IMC VAN Fabric Manager.............................................................................................1-21
HP FlexFabric Cloud: Virtualized DC Use Case............................................................1-22
Data Center Technologies Overview.............................................................................1-23
Overview of DC Technologies.......................................................................................1-24
Multi-tenant Support......................................................................................................1-25
Multi-tenant Isolation.....................................................................................................1-26
Multi-tenant Isolation with MDC and MCE.....................................................................1-27
Multi-tenant Isolation for Layer 2...................................................................................1-28
Network Overlay Functions...........................................................................................1-30
SDN: Powering Your Network Today and Tomorrow....................................................1-31
Data Center Ethernet Fabric Technologies...................................................................1-32
Data Center Ethernet Fabric Technologies 2................................................................1-33
Data Center Ethernet Fabric Technologies 3................................................................1-34
Server Access Layer – Hypervisor Networking ............................................................1-36
Server Access Layer – Converged Storage .................................................................1-37
Server Access Layer – FC/FCoE .................................................................................1-38
Data Center Interconnect Technologies........................................................................1-39
Data Center Interconnect Technologies 2.....................................................................1-40

Data Center Interconnect Technologies 3.....................................................................1-41
Learning Activity: Technologies in the Data Center.......................................................1-42
Learning Activity: Answers............................................................................................1-45
Summary.......................................................................................................................1-47
Learning Check.............................................................................................................1-48
Learning Check Answers...............................................................................................1-49
Module 2 - Multitenant Device Context (MDC)............................................................................2-1
Objectives........................................................................................................................2-1
Feature overview.............................................................................................................2-2
MDC Overview.....................................................................................................2-2
IRF versus MDC..................................................................................................2-2
MDC features.......................................................................................................2-3
MDC applications.................................................................................................2-3
MDC Benefits overview...................................................................................................2-5
MDC benefits.......................................................................................................2-5
Feature overview.............................................................................................................2-6
MDC features.......................................................................................................2-6
Supported platforms........................................................................................................2-8
Supported products.............................................................................................2-8
Supported products.............................................................................................2-9
Use Case 1: Datacenter change management.............................................................2-13
Overview............................................................................................................2-13
Development Network.......................................................................................2-13
Quality Assurance (QA) Network.......................................................................2-13
Use Case 2: Customer isolation....................................................................................2-15
Use Case 3: Infrastructure & customer isolation...........................................................2-16
Use Case 4: Hardware limitation workaround...............................................................2-17
MDC numbering and naming.........................................................................................2-18
Architecture...................................................................................................................2-19
Architecture, Control Plane............................................................................................2-21
Architecture, ASICs.......................................................................................................2-22
Architecture, ASIC control.............................................................................................2-23
Architecture, ASIC control.............................................................................................2-24
Architecture, hardware limits.........................................................................................2-26
Architecture (continued)................................................................................................2-28
Architecture, Console Ports...........................................................................................2-29
Console Port......................................................................................................2-29
Management-Ethernet ports..............................................................................2-29
Design considerations...................................................................................................2-30
ASIC restrictions................................................................................................2-30

Platforms............................................................................................................2-30
Basic configuration steps...............................................................................................2-31
Overview............................................................................................................2-31
Basic configuration steps...................................................................................2-31
Configuration step 1: Define a new MDC......................................................................2-32
Configuration step 2: Authorize MDC for a line card.....................................................2-33
Configuration step 3: Allocate interfaces per ASIC.......................................................2-34
Configuration step 4: Start MDC....................................................................................2-36
Configuration step 5: Access the MDC..........................................................................2-37
MDC advanced configuration topics..............................................................................2-38
Restricting MDC resources: Limit CPU.........................................................................2-40
Restricting MDC resources: Limit memory....................................................................2-42
Restricting MDC resources: Limit storage.....................................................................2-44
Management Ethernet...................................................................................................2-45
Device firmware updates...............................................................................................2-46
Network Virtualization Types.........................................................................................2-47
IRF.....................................................................................................................2-47
MDC...................................................................................................................2-47
MDC and IRF.....................................................................................................2-47
IRF-Based MDCs..........................................................................................................2-49
IRF-Based MDCs..........................................................................................................2-50
MDCs and IRF types.....................................................................................................2-51
Overview............................................................................................................2-51
12500/12500E...................................................................................................2-51
10500/11900/12900...........................................................................................2-51
Configuration examples.................................................................................................2-53
12500/12500E...................................................................................................2-53
10500/11900/12900...........................................................................................2-53
More MDC and IRF configuration information...............................................................2-54
10500/11900/12900 link failure scenario.......................................................................2-55
12500/12500E link failure scenario...............................................................................2-56
IRF-based MDC: IRF Fabric Split..................................................................................2-58
Multi Active Detection (MAD).........................................................................................2-59
Learning Activity: MDC Review.....................................................................................2-60
Learning Activity: Answers............................................................................................2-61
Summary.......................................................................................................................2-62
Learning Check.............................................................................................................2-63
Learning Check Answers...............................................................................................2-65
Lab Activity 2: Lab Topology.........................................................................................2-66
Lab Activity Preview: MDC Overview............................................................................2-67

Lab Activity 2 Debrief.....................................................................................................2-68
Module 3 - Multi-CE (MCE).........................................................................................................3-1
Objectives........................................................................................................................3-1
MPLS L3VPN overview...................................................................................................3-2
MPLS L3VPN overview.......................................................................................3-2
Basic MPLS L3VPN architecture.........................................................................3-2
Site.......................................................................................................................3-3
Terminology....................................................................................................3-4
VRF / VPN Instance.............................................................................................3-4
VPN-IPv4 address...............................................................................................3-4
Route target attribute...........................................................................................3-5
MCE / VRF-Lite....................................................................................................3-6
MCE overview.................................................................................................................3-7
MCE overview......................................................................................................3-7
Feature overview.............................................................................................................3-8
MCE features.......................................................................................................3-8
Supported platforms........................................................................................................3-9
Supported products.............................................................................................3-9
Design considerations...................................................................................................3-10
Overview............................................................................................................3-10
Use Case 1: Multi-tenant datacenter.............................................................................3-11
Use Case 2: Campus with independent business units................................................3-12
Use Case 3: Overlapping IP segments.........................................................................3-13
Use Case 4: Isolated management network..................................................................3-14
Use Case 5: Shared services in Data Center................................................................3-15
Basic configuration steps...............................................................................................3-16
Configuration step 1: Define VPN-Instance...................................................................3-17
Step 1: Define VPN-Instance (continued)......................................................................3-18
Configuration step 2: Route Distinguisher.....................................................................3-19
Step 2: Route Distinguisher (continued)........................................................................3-20
Syntax................................................................................................................3-20
Example.............................................................................................................3-20
Configuration step 3.1: Define L3 Interface...................................................................3-21
Step 3: Define L3 Interface (continued).........................................................................3-22
Syntax................................................................................................................3-22
Usage guidelines...............................................................................................3-23
Examples...........................................................................................................3-23
Configuration step 4: Bind L3 Interface.........................................................................3-26
Step 5: Configure IP on L3 address..............................................................................3-28
Overview............................................................................................................3-28

IP address..........................................................................................................3-28
Syntax................................................................................................................3-28
display ip routing-table vpn-instance.................................................................3-29
Syntax................................................................................................................3-29
Example.............................................................................................................3-29
Configuration step 6: Configure Routing (1 of 3)...........................................................3-30
Overview............................................................................................................3-30
Static Routes.....................................................................................................3-30
Configuration step 6: Configure routing (2 of 3)............................................................3-31
ping....................................................................................................................3-31
Syntax................................................................................................................3-31
Examples...........................................................................................................3-33
Configuration step 6: Configure routing (3 of 3)............................................................3-36
Syntax................................................................................................................3-36
Example.............................................................................................................3-36
VPN-Instance dynamic routing - OSPF example..........................................................3-37
Overview............................................................................................................3-37
Loopback configuration......................................................................................3-37
OSPF.................................................................................................................3-38
VPN-instance dynamic routing - OSPF example...........................................................3-40
Overview............................................................................................................3-40
display ospf lsdb................................................................................................3-40
Syntax................................................................................................................3-40
Example.............................................................................................................3-42
display ospf peer................................................................................................3-45
Syntax................................................................................................................3-45
Example.............................................................................................................3-45
VPN-instance dynamic routing - OSPF example...........................................................3-48
Overview............................................................................................................3-48
Syntax................................................................................................................3-48
Example.............................................................................................................3-49
Lab Activity 3.1: Lab Topology......................................................................................3-50
Lab Activity 3.1 Preview: Configuring Basic MCE.........................................................3-51
Lab Activity 3.1 Debrief..................................................................................................3-52
MCE: Advanced configuration.......................................................................................3-53
VPN instance routing limits............................................................................................3-54
Overview............................................................................................................3-54
Warning message examples.............................................................................3-55
Syntax................................................................................................................3-55
Usage guidelines...............................................................................................3-55

Examples...........................................................................................................3-55
Route leaking.................................................................................................................3-57
Route leaking - Static route example.............................................................................3-58
Syntax................................................................................................................3-59
Route leaking: Static route example..............................................................................3-61
Route leaking: Static route example (continued)...........................................................3-62
Route leaking - Static route restrictions.........................................................................3-63
Lab Activity 3.2: Lab Topology......................................................................................3-64
Lab Activity 3.2 Preview: Advanced VPN instance configuration..................................3-65
Lab Activity 3.2 Debrief..................................................................................................3-66
Management access VPN instance...............................................................................3-67
Management access VPN instance...............................................................................3-68
Management access VPN-Instance (1/2)......................................................................3-69
Overview............................................................................................................3-69
SNMP Agent......................................................................................................3-70
Syslog................................................................................................................3-72
Syntax................................................................................................................3-72
Examples...........................................................................................................3-72
NTP....................................................................................................................3-73
Syntax................................................................................................................3-73
Examples...........................................................................................................3-74
Management access VPN-Instance (2/2)......................................................................3-75
RADIUS.............................................................................................................3-75
sFlow/Netflow....................................................................................................3-76
OpenFlow..........................................................................................................3-77
IMC Management Access using VPN-Instance.............................................................3-79
IMC Management Access using VPN-Instance.............................................................3-80
IMC Management Access using VPN-Instance.............................................................3-81
IMC Management Access using VPN-Instance.............................................................3-82
Overview............................................................................................................3-82
ACLs..................................................................................................................3-83
Telnet server acl................................................................................................3-85
Syntax................................................................................................................3-85
Examples...........................................................................................................3-85
Learning Activity: MCE Review.....................................................................................3-86
Answers.........................................................................................................................3-87
Optional lab activity 3.3: Lab Topology..........................................................................3-88
Lab Activity 3.3 Preview: Management VPN-Instance..................................................3-89
Lab Activity 3.3 Debrief..................................................................................................3-90
Summary.......................................................................................................................3-91

Learning Check.............................................................................................................3-92
Learning Check Answers...............................................................................................3-93
Module 4 - DCB Datacenter Bridging..........................................................................................4-1
Objectives........................................................................................................................4-1
DCB Topics.....................................................................................................................4-2
Datacenter Bridging – Introduction ................................................................................4-3
DCB vs Previous Technologies.......................................................................................4-4
DCB Components............................................................................................................4-5
DCB Feature Overview....................................................................................................4-6
DCB - Supported Products..............................................................................................4-7
Access switches..................................................................................................4-7
Core switches......................................................................................................4-7
Full HP Supported configuration limited to select products.................................4-7
Design Considerations....................................................................................................4-8
DCBX – Data Center Bridging eXchange ......................................................................4-9
Configuration Steps for DCBX.......................................................................................4-10
DCBX Step 1: Enable Global LLDP...............................................................................4-11
DCBX Step 2: Enable Interface LLDP DCBX TLVs.......................................................4-12
DCBX Step 3: Verify......................................................................................................4-13
Ethernet Flow Control....................................................................................................4-14
PFC – Enhancing Ethernet Flow Control .....................................................................4-15
PFC – Configuration Modes .........................................................................................4-16
Configuration Steps for PFC Manual Mode...................................................................4-17
PFC Manual Step 1: Enable Interface PFC Mode.........................................................4-18
PFC Manual Step 2: Enable Lossless for Dot1p...........................................................4-19
Configuration Steps for PFC Auto Mode.......................................................................4-20
PFC Auto Step 1: Enable Interface PFC Mode.............................................................4-21
PFC Auto Step 2: Enable Lossless for Dot1p................................................................4-22
PFC Auto Step 3: Verify................................................................................................4-23
APP – Application TLV .................................................................................................4-24
APP – Application TLV .................................................................................................4-25
APP – Application TLV .................................................................................................4-26
APP – Application TLV .................................................................................................4-27
Configuration Steps for APP..........................................................................................4-28
APP Step 1: Configure Traffic ACLs for Layer 2...........................................................4-29
APP Step 3: Configure QOS Traffic Classifier...............................................................4-31
APP Step 4: Configure QOS Traffic Behavior...............................................................4-32
APP Step 5: Configure QOS Policy...............................................................................4-33
APP Step 6: Activate the QoS Policy............................................................................4-34
APP Step 7: Verify.........................................................................................................4-35

APP Step 7: Verify Continued.......................................................................................4-36
APP - Other examples...................................................................................................4-37
ETS – Enhanced Transmission Selection ....................................................................4-38
ETS – Enhanced Transmission Selection ....................................................................4-40
ETS – Enhanced Transmission Selection ....................................................................4-41
ETS – Enhanced Transmission Selection ....................................................................4-42
Configuration Steps for ETS..........................................................................................4-43
ETS Step 1: QoS Map dot1p-lp.....................................................................................4-44
ETS Step 2: Interface Scheduling and Weights............................................................4-45
ETS Step 2 Continued: A Weight Problem....................................................................4-46
ETS Step 2 Continued: Assign Queues to SP...............................................................4-47
ETS Step 3: Verify Local Configuration.........................................................................4-48
Learning Activity: DCB Operation and Component Review..........................................4-49
Learning Activity: Answers............................................................................................4-50
Lab Activity 4: Lab Topology.........................................................................................4-51
Lab Activity Preview: Data Center Bridging...................................................................4-52
Lab Activity 4 Debrief.....................................................................................................4-53
Summary.......................................................................................................................4-54
Learning Check.............................................................................................................4-55
Learning Check Answers...............................................................................................4-56
Module 5 - Fibre Channel over Ethernet (FCoE).........................................................................5-1
Objectives........................................................................................................................5-1
FC and FCoE Overview...................................................................................................5-2
What is a SAN?...............................................................................................................5-3
SAN Components............................................................................................................5-4
HP Disk Storage Systems Portfolio.................................................................................5-5
Converged Networking - Cookbooks...............................................................................5-6
Host (Initiator) – (Originator) ..........................................................................................5-7
Disk Array (Target) – (Responder) .................................................................................5-9
Nodes, Ports, and Links................................................................................................5-10
FC Frame and Addressing............................................................................................5-11
Fibre Channel Frame.....................................................................................................5-12
Fibre Channel Frame Header........................................................................................5-13
Fibre Channel Terminology...........................................................................................5-15
SCSI (FCP) write operation...........................................................................................5-16
FC World Wide Name (WWN).......................................................................................5-17
FC WWN Structure........................................................................................................5-18
Fibre Channel ID Addressing (1 of 3)............................................................................5-19
Fibre Channel ID Addressing (2 of 3)............................................................................5-20
Fibre Channel ID Addressing (3 of 3)............................................................................5-21

Fabric Domain IDs.........................................................................................................5-22
Principal Switch Election...............................................................................................5-23
FC Interswitch Forwarding.............................................................................................5-24
Learning Activity: FC Review.........................................................................................5-25
Learning Activity: Answers............................................................................................5-27
FC Flow Control Overview.............................................................................................5-28
FC Classes and Flow control.........................................................................................5-29
FC Class 2 Flow Control Scenario................................................................................5-30
ISL Bandwidth Aggregation...........................................................................................5-31
FC Forwarding...............................................................................................................5-32
FC Forwarding...............................................................................................................5-33
Fabric Login (FLOGI).....................................................................................................5-35
Simple Name Service Database....................................................................................5-36
VSANs – Virtual SAN/Fabrics ......................................................................................5-37
VSAN vs Physical SAN.................................................................................................5-38
VSANs – Virtual SAN/Fabrics on Comware .................................................................5-39
VSANs – Virtual SAN/Fabrics on Comware .................................................................5-40
VSANs - Tagging...........................................................................................................5-41
Basic Configuration Steps.............................................................................................5-42
Configuration Step 1: System-working-mode................................................................5-43
Configuration Step 2: Define FCoE Operating Mode....................................................5-44
Configuration Step 3: Define VSAN...............................................................................5-45
Configuration Step 4: Transport VLAN and Bind VSAN................................................5-46
Configuration Step 5: Configure FC Interface...............................................................5-47
Configuration Step 5: Validate Interface Status.............................................................5-48
Configuration Step 6: FC Interface Port Type (1 of 2).................................................5-49
Configuration Step 6: FC Interface Port Type (2 of 2).................................................5-50
Configuration Step 7: Assign FC Interface to VSAN.....................................................5-51
Configuration Step 8: Set Default Zone Permit.............................................................5-52
Configuration Step 9: Status Review.............................................................................5-53
Optional Debugging.......................................................................................................5-54
Lab Activity 5.1: Lab Topology......................................................................................5-55
Lab Activity Preview: FC/FCoE - Native FC Setup........................................................5-56
Lab Activity 5.1 Debrief..................................................................................................5-57
FCoE Overview.............................................................................................................5-58
FCoE I/O Consolidation.................................................................................................5-59
FCoE Goals...................................................................................................................5-60
FCoE Terminology.........................................................................................................5-61
Converged Network Adapters (CNA)............................................................................5-62
HP CNA Products..........................................................................................................5-63

FCoE Server Access.....................................................................................................5-64
FCoE Stack Overview...................................................................................................5-65
FCoE Encapsulation......................................................................................................5-66
FIP: FC Initialization Protocol........................................................................................5-67
FIP: VLAN and FCF Discovery......................................................................................5-68
FIP: FLOGI and FPMA..................................................................................................5-69
FCoE Design considerations.........................................................................................5-70
Configuration Steps for FCoE Host Access...................................................................5-71
Configuration Steps for FCoE Host Access...................................................................5-72
Configuration Step 1: Create Virtual FC Interface.........................................................5-73
Configuration Step 2: VFC FC Port Type......................................................................5-74
Configuration Step 3: Bind VFC to Interface (1 of 2).................................................5-75
Configuration Step 3: Bind VFC to Interface (2 of 2).................................................5-76
Configuration Step 4: Assign VFC Interface to VSAN...................................................5-77
Configuration Step 5: Physical Interface VLAN Assignment.........................................5-78
Lab Activity 5.2: Lab Topology......................................................................................5-79
Lab Activity Preview: FC/FCoE - FCoE Server Access.................................................5-80
Lab Activity 5.2 Debrief..................................................................................................5-81
Fabric Expansion...........................................................................................................5-82
Fabric Expansion: E_Port...............................................................................5-83
Fabric Expansion: Routing Table Exchange.................................................................5-84
Configuration steps for Fabric Expansion with FCoE....................................................5-85
Configuration Step 1: Create New VFC Interface..........................................................5-86
Configuration Step 2: Set FC Port Type to E_Port........................................................5-87
Configuration Step 3: Verify Status...............................................................................5-88
Multi-path - Concepts....................................................................................................5-89
Multi-path – Automatic Failover ...................................................................................5-90
Lab Activity 5.3: Lab Topology......................................................................................5-91
Lab Activity Preview: FC/FCoE - FC Fabric Extension..................................................5-92
Lab Activity 5.3 Debrief..................................................................................................5-93
Fabric Zoning.................................................................................................................5-94
Fabric Zoning Concepts................................................................................................5-95
Zone Members..............................................................................................................5-96
Zone Members..............................................................................................................5-97
Zone Enforcement.........................................................................................................5-98
Zoning Configuration Prerequisites...............................................................................5-99
Configuration Steps for Zoning....................................................................................5-100
Configuration Step 1: Prepare Zone Member Alias.....................................................5-101
Configuration Step 2: Define Zones............................................................................5-102
Configuration Step 3: Define a Zone Set.....................................................................5-103

Configuration Step 4: Distribute and Activate Zone Set..............................................5-104
Configuration Step 5: Verify.........................................................................................5-105
NPV – NPIV Overview ...............................................................................................5-106
Server Virtualization with NPIV....................................................................................5-107
FC Switch with NPV Mode..........................................................................................5-109
FC Switch NPV Mode - Considerations.......................................................................5-110
Prerequisites to Configure NPV Mode.........................................................................5-112
Configuration Steps for NPV Mode.............................................................................5-113
Configuration Step 1: Configure global NPV mode.....................................................5-114
Configuration Step 2: Configure FC or VFC Interfaces...............................................5-115
Configuration Step 3: Uplink Interface NP_Port..........................................................5-116
Configuration Step 4: Downlink Interfaces F_Port.......................................................5-117
Configuration Step 5: Verify Status.............................................................................5-118
Lab Activity 5.4: Lab Topology....................................................................................5-119
Lab Activity Preview: FC/FCoE - NPV.........................................................................5-120
Lab Activity 5.4 Debrief................................................................................................5-121
Summary.....................................................................................................................5-122
Learning Check...........................................................................................................5-123
Learning Check Answers.............................................................................................5-125

Module 1 - Datacenter Products and Technologies Overview

Objectives
This module introduces HP’s FlexFabric portfolio, and describes how these
products can be used to deploy simple, scalable, automated data center
networking solutions. Specific data center technologies are also introduced. These
include multi-tenant solutions such as MDC, MCE, and SPBM, along with
hypervisor integration protocols like EVB and VEPA. Other connectivity solutions
include MPLS L2VPN, VPLS, EVI, SPBM, and TRILL.
After completing this module, you should be able to:
• Understand the components of the HP FlexFabric network architecture
• Describe common datacenter networking requirements
• Position the HP FlexFabric products
• Describe the HP IMC VAN Modules

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


HP FlexFabric Overview

Figure 1-1: HP FlexFabric Overview

This module provides an overview of the components that are involved in the
FlexFabric network architecture. It describes common data center networking
requirements, positions HP FlexFabric products, and describes the HP data center
technologies.


The World is Moving to a New Style of IT

Figure 1-2: The World is Moving to a New Style of IT

Many IT functions and systems are continuing to change at a relatively brisk pace.
New paradigms arise, such as cloud computing and networking, big data, BYOD
and new security mechanisms, to name a few. With these new paradigms come
new challenges and new requirements, influencing how we build networks going
forward:
• Cloud: We must understand how to build an agile, flexible, and secure network edge, especially with regard to multi-tenancy.
• Security: We have to rebuild the perimeter of the network wherever a device connects, without degrading the quality of the business experience.
• Big Data: We have to enable the network to respond dynamically to real-time data analytics and to deal with the volume of traffic involved.
• Mobility: We need to simplify the policy model in the campus by unifying wired and wireless networks. In the data center, we need to increase the agility and performance of mobile VMs.

A converged infrastructure can meet these needs by providing several key features, including:

• A resilient fabric for less downtime and faster VM mobility
• Network virtualization for faster data center provisioning
• Software-Defined Networking (SDN) to simplify deployment and security, creating business agility and network alignment to business priorities


Apps Are Changing - Networks Must Change

Figure 1-3: Apps Are Changing - Networks Must Change

Applications are changing, and the network infrastructure must be capable of
handling these new application requirements. One significant trend is a massive
increase in virtualization. Almost any service will be offered as a virtualized
service, hosted inside a data center. These virtualized services can be in private
clouds, a customer’s local data center, or public clouds. They might even be
offered as a type of hybrid cloud service, which is a mix of private and public
clouds.
Inside the data center, the bulk of data traffic is now server-to-server. This is mainly
due to the change in application behavior, since we see much more use of
federated applications as opposed to the monolithic application models of the past.
Previously, companies may have used a single email server that provided multiple
functions. In today’s environment, companies may instead leverage a front-end
server, a business logic server, and a back-end database system. In such a
deployment, each client request towards the data center is handled by multiple
services inside the data center. This results in similar client-server interactions as
in the past, but with increased server-to-server traffic to fulfill those client requests.
Also, many storage services and protocols are now being supported by a
converged network that handles both traditional client-server traffic, as well as disk
storage-related traffic.


Multi-tier Legacy Architecture in the Data Center (DC)

Figure 1-4: Multi-tier Legacy Architecture in the Data Center (DC)

Federated applications and virtualization have changed the way traffic flows through
the infrastructure. As packets must be passed between more and more servers,
increased latency can impact performance and end-user productivity. Networks
must be designed to mitigate these risks, while ensuring a stable, loop-free
environment. Network loops in a large data center environment can have
severe impacts on the business, so the ability to maintain loop-free paths is of
particular importance.


HPN FlexFabric Value Proposition

Figure 1-5: HPN FlexFabric Value Proposition

HP’s FlexFabric approach focuses on three customer benefits: the network
should be simple, scalable, and automated.
Simple – reducing operational complexity by up to 75%
 Unified virtual/physical and LAN/SAN fabrics
 OS/feature consistency, with no licensing complexity or cost

Scalable – double the fabric scaling, with up to 500% improved service delivery
 Non-blocking reliable fabric for 100-10,000 hosts
 Spine and leaf fabric optimized for Cloud, SDN

Automated – cutting network provisioning time from months to minutes
 300% faster time to service delivery, Software-Defined Network Fabric
 Open, standards-based programmability, SDN App Store and SDK


HP FlexFabric Product Overview

Figure 1-6: HP FlexFabric Product Overview

This product overview section begins with a discussion of core and aggregation
switches. This is followed by an overview of access switches, and the IMC network
management systems.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


HP FlexFabric Core Switches

Figure 1-7: HP FlexFabric Core Switches

The figure introduces the current portfolio of HP FlexFabric core switches. This
includes the HP FlexFabric 12900, 12500, 11900 and the 7904 Switch Series.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


HP FlexFabric 12900 Switch Series

Figure 1-8: HP FlexFabric 12900 Switch Series

The HP FlexFabric 12900 Switch Series is an exceedingly capable core data
center switch. The switch includes support for OpenFlow 1.3, laying a foundation
for SDN and investment protection.
It provides 36Tbps of throughput in a non-blocking fabric, and supports up to 768
10Gbps ports and up to 256 40Gbps ports. The 12900 series supports Fibre
Channel over Ethernet (FCoE) and Data Center Bridging (DCB).
The switch allows for In Service Software Upgrades (ISSU) to minimize downtime.
Additionally, protocols like TRILL and SPB can be used to provide scalable
connectivity between data center sites. All of these functions can be used in
conjunction with IRF to offer a redundant, flexible platform.


HP 12500E Switch Series

Figure 1-9: HP 12500E Switch Series

The HP 12500E Switch Series allows for up to 24Tbps of switching capacity. It is
available in 8-slot and 18-slot chassis. It supports very large Layer 2 and Layer 3
address and routing tables, and large data buffers. It allows for up to four units in an IRF
system.
The HP 12500 Switch Series has been updated, and now supports high-density
10Gbps, 40Gbps, or 100Gbps Ethernet modules - up to 400Gbps per slot. It
supports traditional Layer 2 and Layer 3 functions for both IPv4 and IPv6. These
devices also feature support for more modern protocols, such as MPLS, VPLS,
MDC, EVI, and more.
Wire-speed services provide a high-performance backbone while the energy-
efficient design lowers operational costs.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


HP 12500 Switch Series Overview

Figure 1-10: HP 12500 Switch Series Overview

The figure compares the features and capabilities of the 12500C and 12500E
platforms. The 12500C is based on Comware5 while the 12500E is based on
Comware7. The use of Comware7 results in enhanced MPU performance.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


HP FlexFabric 11908 Switch Series

Figure 1-11: HP FlexFabric 11908 Switch Series

The HP FlexFabric 11900 Switch Series supports up to 7.7Tbps of throughput in a
non-blocking fabric. This switch can be a good choice as a data center aggregation
switch.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


HP FlexFabric 7900 Switch Series

Figure 1-12: HP FlexFabric 7900 Switch Series

The HP FlexFabric 7900 Switch Series is the next generation compact modular
data center core switch. It is based on the same architecture and Comware7 code
as larger chassis-based switches.
The feature set includes full support for IRF, TRILL, DCB, EVI, MDC, OpenFlow
and VXLAN.


HP FlexFabric Access Switches

Figure 1-13: HP FlexFabric Access Switches

The HP 5900 Switch Series can serve as traditional top-of-rack access switches.
The HP 5900AF Switch Series is available in various models, including versions
with 48 1Gbps ports and versions with 48 10Gbps ports, each with 4 x 40Gbps uplink ports.
It is also available with 48 1/10Gbps ports and 4 x 40Gbps uplink connections.
The 1/10Gbps port version is especially convenient for data centers that are
migrating servers from 1Gbps to 10Gbps interfaces.
The 5930 is a Top-of-Rack (ToR) switch with 32 40Gbps ports. This switch
could be used to terminate 40Gbps connections from blade server enclosures, or
it could be deployed as a distribution or aggregation layer device to concentrate a
set of HP 5900 Switch Series switches. Each of the 40Gbps ports can be split out as four
10Gbps ports with a special cable. This means that the 32 40Gbps ports could
become 128 10Gbps ports, available in a 1U device.
The “CP” in the 5900CP model stands for Converged Ports. As the name implies,
both Fibre Channel over Ethernet (FCoE) and native Fibre Channel (FC) are
supported in a single, converged ToR access switch. All of the 5900 Switch Series
shown here support FCoE, but only the 5900CP also supports native FC
connectivity. The module installed in each port determines whether that port
functions as a 10Gbps FCoE port, or as an 8Gbps FC port. The 5900CP supports
FCoE-to-FC gateway functionality.
The HP FlexFabric 5900v is a virtual switch that can be installed as a replacement
for the VMware switch on a hypervisor. The 5900v is based on the VEPA protocol.
This means that the 5900v does not switch inter-VM traffic locally; inter-VM
traffic is sent to an external ToR switch to be serviced. This is why the 5900v
must be deployed in combination with a physical switch that also supports the
VEPA protocol. All Comware7-based 5900-series switches support VEPA.
HP blade enclosures can have interconnects installed. These interconnects
must match the physical form factor of the blade enclosure. The HP 6125 XLG can
provide this blade server interconnectivity.
This switch belongs to the HP 5900 Switch Series family of switches, as it provides
10Gbps access ports for blade servers, along with 4 x 40Gbps uplink ports. As a
Comware7-based product, the 6125 XLG can be configured with the same
protocols and features as traditional HP 5900 Switch Series. For example, features
like FCoE and IRF are supported. This means that multiple 6125 XLG switches in
the same blade enclosure can be grouped together as a single virtual IRF system.

It also supports VEPA, and so can work with the 5900v switch running on a
Hypervisor.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


HP FlexFabric 5930 Switch Series

Figure 1-14: HP FlexFabric 5930 Switch Series

The HP FlexFabric 5930 Switch Series is built on the latest generation of ASICs,
and so includes hardware support for VXLAN and NVGRE. VXLAN is an overlay
virtualization technology which is largely promoted by VMware. NVGRE is an
overlay technology which is largely promoted by Microsoft and used in its
Hyper-V product.
Since the HP FlexFabric 5930 Switch Series has hardware support for both
technologies, both overlay networks can be interconnected with traditional VLANs, with
support for OpenFlow and SDN. With 32 40Gbps ports, it is suitable as a
component in large-scale spine or leaf networks that can leverage IRF and TRILL.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


HP FlexFabric 5900CP Converged Switch

Figure 1-15: HP FlexFabric 5900CP Converged Switch

The HP FlexFabric 5900CP supports 48 x 10Gbps converged ports. Support for
4/8Gbps FC or 1/10Gbps Ethernet is available on all ports. It supports HP’s
universal converged optic transceivers. The optic installed in each port
determines whether that port will function as a native FC port or as an Ethernet
port. The converged optic is a single device that can be configured to
operate as either of the two. This means that the network administrator can easily
change the operational mode of the physical interface via CLI configuration, which
eliminates the need to unplug and swap transceivers for this purpose.


FlexFabric 5700 Datacenter ToR Switch

Figure 1-16: FlexFabric 5700 Datacenter ToR Switch

The HP FlexFabric 5700 Top-of-Rack switch is available in various combinations of
1Gbps and 10Gbps port configurations with 10Gbps or 40Gbps uplinks, as shown
in the figure above. This relatively new addition to the FlexFabric family offers Layer 2
and lite Layer 3 support, along with IRF support for up to nine switches to simplify management
operations.
The 5700 switch series delivers 960Gbps of switching capacity and is SDN-ready.


HP HSR6800 Router Series

Figure 1-17: HP HSR6800 Router Series

The HP HSR6800 Router Series provides comprehensive routing, firewall, and
VPN functions. It uses a 2Tbps backplane to support 420Mpps of routing throughput.
This is a high-density WAN router that can support up to 31 10Gbps Ethernet ports
and is 40/100Gbps ready.
Two of these carrier-class devices can be grouped into an IRF team to operate as
a single, logical router entity. This eases configuration and change management,
and eliminates the need for other redundancy protocols like VRRP.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Virtual Services Router

Figure 1-18: Virtual Services Router

The Virtual Services Router (VSR) can be seen as a network function virtualization
(NFV) technology. It is very easy to deploy the VSR on any branch, data center,
or cloud infrastructure. It is based on Comware7 and can be installed on a
hypervisor, such as VMware ESXi or Linux KVM.
The VSR makes it very easy and convenient to support a multi-tenant data center.
New router instances can be quickly deployed inside the hosted environment to
provide routed functionality for a specific customer solution. VSR comes in multiple
versions, with various licensing options to provide more advanced capabilities.


IMC VAN Fabric Manager

Figure 1-19: IMC VAN Fabric Manager

Basic data center management of devices is handled by IMC. The VAN Fabric
Manager (VFM) is a software module that can be added to IMC. This module adds
advanced traffic management capabilities for many data center protocols, such as
SPB, TRILL, and IRF. Storage protocols such as DCB and FCoE are also
supported.
It also manages data center interconnect protocols such as EVI, and provides
zoning services for converged storage management.
You can easily view and manage information about VM migrations. VM migration
records include the VM name, source and destination server, start and end times
for the migration, and the name of the EVI service to which the VM belongs. You can
also perform a migration replay, which plays back the migration process so that
you can view the source, destination, and route of a migration as a video.


HP FlexFabric Cloud: Virtualized DC Use Case

Figure 1-20: HP FlexFabric Cloud: Virtualized DC Use Case

The figure shows an example of an HP FlexFabric deployment. At the access
layer, 5900v switches are deployed inside a blade server hypervisor environment, in
conjunction with 5900-series switches with VEPA support.
With a deployment of HP blade systems, the 6125 XLGs can be used for
interconnectivity.
In this scenario the access layer is directly connected to the core, which could be
comprised of 12900 or 11900-series devices. Connectivity to remote locations can
be provided by the HSR 6800 router, and the entire system can be managed from
a single pane-of-glass with HP’s IMC. Additional insight and management for data
center specific technologies can be provided by the addition of the VFM module for
IMC.


Data Center Technologies Overview

Figure 1-21: Data Center Technologies Overview

The data center may provide support for multiple tenants. Multiple infrastructures
may co-exist in an independent way.
The data center should also have support for Ethernet fabric technologies to
provide interconnect between all the switches, as well as converged FC/FCoE
support. This fabric should integrate with Hypervisor environments.
Also, data center interconnect and network overlay technologies are used to
connect several multi-tenant data centers together in a scalable, seamless way.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Overview of DC Technologies

Figure 1-22: Overview of DC Technologies

The figure provides an overview of data center technologies and generalizes
where these technologies are deployed.
 Multi-tenant support is provided by technologies such as MDC, MCE and
SPBM. Hypervisor integration is provided by the EVB and VEPA protocols, along
with the 5900v switch product.
 Overlay networking solutions are provided by VXLAN and SDN.
 Data center interconnect technologies include MPLS L2VPN, VPLS, EVI, and
SPBM.
 OpenFlow technology can be used to understand, define, and control network
behavior.
 Large-scale Layer 2 Ethernet fabrics can be deployed using traditional link
aggregation along with TRILL or SPBM.
 IRF or Enhanced IRF can be used to improve manageability and redundancy
in the Ethernet fabric.
 Storage and Ethernet technologies can be converged with switches that
support DCB, FCoE, and native FC.


Multi-tenant Support

Figure 1-23: Multi-tenant Support

Multi-tenancy support involves the ability to support multiple business units,
customers, and services over a common infrastructure. This data center
infrastructure must provide techniques to isolate multiple customers from each
other.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Multi-tenant Isolation

Figure 1-24: Multi-tenant Isolation

Several isolation techniques are available, in two general categories. Physical
isolation is one solution. However, this solution is less scalable due to the cost of
purchasing separate hardware for each client, as well as the space, power, and
cooling concerns. With logical isolation, isolated services and customers share a
common hardware infrastructure. This reduces initial capital expenditures and
improves return on investment.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Multi-tenant Isolation with MDC and MCE

Figure 1-25: Multi-tenant Isolation with MDC and MCE

One isolation technique is Multi-tenant Device Context (MDC). This technology
creates a virtual device inside a physical device. This ensures customer isolation
at the hardware layer, since ASICs or line cards are dedicated to each customer.
Since each MDC has its own configuration file, with separate administrative logins,
isolation at the management layer is also achieved. There is also isolation of
control planes, since each MDC has its own path selection protocol, such as
TRILL, SPB, OSPF, or STP. Isolation at the data plane is achieved through
separate routing tables and Layer 2 MAC address tables.
Another technology to provide Layer 3 routing isolation is Multi-customer Carrier
Ethernet (MCE). This is also known in the market as Virtual Routing and
Forwarding (VRF). With VRF, separate virtual routing instances can be defined in a
single physical router.
This technology maintains separate routing functionality and routing tables for
each customer. However, the platform’s hardware limitations still apply. For
example, ten MCEs might be configured on a device that has a hardware limit of
128,000 IPv4 routes. In this scenario, all ten customer MCE routing tables must
share that 128,000 entry maximum.
Unlike MDC, which allows for different management planes per customer, MCE
features a single management plane for all customers. In other words, a single
administrator configures and manages all customer MCE instances.
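A minimal configuration sketch can make the MCE/VRF concept more concrete. The sketch below, written in Comware-style CLI, creates one tenant routing instance and binds a VLAN interface to it. The instance name, route distinguisher, VLAN, and addressing are hypothetical examples, and exact syntax varies by platform and software release.

    ip vpn-instance TENANT-A
     route-distinguisher 65000:1
     vpn-target 65000:1 both
    quit
    interface Vlan-interface100
     ip binding vpn-instance TENANT-A
     ip address 10.0.100.1 24

Routes learned or configured on Vlan-interface100 are kept in TENANT-A's routing table, separate from the global table and from other tenants' instances.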


Multi-tenant Isolation for Layer 2

Figure 1-26: Multi-tenant Isolation for Layer 2

VLANs are the traditional method used to isolate Layer 2 networks, and this
remains a prominent technology in data centers. However, the 4094 VLAN
maximum can be a limiting factor for large, multi-tenant deployments. Another
difficulty is that different clients may want to use the same VLAN IDs.
QinQ technology alleviates some of these concerns. Each customer has their own
set of 4094 VLANs, using a typical 802.1Q tag. An outer 802.1Q tag is added,
which is unique to each client. The data center uses this unique outer tag to move
frames between customer devices. Before the frame is handed off to the client, the
outer tag is removed.
A limitation of this technique involves the MAC address table. All customer VLANs
traverse the provider network identified only by their outer 802.1Q tags. Therefore, all client
VLANs share the same MAC address table on the provider switches. It is possible for this to increase the
odds of MAC address collision – multiple devices that use the same address.
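As an illustration, a basic QinQ setup on a Comware-style switch might look like the sketch below, where VLAN 1000 is the provider (outer) tag assigned to one customer. The interface numbers and VLAN IDs are hypothetical, and the exact QinQ commands vary by platform and software version.

    vlan 1000
    quit
    interface Ten-GigabitEthernet1/0/1
     # Customer-facing port: frames keep their inner tag and receive outer tag 1000
     port access vlan 1000
     qinq enable
    quit
    interface Ten-GigabitEthernet1/0/49
     # Provider uplink: only the outer service VLAN needs to be carried
     port link-type trunk
     port trunk permit vlan 1000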
Another option is Shortest Path Bridging using MAC in MAC mode (SPBM). SPBM
can also isolate customers, similar to QinQ. Unlike QinQ, SPBM creates a new
encapsulation, with the original customer frame as the payload of the new frame.
This new outer frame includes a unique customer service identifier, providing a
highly scalable solution.
SPBM supports up to 16 million service identifiers. Each of the 16 million
customers can have their own set of 4094 VLANs. A common outer VLAN identifier
tag can be used for all client VLANs, as with QinQ. Alternatively, different
customer VLANs can use different identifiers. Compared to QinQ, SPBM provides
increased scalability while limiting the issue of MAC address collision.

Virtual eXtensible LAN (VXLAN) is another technology that provides a virtualized
VLAN for hypervisor environments. A Virtual Machine (VM) can be assigned to a
VXLAN, and use it to communicate with other VMs in the same VXLAN.

This technology requires some integration with traditional VLANs via a hardware
gateway device. This functionality can be provided by the HP Comware 5930
switch. VXLAN supports up to 16 million VXLAN IDs, so it is quite scalable.
VXLAN provides a single VXLAN ID space. While SPBM could be used to
encapsulate 4094 traditional VLANs into a single customer service identifier, with
VXLAN, a customer with 100 VLANs would use 100 VXLAN IDs. For this reason,
some planning is required to ensure that each client uses a unique range of
VXLAN IDs.
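The sketch below suggests how a hardware gateway such as the 5930 might map a traditional VLAN into a VXLAN. It uses Comware-style CLI; the VSI name, VXLAN ID, VLAN, tunnel endpoints, and interface are hypothetical, and the exact commands and ordering differ by platform and software release.

    l2vpn enable
    vsi TENANT-A
     vxlan 10010
    quit
    interface tunnel 1 mode vxlan
     source 10.1.1.1
     destination 10.1.1.2
    quit
    vsi TENANT-A
     vxlan 10010
      tunnel 1
    quit
    interface Ten-GigabitEthernet1/0/10
     service-instance 1000
      encapsulation s-vid 100
      xconnect vsi TENANT-A

In this sketch, traffic arriving on VLAN 100 of the access port is mapped into VXLAN 10010 and carried across the routed network through the VXLAN tunnel.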

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Network Overlay Functions

Figure 1-27: Network Overlay Functions

Network overlay functions provide a virtual network for a specific, typically VM-
based service.
Software Defined Networking (SDN) can be considered a network overlay function,
since it can centralize the control of traffic flows between devices, virtual or
otherwise.
VXLAN is an SDN technology that can provide overlay networks for VMs. Each
VM can be assigned to a unique VXLAN ID, as opposed to a physical, traditional
VLAN ID. HP is developing solutions to integrate SDN and VXLAN. This
will enable interconnectivity between VXLAN-assigned virtual services and
physical hosts.
NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


SDN: Powering Your Network Today and Tomorrow

Figure 1-28: SDN: Powering Your Network Today and Tomorrow

SDN can be used to control the network behavior inside the data center. The SDN
architecture consists of the infrastructure, control, and application layers.
The infrastructure layer consists of overlay technologies such as VXLAN or
NVGRE, or it can consist of devices that support OpenFlow.
The control layer is delivered by the HP Virtual Application Network (VAN)
SDN controller. This controller can interact with VXLAN and OpenFlow-
enabled devices. It can be configured directly, or it can be controlled
by an external application, such as automation, cloud management, or security
tools. The HP SDN App Store provides centralized availability for SDN-capable
applications. Load balancing will also be provided.
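To show what OpenFlow enablement looks like on a Comware-based switch, the sketch below defines one OpenFlow instance, associates it with a VLAN, and points it at a controller. The instance number, VLAN, and controller address are hypothetical, and command details vary by platform and software release.

    openflow instance 1
     # Flows for VLAN 100 are handed to the external controller
     classification vlan 100
     controller 0 address ip 192.0.2.10
     active instance

Once the instance is active, forwarding decisions for the classified VLAN can be programmed by the SDN controller rather than by the switch's local control plane.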


Data Center Ethernet Fabric Technologies

Figure 1-29: Data Center Ethernet Fabric Technologies

This section will focus on Ethernet fabric technologies for the data center. An
Ethernet fabric should provide a high speed Layer 2 interconnect with efficient path
selection. It should also provide scalability to enable ample bandwidth and link
utilization.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Data Center Ethernet Fabric Technologies 2

Figure 1-30: Data Center Ethernet Fabric Technologies 2

IRF combines two or more devices into a single, logical device. IRF systems can
be deployed at each layer in the data center. For example, they are often deployed
at the core layer of a data center, and could also be used to aggregate access
layer switches. Servers could also be connected to IRF systems at the access
layer.
These layers can be interconnected by traditional multi-chassis link aggregations,
which provides an active-active redundancy solution. Each IRF system is
managed as an independent entity. If a customer has 200 physical access
switches, they could be grouped into 100 IRFs, each IRF system containing two
physical switches. If a new VLAN must be defined, it must be defined on each of
the 100 IRF systems.
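For illustration, the sketch below shows the general flow of joining a second switch to an IRF system using Comware-style CLI. The member numbers, priority, and physical port are hypothetical, and the required steps (including a reboot after renumbering) vary by platform and software release.

    # On the switch that will become IRF member 2
    system-view
    irf member 1 renumber 2
    quit
    save
    # Reboot so the new member ID takes effect, then bind a physical port to the IRF port
    system-view
    irf member 2 priority 1
    irf-port 2/1
     port group interface FortyGigE2/0/53
    quit
    irf-port-configuration active

After the IRF link comes up, the two physical switches elect a master and are managed as a single logical device.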
Enhanced IRF (EIRF) is the next generation of IRF technology, allowing for the
grouping of up to 100 or more devices into a single logical device. Enhanced IRF
can combine multiple layers into a single logical system. For instance, several
aggregation and access layer switches can be combined into a single logical
device.
Like traditional IRF, this provides a relatively easy active-active deployment model.
However, with Enhanced IRF a large set of physical devices will be perceived as a
single, very large switch with many line cards. If 100 physical switches were
combined into a single EIRF system, they are all managed as a single entity. If a
new VLAN must be defined, it only needs to be defined one time, as opposed to
multiple times with traditional IRF. Also, EIRF eliminates the need to configure
multi-chassis link aggregations as inter-switch links.


Data Center Ethernet Fabric Technologies 3

Figure 1-31: Data Center Ethernet Fabric Technologies 3

IRF and EIRF offer a compelling, HP Comware-based solution for building an
Ethernet fabric. TRILL and SPBM offer other, standards-based technologies for
data center connectivity. HP Comware IRF or EIRF technology can provide switch
and link redundancy while connecting to a standards-based TRILL or SPBM fabric.
TRILL ensures that the shortest path for Layer 2 traffic is selected, while allowing
maximum, simultaneous utilization of all available links. For example, two server
access switches could connect to multiple aggregation switches and also be
directly connected to each other. Traffic flow between servers on the two switches
can utilize the direct connection between the two switches, while other traffic uses
the access-to-aggregation switch links. This is an advantage over traditional STP-
based path selection, which would require one of the links (likely the access-
access connection) to be disabled for loop prevention.
TRILL can also take advantage of this active-active, multi-path connectivity for
cases when switches have, say, four uplinks between them. The traffic will be load-
balanced over all equal-cost links. This load balancing can be based on
source/destination MAC address pairs, or source/destination IP addressing.
A limitation of TRILL is the fact that it supports a single VLAN space only. While
TRILL provides for very efficient traffic delivery, it remains limited by the 4094
VLAN maximum.
SPBM is similar to TRILL in its ability to leverage routing-like functionality for
efficient Layer 2 path selection. Compared to TRILL, SPBM offers a more
deterministic method of providing load-sharing over multiple equal-cost paths. This
allows the administrator to engineer specific paths for specific customer traffic.
SPBM also offers the potential for greater scalability than TRILL. This is because
SPBM supports multiple VLAN spaces, since each customer’s traffic is uniquely
tagged with a service identifier in the SPBM header.


RFC 7172 is a relatively recent standard that allows the use of a 24-bit identifier,
as opposed to the current 12-bit VLAN ID. This will allow greater scalability for
multiple tenants. This feature is not currently supported on HP Comware switches.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Server Access Layer – Hypervisor Networking

Figure 1-32: Server Access Layer – Hypervisor Networking

Hypervisor networking is supported at the access layer of a data center
deployment, in the form of VEPA and EVB. These technologies enable integration
between virtual and physical environments.
For example, the HP Comware Hypervisor 5900v provides a replacement option
for the Hypervisor’s own built-in software vSwitch. The 5900v sends inter-VM
traffic to an external, physical switch for processing. This external switch must
support VEPA technology to be used for this purpose.
Typically, most inter-VM traffic is handled by a physical switch anyway, since there
are usually multiple ESX hosts. Traffic between VMs hosted by different ESX
platforms is handled by an external physical switch. Only inter-VM traffic on the
same ESX host is handled by that host’s internal vSwitch. The VEPA/EVB model
ensures a more consistent traffic flow, since all inter-VM traffic passes through an external
switch.
This results in greater visibility and insight into inter-VM traffic flow. Traditional
network analysis tools and port mirroring tools are thus capable of detailed traffic
inspection and analysis.


Server Access Layer – Converged Storage

Figure 1-33: Server Access Layer – Converged Storage

Storage convergence means that a single infrastructure has support for native
Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI.
With Fibre Channel technology, a physical Host Bus Adapter (HBA) is installed in
each server to provide access to storage devices. To ensure lossless delivery of
storage frames, FC uses a buffer-to-buffer credit system for flow control. A
separate Ethernet interface is installed in the server to perform traditional Ethernet
data communications.
FCoE is a technology that provides traditional FC and 10Gbps Ethernet support
over a single Converged Network Adapter (CNA). The server’s application layer
continues to perceive a separate adapter for each of these functions. Therefore,
the CNA must accept traditional FC frames, encapsulate them in Ethernet, and
send them over the converged network fabric. A suite of Data Center Bridging (DCB)
protocols enhance the Ethernet standard. This ensures the lossless frame delivery
that is required by FC.
iSCSI encapsulates traditional SCSI protocol communications inside a TCP/IP
packet, which is then encapsulated in an Ethernet frame. The iSCSI protocol does
not require that Ethernet be enhanced by DCB or any other special protocol suite.
Instead, capabilities inherent to the TCP/IP protocol stack will mitigate packet loss
issues.
However, enterprise-class iSCSI deployments should have robust QoS capabilities
and hardware switches with enhanced buffer capabilities. This will help to ensure
that iSCSI frame delivery is reliable, with minimal retransmissions.
Although DCB was originally developed to ensure lossless delivery for FCoE, it
can also be used for iSCSI deployments. This minimizes frame drop and
retransmission issues.
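As a simple illustration, enabling Priority Flow Control for the storage priority on a server-facing port of a Comware-based switch might look like the sketch below. The interface and the 802.1p value (3 is a common choice for FCoE) are assumptions, and DCB deployments normally also involve DCBX and ETS settings that are omitted here.

    interface Ten-GigabitEthernet1/0/10
     # Negotiate PFC with the attached CNA and make priority 3 lossless
     priority-flow-control auto
     priority-flow-control no-drop dot1p 3

With PFC in place, frames marked with the no-drop priority are paused rather than discarded during congestion, which preserves the lossless behavior that FCoE requires.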


Server Access Layer – FC/FCoE

Figure 1-34: Server Access Layer – FC/FCoE

The 5900CP provides native FC fabric services. Since it provides both FCoE and
native FC connections, it can act as a gateway between native FC and FCoE
environments.
In addition to this FC-FCoE gateway service, other deployment scenarios are
supported by the HP 5900CP. It can be used to interconnect a collection of
traditional FC storage and server devices, or to connect a collection of FCoE-
based systems.
Multiple Fibre Channel device roles are supported. The 5900CP can fill the FCF
role to support full fabric services. It can also act as an NPV node to support
endpoint ID virtualization.


Data Center Interconnect Technologies

Figure 1-35: Data Center Interconnect Technologies

Data center interconnect technologies allow customer services to be
interconnected across multiple data center sites. Two data center locations could
be deployed, or multiple data centers could be spread over multiple locations for
additional scalability and redundancy.
These technologies typically require options for path redundancy and scalable
Layer 2 connectivity between the data centers. This ensures that all customer
requirements can be met, such as the ability to move VMs to different physical
hosts via technologies such as VMWare’s vMotion.


Data Center Interconnect Technologies 2

Figure 1-36: Data Center Interconnect Technologies 2

Data centers can be connected using a traditional Layer 2 connection. This
could be dark fiber connectivity between two sites, or some other connectivity
available from a service provider. Once these physical connections are
established, traditional VLAN trunk links and link aggregation can be configured to
connect core devices at each site.
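For example, a dark-fiber interconnect of this kind is often configured as a multi-link VLAN trunk, as in the Comware-style sketch below. The aggregation group number, member ports, and VLAN range are hypothetical.

    interface Bridge-Aggregation 1
     link-aggregation mode dynamic
     port link-type trunk
     port trunk permit vlan 100 to 199
    quit
    # Bundle the physical links toward the remote data center
    interface range FortyGigE1/0/49 to FortyGigE1/0/50
     port link-aggregation group 1

The same aggregation and trunk settings would be mirrored on the core switch at the remote data center so that the extended VLANs are carried over the bundled links.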
MPLS L2VPN is typically offered and deployed by a service provider, although
some larger enterprises may operate their own internal MPLS infrastructure. Either
way, L2VPN tunnels can be established to connect sites over the MPLS fabric.
In this way, MPLS L2VPN provides a kind of “pseudo wire” between sites. It is
important to note this connection lacks the intelligence to perform MAC-learning or
other Layer 2 services. It is simply a “dumb” connection between sites.


Data Center Interconnect Technologies 3

Figure 1-37: Data Center Interconnect Technologies 3

MPLS Virtual Private LAN Service (VPLS) is another option that is typically
deployed by a service provider. Some enterprises may have their own MPLS
infrastructure, over which they may wish to deploy a VPLS solution. Unlike MPLS
L2VPN, VPLS has the intelligence to perform traditional Layer 2 functions, such as
MAC learning for each connected site. Therefore, when a device at one location
sends a unicast frame into the fabric, it can be efficiently forwarded to the correct
site. This is more efficient than having to flood the frame to all sites.
Ethernet Virtual Interconnect (EVI) is an HP proprietary technology to interconnect
data centers with Layer 2 functionality. This technology enables the transport of
L2VPN and VPLS services without the need for an underlying MPLS infrastructure. Any typical IP
routed connection between the data centers can be used to interconnect up to
eight remote sites.
The advantage of EVI is that it is very easy to configure as compared to MPLS.
MPLS requires expertise with several technologies, including IP backbone
technologies, label switching, and routing. EVI also makes it easy to optimize the
Ethernet flooding behavior.


Learning Activity: Technologies in the Data Center

Figure 1-38: Review of Technologies in the Data Center

You have learned about high-level features of various data center technologies.
Use the provided list of technologies to fill in the blanks in the chart below.

MDC FCoE EVI
EIRF VXLAN IRF
SPBM Native FC iSCSI
MPLS L2VPN SPBM TRILL
SDN VEPA MCE
EVB 5900CP 5900v
DCB MPLS VPLS

Multi-tenant support
 __________ - creates virtual devices inside a physical device. Each client has
isolated ASICs, admin login and configurations, control plane, and data plane
 __________ – aka VRF, a single physical router can host multiple virtual
routers, with separate routing tables and data planes for each customer.
However, there’s a single management plane, single admin
 __________ - Over 16 million customers can each have a separate set of
4094 VLANs


Overlay Networking
 __________ – An overlay technology in which 16 million VXLANs are shared
among all customers, requires a hardware device like the HP Comware 5930
for integration with traditional VLANs.
 __________ – Centralized control of traffic flow between devices, using the
HP VAN controller

Server access layer hypervisor integration


 __________ – a technology to allow inter-VM traffic to be handled by a
physical switch, providing more consistent inter-VM traffic flow and greater
visibility and services for this traffic.
 __________ – a protocol allowing communication between a hypervisor and a
physical switch
 __________ – provides a replacement option for a hypervisor’s built-in
vSwitch, enabling connectivity to physical switches that support VEPA

Server Access Layer convergence


 __________ – A suite of protocols used to enhance the Ethernet standard to
enable Ethernet and storage convergence by ensuring the lossless frame
delivery that is required by FC
 __________ – A single CNA is installed in a server to provide both storage
device and 10Gbps Ethernet connectivity.
 __________– A separate HBA is installed in a server to communicate with
storage devices
 __________– encapsulates traditional SCSI protocol communications inside a
TCP/IP packet and an Ethernet frame. Does not require enhancements to the
Ethernet protocol, but should be deployed in conjunction with robust QoS
capabilities and hardware switches with enhanced buffering.
 __________ – an HP device that provides native FC connectivity, converged
FCoE connectivity, and can act as a gateway between the two.

Data center interconnect technologies


 __________ – typically offered by service providers, and provides a kind of
“dumb pseudo-wire” between two sites.
 __________ – typically offered by service providers, with the intelligence to
perform MAC learning and provide more efficient forwarding of frames
between two or more sites.
 __________ – an HP proprietary technology that enables Layer 2 connectivity
between two or more sites, over any routed infrastructure. It is very easy to
configure, and to optimize flooding behavior, as compared to other
technologies.


HP Ethernet Fabric technologies for redundancy and manageability.


 __________ – combines two or more devices of the same device type, and in
the same Data Center layer, such as core layer or access layer.
 __________ – can combine up to 100 devices of various device types and
layers into one logical device

Standards-based, large-scale Layer 2 Ethernet Fabric technologies


 __________ – provides routing-like shortest-path selection for Layer 2 traffic,
load balancing based on source/destination MAC or IP addresses, limited to a
single VLAN space of 4094 VLANs.
 __________ – provides routing-like shortest-path selection for Layer 2 traffic,
load balancing based on a more deterministic method, supports multiple
VLAN spaces for greater scalability


Learning Activity: Answers


Multi-tenant support
 MDC - creates virtual devices inside a physical device. Each client has
isolated ASICs, admin login and configurations, control plane, and data plane
 MCE – aka VRF, a single physical router can host multiple virtual routers, with
separate routing tables and data planes for each customer. However, there’s a
single management plane, single admin
 SPBM - Over 16 million customers can each have a separate set of 4094
VLANs

Overlay Networking
 VXLAN – An overlay technology in which 16 million VXLANs are shared
among all customers, requires a hardware device like the HP Comware 5930
for integration with traditional VLANs.
 SDN – Centralized control of traffic flow between devices, using the HP VAN
controller

Server access layer hypervisor integration


 EVB – a technology to allow inter-VM traffic to be handled by a physical
switch, providing more consistent inter-VM traffic flow and greater visibility and
services for this traffic.
 VEPA – a protocol allowing communication between a hypervisor and a
physical switch
 5900v – provides a replacement option for a hypervisor’s built-in vSwitch,
enabling connectivity to physical switches that support VEPA

Server Access Layer convergence


 DCB – A suite of protocols used to enhance the Ethernet standard to enable
Ethernet and storage convergence by ensuring the lossless frame delivery
that is required by FC
 FCoE – A single CNA is installed in a server to provide both storage device
and 10Gbps Ethernet connectivity.
 native FC – A separate HBA is installed in a server to communicate with
storage devices
 iSCSI – encapsulates traditional SCSI protocol communications inside a
TCP/IP packet and an Ethernet frame. Does not require enhancements to the
Ethernet protocol, but should be deployed in conjunction with robust QoS
capabilities and hardware switches with enhanced buffering.
 5900CP – an HP device that provides native FC connectivity, converged FCoE
connectivity, and can act as a gateway between the two.


Data center interconnect technologies


 MPLS L2VPN – typically offered by service providers, and provides a kind of
“dumb pseudo-wire” between two sites.
 MPLS VPLS – typically offered by service providers, with the intelligence to
perform MAC learning and provide more efficient forwarding of frames
between two or more sites.
 EVI – an HP proprietary technology that enables Layer 2 connectivity between
two or more sites, over any routed infrastructure. It is very easy to configure,
and to optimize flooding behavior, as compared to other technologies.

HP Ethernet Fabric technologies for redundancy and manageability.


 IRF – combines two or more devices of the same device type, and in the
same Data Center layer, such as core layer or access layer.
 EIRF – can combine up to 100 devices of various device types and layers into
one logical device

Standards-based, large-scale Layer 2 Ethernet Fabric technologies


 TRILL – provides routing-like shortest-path selection for Layer 2 traffic, load
balancing based on source/destination MAC or IP addresses, limited to a
single VLAN space of 4094 VLANs.
 SPBM – provides routing-like shortest-path selection for Layer 2 traffic, load
balancing based on a more deterministic method, supports multiple VLAN
spaces for greater scalability


Summary

Figure 1-39: Summary

HP’s FlexFabric provides a simple, scalable, automated approach to data center
networking solutions.
HP’s FlexFabric product portfolio includes core switches like the 12900, 12500,
11900, and 7904. It also includes 5900AF, 5930, 5900CP, 5900v, and 6125XLG
access switches. For routing, the HSR 6800 and VSR are available. Improved
visibility and management functions for TRILL/SPB and FCoE/FC are available
with the IMC VAN fabric manager product.
Technologies that support multi-tenant solutions include MDC, MCE, and SPBM.
Hypervisor integration is provided by EVB and VEPA.
Overlay solutions include VXLAN and SDN, while data center interconnect
technologies include MPLS L2VPN, VPLS, EVI, and SPBM.
Large-scale Layer 2 fabrics can be deployed using TRILL or SPBM, with IRF and
EIRF providing improved manageability and redundancy.
The HP data center portfolio provides converged network support with DCB,
FCoE, and native FC.


Learning Check
After each module, your facilitator will lead a class discussion to capture key
insights and challenges from the module and any accompanying lab activity. To
prepare for the discussion, answer each of the questions below.
1. HP’s FlexFabric includes the following components (choose all that apply)?
a. Core switches
b. Aggregation switches
c. MSM 4x0-series access points.
d. Access switches.
e. The 5900CP converged switch.
f. Both physical and virtual services routers
g. HP’s IMC management platform
2. The IMC VAN fabric manager provides which three capabilities (Choose
three)?
a. Unified SPB, TRILL, and IRF fabric management
b. VPN connectivity and performance management.
c. VXLAN system management
d. Unified DCB, FCoE, and FC SAN management.
e. EVI protocol management for data center interconnects.
f. Switch and router ACL configuration management.
3. Which two statements are true about multi-tenant isolation for Layer 2?
a. VLANs provide a traditional method to isolate Layer 2 networks that is
limited to 4094 VLANs
b. With QinQ technology, up to 256 customers can each have their own set
of 4094 isolated VLANs.
c. DCB is an overlay technology that allows a converged infrastructure
d. Shortest Path Bridging MAC-in-MAC mode can support 16 million isolated
customers through the use of an I-SID
4. Which technology can extend a Layer 2 VLAN across multiple data centers
using a Layer 3 technology?
a. DCB.
b. EIRF.
c. SDN.
d. TRILL.
e. VXLAN.

1-48 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Datacenter Products and Technologies Overview

Learning Check Answers


1. a, b, d, e, f, g
2. a, d, e
3. a, d
4. e

Rev. 14.41 1-49

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

1-50 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context
Module 2

Objectives
Multitenant Device Context (MDC) is a technology that can partition a physical
device or an IRF fabric into multiple logical switches called "MDCs."
Each MDC uses its own hardware and software resources, runs independently of
other MDCs, and provides services for its own customer. Creating, starting,
rebooting, or deleting an MDC does not affect any other MDC. From the user's
perspective, an MDC is a standalone device.
After completing this module, you should be able to:
 Describe MDC features
 Explain MDC use cases
 Describe MDC architecture and operation
 Describe support for MDC on various hardware platforms
 Understand firmware updates and ISSU with MDC
 Describe supported IRF configurations with MDC

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev.14.31 2-1

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Feature overview

Figure 2-1: Feature overview

MDC Overview
Multitenant Device Context (MDC) can partition either a single physical device or
an IRF fabric into multiple logical switches called "MDCs."
With MDC, physical networking platforms, such as HP 11900, 12500, and 12900
switches can be virtualized to support multitenant networks. In other words, MDC
provides customers with 1:N device virtualization capability to virtualize one
physical switch into multiple logical switches as shown in Figure 2-1.
Other benefits of MDC include:
• Complete separation of control planes, data planes and forwarding capabilities.
• No additional software license required to enable MDC.
• Reduced power, cooling and space requirements within the data center.
• Up to 75% reduction of devices and cost when compared to deployments
without 1:N device virtualization.
• Modification of interface allocations without stopping MDCs

IRF versus MDC


What is the difference between MDC and technologies like IRF?
The main difference is that in the case of IRF (N:1 virtualization), you are
combining multiple physical devices into one logical device. With MDC on the
other hand (1:N virtualization), you are splitting either a single device or a logical
IRF device into separate, discrete logical units.
The reason for doing this is to provide network features such as VLANs, routing,
IRF and other features to different entities (customers, development network), but
still use the same hardware. Customers can also be given different feature sets
inside the same logical "big box" device. Each of the MDCs operate as a totally
independent device inside the same physical device (or IRF fabric).
2-2 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Instead of buying additional core switches for different customers or business units, a single core switch or IRF fabric can be used to provide the same hardware feature set to multiple customers or business units.

MDC features
Each MDC uses its own hardware and software resources, runs independently of
other MDCs, and provides services for its own environment. Creating, starting,
rebooting, or deleting an MDC does not affect the configuration or service of any
other MDC. From the user's perspective, an MDC is a standalone device.
Each MDC is isolated from the other MDCs on the same physical device and
cannot communicate with them via the switch fabric. To allow two MDCs on the
same physical device to communicate with each other, you must physically
connect a port allocated to one MDC to a port allocated to the other MDC using an
external cable. It is not possible to make a connection between MDCs over the
backplane of the switch.
Each MDC has its own management, control, and data planes, each with the same
capacity as the physical device. For example, if the device has a 64-KB space for ARP
entries, each MDC created on the device gets a separate 64-KB space for its own
ARP entries.
Management of MDCs on the same physical device is done via the default MDC
(admin MDC), or via management protocols such as SSH or telnet.

MDC applications
MDC can be used for applications such as the following:
• Device renting
• Service hosting
• Staging of a new network on production equipment
• Testing features such as SPB and routing that cannot be configured on a
single device
• Student labs
Instead of purchasing new devices, you can configure more MDCs on existing
network devices to expand the network.

Rev. 14.41 2-3

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

As an initial example, in the above figure a service provider provides access services to three companies, but only deploys a single physical device (or IRF stack). The provider configures an MDC for each company on the same hardware device to logically create three separate devices.
The administrators of each of the three companies can log into their allocated
MDC to maintain their own network without affecting any other MDC. The result is
the same as deploying a separate gateway for each company.
Additional use cases will be discussed later in this module.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

2-4 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

MDC Benefits overview

Figure 2-2: MDC Benefits overview

MDC benefits
Higher utilization of existing network resources and fewer hardware upgrade
costs: Instead of purchasing new devices, you can configure more MDCs on
existing network devices to expand the network. For example, when there are
more user groups, you can configure more MDCs and assign them to the user
groups. When there are more users in a group, you can assign more interfaces
and other resources to the group.
Lower management and maintenance cost: Management and maintenance of
multiple MDCs occur on a single physical device.
Independence and high security: Each MDC operates like a standalone physical
device. It is isolated from other MDCs on the same physical device and cannot
directly communicate with them. To allow two MDCs on the same physical device
to communicate, you must physically connect a cable from a port allocated to one
MDC to another port allocated to the other MDC.

Rev. 14.41 2-5

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Feature overview

Figure 2-3: Feature overview

MDC features
An MDC can be considered a standalone device. Creating, running, rebooting, or
deleting an MDC does not affect the configuration or service of any other MDC. This
is because of Comware v7's container-based, OS-level virtualization technology, as
shown in Figure 2-3.
Each MDC is a new logical device defined on the existing physical device. The
physical device could either be a single switch or an IRF fabric.
A traditional switching device has its own control, management and data planes.
When you define a new MDC, the same features and restrictions of the physical
device will apply to the new MDC and the new MDC will have separated control
and management planes. Each MDC has a separate telnet server process,
separate SNMP process, separate LACP process, separate OSPF process etc.
In addition, each MDC will also have an isolated data plane. This means that the
VLANs defined in one MDC are totally independent of the VLANs defined in a
different MDC. As an example, MDC1 can have VLANs 10, 20 and 30 configured.
MDC2 can also have VLANs 10, 20 and 30 configured, but there is no
communication between VLAN 10 on MDC1 and VLAN 10 on MDC2.
Each MDC also has its own hardware limits. This is because resources are
assigned to MDCs down to the ASIC level.
A switch configured without multiple MDCs has a limit of 4094 VLANs in the overall
chassis. However, once a new MDC is created, ASICs and line cards within the
physical device are assigned to the new MDC and can be programmed by the new
management and control plane. Each MDC is a new logical device inside the
physical device and has a separate limit of 4094 VLANs. Other features such as
the number of VRFs supported are also set per MDC and what is configured in
one MDC does not affect other MDCs limits.
In other words, if you have 4 MDCs on a chassis, the total chassis will support 4
times the hardware and software limits of the same chassis with a single MDC or a
2-6 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

traditional chassis. As an example, rather than supporting only 4094 VLANs, 4 x 4094 VLANs are supported, with a total of 16,376 VLANs supported (4094 per MDC and running 4 MDCs).
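
As a minimal illustration of this independence, the sketch below creates VLAN 10 in two hypothetical user MDCs named Prod and Dev (assumed to already exist, be started, and have their hostnames set accordingly; the prompts are illustrative). The two VLAN 10 instances never communicate with each other.

[Switch] switchto mdc Prod
<Prod> system-view
[Prod] vlan 10
[Prod-vlan10] return
<Prod> switchback
[Switch] switchto mdc Dev
<Dev> system-view
[Dev] vlan 10
[Dev-vlan10] return
<Dev> switchback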
MDCs share and compete for CPU resources. If an MDC needs a lot of CPU
resources while the other MDCs are relatively idle, the MDC can access more
CPU resources. If all MDCs need a lot of CPU resources, the physical device
assigns CPU resources to MDCs according to their CPU weights.
Use the limit-resource cpu weight command to assign CPU weights to user
MDCs.
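
A minimal sketch of this from the default MDC, assuming two user MDCs named Prod (ID 2) and Dev (ID 3); the weight values are arbitrary examples, and a higher weight simply grants that MDC a larger share when MDCs compete for CPU time:

[Switch] mdc Prod
[Switch-mdc-2-Prod] limit-resource cpu weight 8
[Switch-mdc-2-Prod] quit
[Switch] mdc Dev
[Switch-mdc-3-Dev] limit-resource cpu weight 2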

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 2-7

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Supported platforms

Figure 2-4: Supported platforms

Supported products
MDC is supported on chassis based platforms running the HP Comware 7
operating system. MDC is not supported by the HP Comware 5 operating system. As an
example, the 12500 series switches require main processing units (MPUs) running
HP Comware 7 and not HP Comware 5. This also applies to the HP 10500 series
switches.
MDC is only available on chassis based switches and not fixed port switches. This
is due to the processing and memory requirements of running separate virtual
switches within the same physical switch. If you configured three MDCs that would
require 3 x LACP process, 3 x BGP processes, 3 x OSPF processes, 3 x telnet
processes etc. Fixed port switches do not have enough memory to run multiple
MDCs and create separate instances of all processes.
In contrast, chassis based switches have the HP Comware operating system
installed on the Main Processing Unit (MPU) and may also have the HP Comware
operating system running on the line cards or Line Processing Units (LPU) with
their own memory. The chassis based switches have more memory and can
therefore run multiple MDCs.
All MDC capable devices have a "default MDC” or “admin MDC.” The default MDC
can access and manage all hardware resources. User MDCs can be created,
managed, or deleted via the default MDC. The default MDC is system
predefined and cannot be created or deleted. The default MDC always uses the
name "Admin" and the ID 1.
The number of MDCs available depends on the Main Processing Unit (MPU)
capabilities and switch generation. The supported number of MDCs is in the range
of four to nine:
• The 11900 and 12500 switch series support four MDCs.
• The HP FlexFabric 12900 switch series supports nine MDCs. This is
because the switch has enhanced memory capabilities.

2-8 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Note
When you configure MDCs, follow these restrictions and guidelines:
Only MPUs with 4-GB memory or 8-GB memory space support configuring
MDCs. The MDC feature and the enhanced IRF feature (4-chassis IRF) are
mutually exclusive. When using MDC, the IRF Fabric is currently limited to 2
nodes.

Supported products
The number of MDCs supported per LPU differs depending on LPU memory. Refer
to the tables below for summary and SKUs with LPU memory.

Note
The product details shown below are for reference only.

MDCs supported per device and LPU:

Feature   LPU with 512MB Memory   LPU with 1GB Memory   LPU with 4GB Memory
MDC       1 per module            2 per module          1 to 4 per module, depending on port groups

LPUs with 512MB Memory:


SKU Description
JC068A HP 12500 8-port 10-GbE XFP LEC Module
JC065A HP 12500 48-port Gig-T LEC Module
JC476A HP 12500 32-port 10-GbE SFP+ REC Module
JC069A HP 12500 48-port GbE SFP LEC Module
JC075A HP 12500 48-port GbE SFP LEB Module
JC073A HP 12500 8-port 10-GbE XFP LEB Module
JC074A HP 12500 48-port Gig-T LEB Module
JC064A HP 12500 32-port 10-GbE SFP+ REB Module
JC070A HP 12500 4-port 10-GbE XFP LEC Module

LPUs with 1GB Memory:


SKU Description
JC068B HP 12500 8-port 10GbE XFP LEC Module
JC069B HP 12500 48-port GbE SFP LEC Module
JC073B HP 12500 8-port 10GbE XFP LEB Module
JC074B HP 12500 48-port Gig-T LEB Module
JC075B HP 12500 48-port GbE SFP LEB Module
JC064B HP 12500 32-port 10GbE SFP+ REB Module
JC065B HP 12500 48-port Gig-T LEC Module
JC476B HP 12500 32-port 10-GbE SFP+ REC Module
JC659A HP 12500 8-port 10GbE SFP+ LEF Module
Rev. 14.41 2-9

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

JC660A HP 12500 48-port GbE SFP LEF Module


JC780A HP 12500 8-port 10GbE SFP+ LEB Module
JC781A HP 12500 8-port 10GbE SFP+ LEC Module
JC782A HP 12500 16-port 10-GbE SFP+ LEB Module
JC809A HP 12500 48-port Gig-T LEC TAA Module
JC810A HP 12500 8-port 10-GbE XFP LEC TAA Mod
JC811A HP 12500 48-port GbE SFP LEC TAA Module
JC812A HP 12500 32p 10-GbE SFP+ REC TAA Module
JC813A HP 12500 8-port 10-GbE SFP+ LEC TAA Mod
JC814A HP 12500 16p 10-GbE SFP+ LEC TAA Module
JC818A HP 12500 48-port GbE SFP LEF TAA Module

LPUs with 4GB Memory:


SKU Description
JG792A HP FF 12500 40p 1/10GbE SFP+ FD Mod
JG794A HP FF 12500 40p 1/10GbE SFP+ FG Mod
JG796A HP FF 12500 48p 1/10GbE SFP+ FD Mod
JG790A HP FF 12500 16p 40GbE QSFP+ FD Mod
JG786A HP FF 12500 4p 100GbE CFP FD Mod
JG788A HP FF 12500 4p 100GbE CFP FG Mod

Refer to device release notes to determine support. The following is an example of HP 12500-CMW710-R7328P01 support of Ethernet interface cards for ISSU and MDC:
HP SKU   Description                                            ISSU            MDC
JC075A   HP 48-Port GbE SFP LEB 12500 Module                    Not supported   Supported with limitation*
JC069A   HP 48-Port GbE SFP 12500 Module                        Not supported   Supported with limitation*
JC074A   HP 48-Port Gig-T LEB 12500 Module                      Not supported   Supported with limitation*
JC065A   HP 48-Port Gig-T 12500 Module                          Not supported   Supported with limitation*
JC076A   HP 4-Port 10GBASE-R/W LEB 12500 Module                 Not supported   Supported with limitation*
JC070A   HP 4-Port 10-GbE XFP 12500 Module                      Not supported   Supported with limitation*
JC073A   HP 8-Port 10GBASE-R/W LEB 12500 Module                 Not supported   Supported with limitation*
JC068A   HP 8-Port 10-GbE XFP 12500 Module                      Not supported   Supported with limitation*
JC064A   HP 32-Port 10GE SFP+ 12500 Module (REB)                Not supported   Supported with limitation*
JC476A   HP 32-port 10GbE SFP+ REC 12500 Module                 Not supported   Supported with limitation*
JC073B   HP 12500 8-port 10-GbE XFP LEB Module                  Supported       Supported
JC068B   HP 12500 8-port 10-GbE XFP LEC Module                  Supported       Supported
JC810A   HP 12500 8-port 10-GbE XFP LEC TAA-compliant Module    Supported       Supported
JC782A   HP 12500 16-port 10-GbE SFP+ LEB Module                Supported       Supported
JC783A   HP 12500 16-port 10-GbE SFP+ LEC Module                Supported       Supported
JC814A   HP 12500 16p 10-GbE SFP+ LEC TAA Module                Supported       Supported
JC075B   HP 12500 48-port GbE SFP LEB Module                    Supported       Supported
JC069B   HP 12500 48-port GbE SFP LEC Module                    Supported       Supported
JC660A   HP 12500 48-port GbE SFP LEF Module                    Supported       Supported
JC074B   HP 12500 48-port Gig-T LEB Module                      Supported       Supported
JC065B   HP 12500 48-port Gig-T LEC Module                      Supported       Supported
JC780A   HP 12500 8-port 10-GbE SFP+ LEB Module                 Supported       Supported
JC781A   HP 12500 8-port 10-GbE SFP+ LEC Module                 Supported       Supported
JC659A   HP 12500 8-port 10-GbE SFP+ LEF Module                 Supported       Supported
JC064B   HP 12500 32-port 10-GbE SFP+ REB Module                Supported       Supported
JC476B   HP 12500 32-port 10-GbE SFP+ REC Module                Supported       Supported
JC811A   HP 12500 48-port GbE SFP LEC TAA-compliant Module      Supported       Supported
JC809A   HP 12500 48-port Gig-T LEC TAA-compliant Module        Supported       Supported
JC818A   HP 12500 48-port GbE SFP LEF TAA-compliant Module      Supported       Supported
JC813A   HP 12500 8-port 10-GbE SFP+ LEC TAA-compliant Module   Supported       Supported
JC817A   HP 12500 8-port 10-GbE SFP+ LEF TAA-compliant Module   Supported       Supported
JC812A   HP 12500 32-port 10-GbE SFP+ REC TAA-compliant Module  Supported       Supported

* Supported with limitation: the interfaces on one card can be assigned to only one MDC.

NOTES

_______________________________________________________________

_______________________________________________________________
2-12 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Use Case 1: Datacenter change management

Figure 2-5: Use Case 1: Datacenter change management

Overview
A number of use cases are discussed in the following pages. In this first use case,
MDC is used to better handle change management procedures in a data center.
Separate MDCs are created for a production network, a quality assurance (QA)
network and a Development network. This is in line with procedures followed by
Enterprise resource planning (ERP) applications which tend to have three
separate installations.
Development Network
A separate development MDC allows for testing to be performed on a separate
logical network, but still using the same physical switches as are used in the
production network.
As an example, a customer may want to test a new load balancer for two to three
weeks. The test can be performed on a temporary basis using the development
network rather than the production network. However, as mentioned both networks
use the same physical switches.
Rather than introducing the additional risk of a new untested device in the
production network, comprehensive tests can be performed using the development
network. Features of the new device can be tested, issues resolved and updated
network configuration verified without affecting the current running network. The
additional benefit of MDC is that the test will be relevant and consistent with the
production network as the tests are being performed on the same hardware as the
production network.
Quality Assurance (QA) Network
A Quality Assurance network is an identical logical copy of the production network.
When a major change is required on the production network, the change can be
validated on the QA network. Changes such as the addition of new VLANs, new
routing protocols or new access control lists (ACLs) can be tested and validated in
advance on the QA network before deploying the change on the production
network.
Rev. 14.41 2-13

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

The advantage of using MDC in this scenario is that all the MDCs are running on
the same physical hardware. Thus the tests and configuration are validated as if
they were running on the production network. This is a much better approach than
using smaller test switches instead of actual production core switches to try to
validate changes. Using different switches does not make the QA tests 100% valid,
as there could be differences in firmware or hardware capabilities between the QA
network and the production network.

Note
The QA process will validate feature configurations, but cannot be used to test
or validate firmware updates. All MDCs in a physical device or IRF are running
the same firmware version and all MDCs will be upgraded together during a
firmware update

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

2-14 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Use Case 2: Customer isolation

Figure 2-6: Use Case 2: Customer isolation

This second use case uses MDC for customer isolation.


In a data center, multiple customers could use the same core network
infrastructure, but be isolated using traditional network isolation technologies such
as VLANS and VRFs.
A customer may however want further isolation in addition to traditional network
isolation technologies. They may want isolation of their configurations, memory
and CPU resources from other customers. MDC provides this functionality
whereas traditional technologies such as VLANs don't provide this level of
isolation.
This use case is limited by the number of supported MDCs on the physical
switches. As an example, when using a 12500 series switch with 4GB MPU, this
use case will only allow for isolation of two to three customers. This is because
one MDC is used for the Admin MDC and the switch supports a maximum of four
MDCs.
An additional use case for MDC isolation is where different levels of security are
required within a single customer network. A customer may have a lower security
level network and a higher security level network and may want to keep these
separate from each other. These networks would be separated entirely by using
multiple MDCs. This use case is however also restricted by the number of MDCs a
switch can support.

Note
MDCs are different from VRFs, as VRFs only separate the data plane and not the
management plane of a device. In the example use case of different security
level networks, multiple network administrators are involved. A lower level
security zone administrator cannot configure or view the configuration of a
higher level security zone. When configuring VRFs however, the entire
configuration would be visible to network administrators.

Rev. 14.41 2-15

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Use Case 3: Infrastructure & customer isolation

Figure 2-7: Use Case 3: Infrastructure & customer isolation

The third MDC use case splits a switch logically into two separate devices. One
MDC is used for core infrastructure and another MDC is used for customers. The
benefit here is that the core data center infrastructure network is isolated from all
customer networks. There are separate VLANs (4094), separate QinQ tags and
separate VRFs per MDC.
The data center core network is logically running a totally separate management
network independent of all customer data networks. Both management and
customer networks still use the same physical equipment.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

2-16 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Use Case 4: Hardware limitation workaround

Figure 2-8: Use Case 4: Hardware limitation workaround

In this fourth use case, MDC provides a workaround for hardware limitations on
switches. As an example, a data center may use Shortest Path Bridging MAC
mode (SPBM) or Transparent Interconnection of Lots of Links (TRILL). The current
switch ASICs cannot provide the core SPBM service and layer 3 routing services
at the same time.
SPB is essentially a replacement for Spanning Tree. One caveat of SPB is that
core devices simply switch encapsulated packets and do not read the packet
contents. This is similar to the behavior of P devices in an MPLS environment. A
core SPBM device would therefore not be able to route packets between VLANs.
An SPB edge device is typically required for the routing. SPB encapsulated
packets would be de-capsulated so that the device can view the IP frames and
perform inter-VLAN routing.
If IP routing is required on the same physical core as the device configured for
SPB, two MDCs would be configured. One MDC would be configured with SPB
and be part of the SPB network. Another MDC would then be configured that is not
running SPB to provide layer 3 functionality. A physical cable would be used to
connect the two MDCs on the same chassis switch. The SPB MDC is thus
connected to the layer 3 routing core MDC via a physical back-to-back cable.
This scenario would apply for both SPB and TRILL.

Rev. 14.41 2-17

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

MDC numbering and naming

Figure 2-9: MDC numbering and naming

MDC 1 is created by default with HP Comware 7 and is named “Admin” in the default configuration. Non-default MDCs are allocated IDs 2 and above. Names are assigned to these MDCs as desired, such as “DevTest”, “DMZ” and “Internal”, as shown in Figure 2-9.
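
As a brief sketch (prompts illustrative), these MDCs could be defined from the default MDC as follows; this only defines them and assigns their IDs, it does not start them:

[Switch] mdc DevTest id 2
[Switch-mdc-2-DevTest] quit
[Switch] mdc DMZ id 3
[Switch-mdc-3-DMZ] quit
[Switch] mdc Internal id 4
[Switch-mdc-4-Internal] quit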

2-18 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Architecture

Figure 2-10: Architecture

It is important to realize that even though MDCs look like two, three or even four
logical devices running on a physical device, there is still only one MPU with only
one CPU.
Only one kernel is booted. On top of this kernel, multiple MDC contexts will be
started and each MDC context will have its own processes and allocated
resources. But there is still only one kernel. This also explains why multiple MDCs
need to run the same firmware version.
A device supporting MDCs is an MDC itself, called the "Admin" MDC. The
default MDC always uses the name Admin and the ID 1. You cannot delete it or
change its name or ID. By default, the single kernel starts one MDC and one MDC
only (the Admin MDC). The Admin MDC is used to manage all other MDCs.
The moment a new MDC is defined, all the control plane protocols of the new
MDC will run in that MDC process group. This process group is isolated from other
process groups and they cannot interact with each other.
Processes that form part of the process group can be allocated a CPU weight to
provide more processing to specific MDCs. CPU, disk usage and memory usage
of process groups can also be restricted for any new MDC. Resource allocation
will be covered later in this module.
This restriction does not apply to the Admin MDC. The Admin MDC will always
have 100% access to the system. If necessary, it can take all CPU resources, or
use all memory, or use the entire flash system. The Admin MDC can also access
the files of the other MDCs, since these files are stored in a subfolder per MDC on
the main flash.
It is important to remember that there is still a physical MPU dependency. If the
physical MPU goes down, all of the MDCs running on top of the physical MPU will

Rev. 14.41 2-19

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

also go down. That is why it is worth considering the use of an IRF fabric for high
availability.
As an example, two core physical chassis switches are configured as an IRF
fabric. In addition, three MDCs are configured.
If the first physical switch is powered off, all MDCs (three in this example) will
experience an IRF master failure and will activate the subordinate (the second
chassis) as the new master.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

2-20 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Architecture, Control Plane

Figure 2-11: Architecture, Control Plane

When a new MDC is defined, the MDC can be started. A new control plane is
configured for the MDC. However, the MDC only has access to the Main
Processing Unit (MPU). No line cards or interfaces are available to the MDC until
they have been assigned by an administrator to the MDC.
This is similar to booting a chassis with only the MPU and no Line Processing
Units (LPU) / line cards inserted in the chassis.
Using the display interface brief command for example would show no
interfaces.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 2-21

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Architecture, ASICs

Figure 2-12: Architecture, ASICs

How do you assign line card interfaces to an MDC?


Because of the hardware restrictions on devices, the interfaces on some interface
cards are grouped. Interfaces therefore need to be allocated to the MDC per ASIC
(port group).
It is important to understand how ASICs are used within a chassis based switch.
In a chassis, each of the line cards has one or more local ASICs. This affects the
data plane of the switch as the data plane packet processing is done by the ASIC.
When packets are received by the switch, functions such as VLAN lookups, MAC
address lookups and so on are performed by ASICs. These ASICs also hold the
VLAN table or the IP routing table.
One ASIC can be used by multiple physical interfaces. As an example, one ASIC
on the line card can be used by 24 Gigabit Ethernet ports. Depending on the line
card models there may be up to 4 ASICs on a physical line card. Another example
is a 48 Gigabit Ethernet port line card which could have only two ASICs.

2-22 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Architecture, ASIC control

Figure 2-13: Architecture, ASIC control

Why is this important to understand? Because each of these ASICs has its own
hardware resources and limits. For each ASIC, as an example, there is a limit of
4094 VLANs.
The moment you define a new VLAN at the global chassis level, that VLAN will be
programmed by the control plane into each of the ASICs on the chassis. If there
are six different ASICs on a line card, each ASIC will be programmed with all
globally configured VLANs. In a normal chassis all the ASICs are used by the
MPU, so they are programmed by the single control plane.
Each ASIC can only have one control plane or ASIC programming process. The
ASIC can have only one master and cannot be configured by other control planes.
When creating a new MDC, a new control plane is created. Two control planes
cannot modify the same ASIC.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 2-23

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Architecture, ASIC control

Figure 2-14: Architecture, ASIC control

By default, all ASICs and line cards are controlled by the Admin MDC. When
creating a new MDC, the control of an ASIC can be changed from the default
Admin MDC to that new MDC. This results in all physical interfaces that are bound
to the ASIC also being moved to the new MDC. Individual interfaces cannot be
assigned to an MDC. They are assigned indirectly to the MDC when the ASIC they
use is assigned to the MDC.
All interfaces which are managed by one ASIC must be assigned to the same
MDC. For example 10500/11900/12900 series switches only support one MDC per
LPU. In the configuration, this is enforced by the CLI through port-groups. All
interfaces which are bound to the same ASIC must be assigned as a port-group to
an MDC.
Example 12500/12500E LPU MDC Port Group Implementation:

LPU type        Port groups per LPU   Ports per group   Port numbers
48-port GE      2 port groups         24                (1 to 24), (25 to 48)
8-port 10GE     4 port groups         2                 (1,2), (3,4), (5,6), (7,8)
16-port 10GE    4 port groups         4                 (1,3,5,7), (2,4,6,8), (9,11,13,15), (10,12,14,16)
32-port 10GE    4 port groups         8                 (1,3,5,7,9,11,13,15), (2,4,6,8,10,12,14,16),
                                                        (17,19,21,23,25,27,29,31), (18,20,22,24,26,28,30,32)
40-port 10GE    2 port groups         20                (1 to 20), (21 to 40)
48-port 10GE    4 port groups         12                (1 to 12), (13 to 24), (25 to 36), (37 to 48)
16-port 40GE    4 port groups         4                 (1 to 4), (5 to 8), (9 to 12), (13 to 16)
4-port 100GE    2 port groups         2                 (1 to 2), (3 to 4)

HP Comware 7 will indicate which ports belong to a port group. The following sample
configuration shows 11900 MDC port group allocation:
[DC1-SPINE-1-mdc-2-mdc2]allocate interface FortyGigE 1/1/0/1
Configuration of the interfaces will be lost. Continue? [Y/N]:y
Group error: all interfaces of one group must be allocated to the
same mdc.
FortyGigE1/1/0/1

Port list of group 5:


FortyGigE1/1/0/1 FortyGigE1/1/0/2
FortyGigE1/1/0/3 FortyGigE1/1/0/4
FortyGigE1/1/0/5 FortyGigE1/1/0/6
FortyGigE1/1/0/7 FortyGigE1/1/0/8
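
The error output above lists the eight interfaces that make up port group 5. Assuming all of them are intended for this MDC, the whole group could then be allocated in a single command, as in the following sketch (the confirmation prompt is illustrative):

[DC1-SPINE-1-mdc-2-mdc2]allocate interface FortyGigE 1/1/0/1 to FortyGigE 1/1/0/8
Configuration of the interfaces will be lost. Continue? [Y/N]:y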

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 2-25

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Architecture, hardware limits

Figure 2-15: Architecture, hardware limits

In addition to a new control plane being created, hardware limits change with the
creation of a new MDC.
As an example, if 1000 VLANs were created using the Admin MDC, these VLANs
would be programmed on each ASIC that is associated with the Admin MDC.
However, ASICs associated with another MDC, such as the Development MDC,
will not have the 1000 VLANs programmed. They only have the VLANs configured
by an administrator of the Development MDC. The control plane of the Admin
MDC does not control and can therefore not program the ASICs associated with
the Development MDC.
If VLAN 10 was configured on the Admin MDC, that VLAN is not programmed onto
the ASICs of the Development MDC. VLAN 10 would only be programmed on the
ASICs if VLAN 10 was configured on the Development MDC. However, VLAN 10
on the Admin MDC is different and totally independent from VLAN 10 on the
Development MDC. MAC addresses learned in the Development MDC are
different from the MAC addresses learned in the Admin MDC.
There is no control plane synchronization between the ASICs of different MDCs.
By default there is only one MDC and all ASICs have the same VLAN information.
However, as soon as multiple MDCs are created, each ASIC in a different MDC is
in effect part of a different switch, controlled and programmed separately. This
principle applies to all the resources and features such as access lists, VRFs, VPN
instances, routing table sizes etc.
This also means that if any MDC is running out of hardware resources at the ASIC
level, the resource shortage will not impact any of the other MDCs.
This is ideal for heavy load environments. Customers could stress test a network
with many VRFs, access lists or quality of service (QoS) rules without affecting
other MDCs. A development MDC could run out of resources without affecting the
production MDC for example.

2-26 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

However, while there is isolation of the data plane by isolating the ASICs, this is
not the case for a number of other components. Switch hardware resources such
as CPU, physical memory and the flash file systems are shared between MDCs.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 2-27

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Architecture (continued)

Figure 2-16: Architecture (continued)

Each MDC has its own configuration files on a dedicated part of the disk. An MDC
administrator can therefore only modify or restart their own MDC.
Access to the switch CPU and physical memory by MDCs can also be restricted.
There is also good isolation and separation of MDC access to these resources.
For the file system however, there is only one file system available on the flash
card. The Admin MDC (which is the original MDC) has root access to the file
system. This MDC has total control of the flash and has the privileges to perform
operations such as formatting the file system. Any file system operations such as
formatting the flash or using fixdisk are only available from the Admin MDC.
Configurations saved from the Admin MDC are typically saved to the root of the file
system. Other MDCs only have access to a subset of the file structure. This is
based on the MDC identifier. When a new MDC is defined, a folder is created on
flash with the MDC identifier. MDC 2 for example, has a folder "2" created for it on
flash. All files saved by MDC 2 are stored in this subfolder. Additionally, any file
operations such as listing directories and files on the flash using DIR will only show
files within this subfolder. From within the MDC, it appears that root access is
provided, but in effect, only a subfolder is made available to the MDC.
The Admin MDC can view all the configuration files of other MDCs as they are
subfolders in the root of the file system. This is something to consider in specific
use cases.
Within the other MDCs, only the local MDC files are visible. MDC 2 would not be
able to view the files of Admin MDC or other MDCs (such as MDC 3).
The Admin MDC can also be used to monitor and restrict the file space made
available to other MDCs. The Admin MDC has full access and unlimited control
over the file system, but other MDCs can be restricted from the Admin MDC if
required.

2-28 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Architecture, Console Ports

Figure 2-17: Architecture, Console Ports

Console Port
Other components that are shared between the MDCs are the console port and
the Management-Ethernet ports. The console port and AUX port of the physical
chassis always belong to the Admin MDC (default MDC). Other MDCs do not have
access to the physical console or AUX ports.
To access the console of the other MDCs, first access the admin MDC console
and then use the switchto mdc command to switch to the console of a specific
MDC. This is similar to the Open Application Platform (OAP) connect functionality
used to connect to the console of subslots on other devices like the unified
wireless controllers.
Management-Ethernet ports
The management interfaces of all MDCs share the same physical out-of-band
(OOB) management Ethernet port. Unlike the console port, you cannot reach this
interface by using the switchto command.
The management Ethernet interface is shared between all MDCs. When a new
MDC is created, the system automatically shows the Management Ethernet
interface of the MPU inside the MDC.
You must assign different IP addresses to the Management-Ethernet interfaces so
MDC administrators can access and manage their respective MDCs. The IP
addresses for the management Ethernet interfaces do not need to belong to the
same network segment.
The interface can be configured from within each MDC as the interface is shared
between all the MDCs. This means that the physical interface will accept
configurations from all the MDCs. Network administrators or operators of the
MDCs will need to agree on the configuration of the Management-Ethernet port.
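
A minimal sketch of such an arrangement is shown below, with each MDC giving the shared port its own address from within its own context. The interface name M-GigabitEthernet 0/0/0 and the addresses are assumptions for illustration and vary by platform:

From the default (Admin) MDC:
[Switch] interface M-GigabitEthernet 0/0/0
[Switch-M-GigabitEthernet0/0/0] ip address 10.0.0.1 24

From within a user MDC named Dev:
[Dev] interface M-GigabitEthernet 0/0/0
[Dev-M-GigabitEthernet0/0/0] ip address 10.0.0.2 24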

Rev. 14.41 2-29

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Design considerations

Figure 2-18: Design considerations

ASIC restrictions
When designing an MDC solution, remember that ASIC binding determines the
interface grouping that will need to be allocated to an MDC. Interfaces have to be
assigned per ASIC.
Some of the line cards only have a single ASIC. This means that all the interfaces
on the line card will need to be assigned to or removed from an MDC at the same
time.
Some line cards may have 2 or more ASICs. This allows for a smaller number of
interfaces to be assigned to an MDC at the same time.
The second consideration is that the number of MDCs will depend on the MPU
generation and memory size.
The interfaces in a group must be assigned to or removed from the same MDC at
the same time. You can see how the interfaces are grouped by viewing the output
of the allocate interface or undo allocate interface command:
• If the interfaces you specified for the command belong to the same group
or groups and you have specified all interfaces in the group or groups for
the command, the command outputs no error information.
• Otherwise, the command displays the interfaces that failed to be assigned
and the interfaces in the same group or groups.
Assigning or reclaiming a physical interface restores the settings of the interface to
the defaults. For example, if the MDC administrator configures the interface, and
later on the interfaces are assigned to a different MDC, the interface configuration
settings are lost.
To assign all physical interfaces on an LPU to a non-default MDC, you must first
reclaim the LPU from the default MDC by using the undo location and undo
allocate commands. If you do not do so, some resources might still be occupied
by the default MDC.
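
A hedged sketch of such a reclaim is shown below. It assumes the LPU sits in slot 3, its interfaces are GigabitEthernet 3/0/1 to 3/0/48, and the default MDC view is entered with the mdc Admin command; all of these values are illustrative:

[Switch] mdc Admin
[Switch-mdc-1-Admin] undo allocate interface GigabitEthernet 3/0/1 to GigabitEthernet 3/0/48
[Switch-mdc-1-Admin] undo location slot 3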
Platforms
The number of MDCs supported by a platform also needs to be considered. This
depends on the MPU platform as well as the MPU generation.
You can create MDCs only on MPUs with a memory space that is equal to or
greater than 4 GB. The maximum number of non-default MDCs depends on the
MPU model.
Refer to page 2-9 for more details.

2-30 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Basic configuration steps

Figure 2-19: Basic configuration steps

Overview
In the following pages, the configuration steps for creating and enabling an MDC
will be discussed. Basic MDC configuration is discussed first and then advanced
configuration options such as setting resource limits will be covered.
Basic configuration steps
Step 1: Define the new MDC with the new ID and a new name.
Step 2: Authorize the MDC to use specific line cards. ASICs are not assigned at
this point. Authorization is given so the next step can be used to assign interfaces.
Step 3: Allocate interfaces to the MDC. Remember to allocate per ASIC group.
Step 4: Start the MDC. This starts the new MDC control plane.
Step 5: Access the MDC console by using the switchto command.
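
A compact sketch of these five steps for a hypothetical MDC named Dev with ID 2, using the line card in slot 2 (prompts, slot, and interface numbers are illustrative, and the interface range must match the card's port groups):

[Switch] mdc Dev id 2
[Switch-mdc-2-Dev] location slot 2
[Switch-mdc-2-Dev] allocate interface GigabitEthernet 2/0/1 to GigabitEthernet 2/0/48
Configuration of the interfaces will be lost. Continue? [Y/N]:y
[Switch-mdc-2-Dev] mdc start
[Switch-mdc-2-Dev] quit
[Switch] switchto mdc Dev

From there, enter system view inside the new MDC and configure it as you would a standalone switch. These individual steps are detailed on the following pages.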

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 2-31

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration step 1: Define a new MDC

Figure 2-20: Configuration step 1: Define a new MDC

Step 1: Define the new MDC with the new ID and a new name.
This command needs to be entered from within the default Admin MDC. You
cannot enter this command from a non-default MDC.
From the default MDC enter system view. Next, define a new MDC by specifying a
name of your choice and ID of the MDC. This ID is used for the subfolder on the
flash file system.
Once the MDC is configured, a new process group is defined. The process group
is not started at this point as the MDC needs to be manually started in step 4.
To create an MDC:
Step                    Command                      Remarks
1. Enter system view.   system-view
2. Create an MDC.       mdc mdc-name [ id mdc-id ]   By default, there is a default MDC with the
                                                     name Admin and the ID 1. The default MDC is
                                                     system predefined. You do not need to create
                                                     it, and you cannot delete it.
                                                     The MDC starts to work after you execute the
                                                     mdc start command.
                                                     This command is mutually exclusive with the
                                                     irf mode enhanced command.

2-32 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

Configuration step 2: Authorize MDC for a line card

Figure 2-21: Configuration step 2: Authorize MDC for a line card

When you create an MDC, the system automatically assigns CPU, storage space,
and memory space resources to the MDC to ensure its operation. You can adjust
the resource allocations as required (this is discussed in more detail later in this
module).
An MDC needs interfaces to forward packets. However, the system does not
automatically assign interfaces to MDCs and you must assign them manually.
By default, a non-default MDC can access only the resources on the MPUs. All
LPUs of the device belong to the default MDC and a non-default MDC cannot
access any LPUs or resources on the LPUs. To assign physical interfaces to an
MDC, you must first authorize the MDC to use the interface cards to which the
physical interfaces belong.
Step 2 is to authorize the MDC to access interfaces of a specific line card. This
command is entered from the non-default MDC context. In the figure, MDC 2 with
the name Dev is authorized to allocate interfaces on the line card in slot 2.
This command does not assign any of the interfaces to the MDC at this point. It
only authorizes the assignment of the interfaces on that line card. Interfaces will be
assigned to the MDC in step 3.
Multiple MDCs can be authorized to use the same interface card.
To authorize an MDC to use an interface card:
Step                        Command                            Remarks
1. Enter system view.       system-view
2. Enter MDC view.          mdc mdc-name [ id mdc-id ]
3. Authorize the MDC to     In standalone mode:                By default, all interface cards of the device
   use an interface card.   location slot slot-number          belong to the default MDC, and a non-default
                            In IRF mode:                       MDC cannot use any interface card.
                            location chassis chassis-number    You can authorize multiple MDCs to use the
                            slot slot-number                   same interface card.

Rev. 14.41 2-33

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration step 3: Allocate interfaces per ASIC

Figure 2-22: Configuration step 3: Allocate interfaces per ASIC

By default, all physical interfaces belong to the default MDC, and a non-default
MDC has no physical interfaces to use for packet forwarding. To enable a non-
default MDC to forward packets, you must assign it interfaces.
The console port and AUX port of the device always belong to the default MDC
and cannot be assigned to a non-default MDC.

Important
! When you assign physical interfaces to MDCs on an IRF member device,
make sure the default MDC always has at least one physical IRF port in the up
state. Assigning the default MDC's last physical IRF port in the up state to a
non-default MDC splits the IRF fabric. This restriction does not apply to 12900
series switches.

Only a physical interface that belongs to the default MDC can be assigned to a
non-default MDC. The default MDC can use only the physical interfaces that are
not assigned to a non-default MDC.
One physical interface can belong to only one MDC. To assign a physical interface
that belongs to a non-default MDC to another non-default MDC, you must first
remove the existing assignment by using the undo allocate interface command.
Assigning a physical interface to or reclaiming a physical interface from an MDC
restores the settings of the interface to the defaults.
Remember that because of hardware restrictions, the interfaces on some interface
cards are grouped. The interfaces that form part of the ASIC group may vary
depending on the line card and the interfaces in a group must be assigned to the
same MDC at the same time.
When interfaces are allocated to the new MDC, they are removed from the default
MDC and moved to the specified non-default MDC. All current interface
configuration is reset on the interfaces when moved to the new MDC. These
interfaces appear as new interfaces in the MDC. They will thus be assigned by
default to VLAN 1. In the figure, interfaces Gigabit Ethernet 2/0/1 to 2/0/48 have
been allocated to MDC 2, named Dev. To configure parameters for a physical
interface assigned to an MDC, you must log in to the MDC.
2-34 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Multitenant Device Context (MDC)

In IRF mode on 12500 series switches, you must assign non-default MDCs
physical interfaces for establishing IRF connections. A non-default MDC needs to
use the physical IRF ports to forward packets between member devices. This is
discussed in more detail later in this module.
After you change the configuration of a physical IRF port, you must use the save
command to save the running configuration. Otherwise, after a reboot, the master
and subordinate devices in the IRF fabric have different physical IRF port
configurations and you must use the undo allocate interface command and
the undo port group interface command to restore the default and reconfigure
the physical IRF port.
Configuration Procedure:

Step 1: Enter system view.
  Command: system-view

Step 2: Enter MDC view.
  Command: mdc mdc-name [ id mdc-id ]

Step 3: Assign physical interfaces to the MDC. Use either or both approaches.
  Approach 1 (assign individual interfaces to the MDC):
    allocate interface { interface-type interface-number }&<1-24>
  Approach 2 (assign a range of interfaces to the MDC):
    allocate interface interface-type interface-number1 to interface-type interface-number2
  Remarks: By default, all physical interfaces belong to the default MDC, and a non-default MDC has no physical interfaces to use. You can assign multiple physical interfaces to the same MDC.
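
A minimal sketch matching the example in the figure, assuming the card in slot 2 has already been authorized for the MDC named Dev (the interface range is illustrative):

    system-view
    mdc Dev
     allocate interface GigabitEthernet2/0/1 to GigabitEthernet2/0/48
     quit

When the assignment is confirmed, the interfaces are reset to their default settings and appear as new interfaces inside the Dev MDC.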


Configuration step 4: Start MDC

Figure 2-23: Configuration step 4: Start MDC

Once interfaces are assigned to the MDC, the MDC can be started. The start
command starts the control plane and management plane of the MDC. The data
plane will be active for any interfaces which have been allocated to this MDC at
the moment the MDC is started.
At this point you may notice that the total memory utilization of the switch will
increase. This is because multiple additional processes for the MDC are being
started.
To start an MDC:

Step 1: Enter system view.
  Command: system-view

Step 2: Enter MDC view.
  Command: mdc mdc-name [ id mdc-id ]

Step 3: Start the MDC.
  Command: mdc start
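
For example, a minimal sketch assuming the Dev MDC created in the previous steps (the name is illustrative):

    system-view
    mdc Dev
     mdc start

From the default MDC, a display command such as display mdc can then be used to confirm that the MDC has moved to the active state.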

Important
! If you access the BootWare menus and select the Skip Current System
Configuration option while the device starts up, all MDCs will start up without
loading any configuration file.


Configuration step 5: Access the MDC

Figure 2-24: Configuration step 5: Access the MDC

A non-default MDC operates as if it were a standalone device. From the system view of the default MDC, you can log in to a non-default MDC and enter MDC
system view.
In the example in the figure, the console is switched to the Dev MDC from the
Admin MDC. The prompt will display as if you are accessing a new console
session. Within the Dev MDC, you will need to enter the system-view again to
configure the switch. In this example the host name is changed to Dev for the Dev
MDC.
In MDC system view, you can assign an IP address to the Management-Ethernet
interface, or create a VLAN interface on the MDC and assign an IP address to the
interface. This will allow administrators of the MDC to log in to the MDC by using
Telnet or SSH.
To return from a user MDC to the default MDC, use the switchback or quit
command. In this example the switchback command is used to return to the
Admin MDC and the output shows the switch name as switch.
To log in to a non-default MDC from the system view of the default MDC:

Step 1: Enter system view.
  Command: system-view

Step 2: Log in to an MDC.
  Command: switchto mdc mdc-name
  Remarks: You can use this command to log in only to an MDC that is in the active state.
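
A minimal sketch of the session shown in the figure, assuming the Dev MDC is active (prompts are abbreviated and illustrative):

    [Switch] switchto mdc Dev
    <Switch> system-view
    [Switch] sysname Dev
    [Dev] switchback
    [Switch]

After switchto, the session behaves like a new console session on a separate switch; switchback (or quit) returns the session to the default MDC.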


MDC advanced configuration topics

Figure 2-25: MDC advanced configuration topics

Once basic configuration has been completed, multiple advanced options can be
configured.
Options such as restricting MDC resource access to CPU, memory and file system
access will be discussed in the following pages. Configuration of the Management-
Ethernet interface and firmware updates will also be discussed.
Resource allocation to MDCs is explained below; values may be modified if required. The default values shown will fit most customer deployments:

CPU weight
  • Used to assign MPU and LPU CPU resources to each MDC according to their CPU weights. When MDCs need more CPU resources, the device assigns CPU resources according to their CPU weights.
  • Specify CPU weights for MDCs using the limit-resource cpu weight command.
  • Default: 10 (maximum 100). By default, the default MDC has a CPU weight of 10 (unchangeable) on each MPU and each interface card, and each non-default MDC has a CPU weight of 10 on each MPU and each interface card that it is authorized to use.

Disk space
  • Used to limit the amount of disk space each MDC can use for configuration and log files.
  • Specify disk space percentages for MDCs using the limit-resource disk command.
  • Default: 100% (maximum 100%). By default, all MDCs share the disk space in the system, and an MDC can use all free disk space in the system.

Memory space
  • Used to limit the amount of memory space each MDC can use.
  • Specify memory space percentages for MDCs using the limit-resource memory command.
  • Default: 100% (maximum 100%). By default, all MDCs share the memory space in the system, and an MDC can use all free memory space in the system.

Although fabric modules are shared by MDCs, traffic between MDCs is isolated because the source and destination packet processors within the chassis are isolated.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Restricting MDC resources: Limit CPU

Figure 2-26: Restricting MDC resources: Limit CPU

All MDCs are authorized to use the same share of CPU resources. If one MDC
takes too many CPU resources, the other MDCs might not be able to operate. To
ensure correct operation of all MDCs, specify a CPU weight for each MDC.
The amount of CPU resources an MDC can use depends on the percentage of its
CPU weight among the CPU weights of all MDCs that share the same CPU. For
example, if three MDCs share the same CPU, setting their weights to 10, 10, and 5
is equivalent to setting their weights to 2, 2, and 1:
• The two MDCs with the same weight can use the CPU for approximately
the same period of time.
• The third MDC can use the CPU for about half of the time for each of the
other two MDCs.
The CPU weight specified for an MDC takes effect on all MPUs and all LPUs that
the MDC is authorized to use.
The resource limits are only used if required. If an MDC does not require any of
the CPU resources, other MDCs can use all the available CPU. In other words,
there is no hard limit on the CPU usage when CPU resources are available.


To specify a CPU weight for an MDC:

Step 1: Enter system view.
  Command: system-view

Step 2: Enter MDC view.
  Command: mdc mdc-name [ id mdc-id ]

Step 3: Specify a CPU weight for the MDC.
  Command: limit-resource cpu weight weight-value
  Remarks: By default, the default MDC has a CPU weight of 10 (unchangeable) on each MPU and each interface card, and each non-default MDC has a CPU weight of 10 on each MPU and each interface card that it is authorized to use.
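
A minimal sketch, assuming a non-default MDC named Dev that should receive roughly twice the CPU share of an MDC left at the default weight of 10 (the name and value are illustrative):

    system-view
    mdc Dev
     limit-resource cpu weight 20

Remember that the weight is only enforced when MDCs compete for the same CPU; idle CPU cycles remain available to any MDC.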

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Restricting MDC resources: Limit memory

Figure 2-27: Restricting MDC resources: Limit memory

By default, MDCs on a device share and compete for the system memory space.
All MDCs share the memory space in the system, and an MDC can use all free
memory space in the system. If an MDC takes too much memory space, other
MDCs may not be able to operate normally. To ensure correct operation of all
MDCs, specify a memory space percentage for each MDC to limit the amount of
memory space each MDC can use.
The memory space to be assigned to an MDC must be greater than the memory
space that the MDC is using. Before you specify a memory space percentage for
an MDC, use the mdc start command to start the MDC and use the display
mdc resource command to view the amount of memory space that the MDC is
using.

Note
An MDC cannot use more memory than the allocated value specified by the
limit-resource memory command. This is in contrast to CPU resource limit
which is a weighted value.


To specify a memory space percentage for an MDC:

Step 1: Enter system view.
  Command: system-view

Step 2: Enter MDC view.
  Command: mdc mdc-name [ id mdc-id ]

Step 3: Specify a memory space percentage for the MDC.
  Command (standalone mode): limit-resource memory slot slot-number ratio limit-ratio
  Command (IRF mode): limit-resource memory chassis chassis-number slot slot-number ratio limit-ratio
  Remarks: By default, all MDCs share the memory space in the system, and an MDC can use all free memory space in the system.
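
A minimal sketch on a standalone chassis, assuming the Dev MDC has already been started and should be limited to 30 percent of the memory on the card in slot 2 (values are illustrative; check current usage with display mdc resource first):

    system-view
    display mdc resource
    mdc Dev
     limit-resource memory slot 2 ratio 30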

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Restricting MDC resources: Limit storage

Figure 2-28: Restricting MDC resources: Limit storage

By default, MDCs on a device share and compete for the disk space of the
device's storage media, such as the Flash and CF cards. An MDC can use all free
disk space in the system.
If an MDC occupies too much disk space, the other MDCs might not be able to
save information such as configuration files and system logs. To prevent this,
specify a disk space percentage for each MDC to limit the amount of disk space
each MDC can use for configuration and log files.
Before you specify a disk space percentage for an MDC, use the display mdc resource command to view the amount of disk space the MDC is using. The amount of disk space indicated by the percentage must be greater than the amount the MDC is currently using. Otherwise, the MDC cannot apply for more disk space and no more folders or files can be created or saved for the MDC.
If the device has more than one storage medium, the disk space percentage
specified for an MDC takes effect on all the media.
To specify a disk space percentage for an MDC:

Step 1: Enter system view.
  Command: system-view

Step 2: Enter MDC view.
  Command: mdc mdc-name [ id mdc-id ]

Step 3: Specify a disk space percentage for the MDC.
  Command (standalone mode): limit-resource disk slot slot-number ratio limit-ratio
  Command (IRF mode): limit-resource disk chassis chassis-number slot slot-number ratio limit-ratio
  Remarks: By default, all MDCs share the disk space in the system, and an MDC can use all free disk space in the system.
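
A minimal sketch on a standalone chassis, assuming the Dev MDC should be limited to 20 percent of the disk space on the storage medium of slot 2 (values are illustrative):

    system-view
    mdc Dev
     limit-resource disk slot 2 ratio 20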


Management Ethernet

Figure 2-29: Management Ethernet

When a non-default MDC is created, the system automatically provides access to
the Management Ethernet interface of the MPU. The Management-Ethernet
interfaces of all non-default MDCs use the same interface type and number and
the same physical port and link as the default MDC's physical Management-
Ethernet interface. However, you must assign a different IP address to the
Management-Ethernet interface so MDC administrators can access and manage
their respective MDCs. The IP addresses for the Management-Ethernet interfaces
do not need to belong to the same network segment.
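
A minimal sketch, assuming the platform names the management port M-GigabitEthernet0/0/0 and that 10.0.0.12/24 is a free address for the Dev MDC (the interface name and addressing are assumptions; adjust them to your platform and addressing plan):

    switchto mdc Dev
    system-view
    interface M-GigabitEthernet0/0/0
     ip address 10.0.0.12 255.255.255.0

Telnet or SSH access can then be enabled inside the MDC so that its administrators can reach it directly over this address.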


Device firmware updates

Figure 2-30: Device firmware updates

To run Comware 7, MPUs must be fitted with 4GB SDRAM and also have a CF
card of at least 1 GB in size. 4 GB SDRAM is fitted as standard in the JC072B and
the JG497A, but the JC072A must be upgraded from 1 GB to 4 GB of SDRAM by
using two memory upgrade kits (2 x JC609A). If required, 1 GB CF cards
(JC684A) are available for purchase. If an upgraded JC072A needs to be returned
for repair, be sure to retain the upgrade parts for use in the replacement unit.
Due to physical memory limits, interface cards with 512 MB memory do not
support ISSU, and the interfaces on each of these cards can be assigned to only
one MDC. Except for these ISSU and MDC limitations, these cards provide full
support for all other features.
Refer to page 2-9 for more detail.


Network Virtualization Types

Figure 2-31: Network Virtualization Types

In this section, MDC and IRF interoperability will be discussed.


IRF
Refer to the left hand figure above. The network virtualization shown in the figure
is the combination of multiple physical switches configured as a single logical
fabric using IRF. Distributed link aggregation could then be used to connect
multiple physical cables to the separate physical switches as a single logical link
connected to a single logical device. Multi-Chassis Link Aggregation (MLAG) could
be used for link aggregation between the IRF fabric and other switches.
IRF supports both 2 and 4 chassis configurations.
MDC
The middle figure above shows MDC on a single physical switch. This has been
discussed at length previously in this module. We have discussed how the MDC
technology provides multi tenant device contexts, where multiple virtual or logical
devices are created on a single physical chassis.
Each of these logical contexts provides unique VLAN and VRF resources and also
provides hardware isolation inside the same physical chassis.
MDC and IRF
Although MDC can be deployed on a single chassis with redundant power
supplies, redundant management modules (MPUs) and redundant line cards
(LPUs), most customers have MDC deployed together with HP Intelligent Resilient
Framework (IRF).
IRF N:1 device virtualization together with MDC 1:N virtualization achieves a
combined N:1 + 1:N device virtualization solution as shown in the right hand figure
above. This achieves higher port densities together with chassis redundancy.
Currently, only 2-chassis IRF & MDC is supported.
The right hand figure above shows MDC and IRF combined to provide a single
virtual device with multiple device contexts. In this example, two physical switches
are virtualized using IRF to create a single logical switch. The IRF fabric is then
carved up into multiple MDCs to provide IRF resiliency for each of the MDCs
defined in the IRF fabric.
This would be used to provide a common control plane, data plane and
management plane for each MDC across 2 physical systems.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


IRF-Based MDCs

Figure 2-32: IRF-Based MDCs

When you configure MDCs, follow these guidelines:


• To configure both IRF and MDCs on a device, configure IRF first.
Otherwise, the device will reboot and load the master's configuration rather
than its own when it joins an IRF fabric as a subordinate member, and
none of its settings except for the IRF port settings take effect.
• Before assigning a physical IRF port to an MDC or reclaiming a physical
IRF port from an MDC, you must use the undo port group interface
command to restore the default. After assigning or reclaiming a physical
IRF port, you must use the save command to save the running
configuration.
By default, when a new IRF fabric is created, only the default Admin MDC is
created on the IRF fabric. All line cards are assigned to the Admin MDC by default.
Line cards and interfaces will then need to be manually assigned to other MDCs
as required.
It is important to note that at the time of this writing only 2 chassis IRF fabrics are
currently supported in conjunction with the MDC feature. A 4 chassis IRF fabric
which provides greater IRF scalability is not currently supported with MDC.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


IRF-Based MDCs

Figure 2-33: IRF-Based MDCs

As discussed previously, any new MDCs need to be authorized to use line cards
before interfaces can be allocated to the MDC. Once authorized, port groups are
used to allocate interfaces to the MDC.
What kind of combinations would be possible with IRF and MDCs?
The figure above shows various MDC and IRF scenarios.
The first scenario is the most typical. Each MDC is allowed to allocate resources
on both chassis 1 and chassis 2. This will provide redundancy for each of the
configured MDCs.
This is not a required configuration. An MDC can be created without redundancy
(as shown in the second scenario). In this example, only specific line cards on
chassis 1 in the IRF fabric have been allocated to MDC 4. MDC4 does not have
any IRF redundancy on chassis 2. The other MDCs have redundancy and have
line cards allocated on both chassis 1 and chassis 2 in the IRF fabric.
In the third scenario, both MDC 3 and 4 have line cards allocated only on chassis
1, while MDC 1 and 2 have line cards allocated from both chassis 1 and 2. MDC 1
and 2 have redundancy in case of a chassis failure, but MDC 3 and 4 do not have
any redundancy if chassis 1 fails.
In the same way, as seen in the fourth scenario, MDC 1 is only configured on
chassis 1, while MDCs 2, 3 and 4 are only configured on chassis 2. This is also a
supported configuration.
Scenario 5 and 6 show other supported variations of how MDCs can be configured
within an IRF fabric.
As can be seen, various combinations are possible and the administrator can
decide where MDCs operate. There is no limitation on where the MDCs need to be
configured on the chassis devices in the IRF fabric.


MDCs and IRF types

Figure 2-34: MDCs and IRF types

Overview
There are two ways to configure IRF in combination with MDC. This is dependent
on the switch generation.
The method used by the 12500 and 12500E Series Switches has separate IRF
links per MDC. The alternate method used on the 10500, 11900 and 12900 Series
Switches uses a shared IRF link for all MDCs.
12500/12500E
When configuring IRF on the 12500/12500E Series Switches, a dedicated IRF link
per MDC is required.
For MDC 2 on chassis 1 to communicate with MDC 2 on chassis 2, a dedicated
IRF port needs to be configured on both chassis switches that are physically part
of that MDC. For example, if line card 2 was assigned to MDC 2, then you would
need to assign a physical port on line card 2 as an IRF port for MDC 2. If line card
3 was assigned to MDC 3, then a physical port on line card 3 would need to be
configured as an IRF port for MDC 3. This would be configured for each MDC.
This configuration also results in all data packets for an MDC using the dedicated
IRF port between the two chassis switches. As an example, if data is sent between
MDC1 on chassis 1 and MDC1 on chassis 2, the data would traverse the
dedicated IRF port connecting the two MDCs and not other IRF links.
This results in isolation of the data plane as the IRF link of MDC 1 will not receive
traffic from MDC 2 or other MDCs. This also applies to other MDCs.
10500/11900/12900
The version of the IRF and MDC interoperability used on the 10500, 11900 and
12900 Series Switches uses a single shared IRF link for all MDCs rather than a
dedicated IRF link per MDC.

This results in a change of packet flow between physical switches and MDCs. On a 12500 switch, a packet sent between the chassis within an MDC uses the dedicated IRF link for that MDC. There is no explicit indication of the source MDC when traffic traverses the IRF link. It is therefore important that the IRF links be correctly connected to the appropriate MDCs on both chassis switches. If an administrator accidentally cabled MDC 2 on chassis 1 to MDC 3 on chassis 2 on 12500 switches, traffic would flow between the two MDCs over that physical IRF link. VLAN 10 traffic in MDC 2 would end up as VLAN 10 traffic in MDC 3, for example. This breaks the original design principles of MDCs, as the switch fabric is now extended from one MDC to another, whereas MDCs should be separate logical switches. Each MDC should have a separate VLAN space, but in this example VLANs are shared.
IRF and MDC on 10500, 11900 and 12900 switches no longer require dedicated
links per MDC. A shared IRF link is used and MDC traffic is differentiated using an
additional tag.
Using the same example, if VLAN 10 traffic is sent from MDC 2 on chassis 1 to MDC 2 on chassis 2, an additional tag is added to the traffic across the IRF link. This allows chassis 2 to differentiate between the VLAN 10 traffic of MDC 2 and the VLAN 10 traffic of MDC 3.
The IRF port is part of the Admin MDC and direct MDC connections are no longer
supported. IRF commands are not available in non-default MDCs.
Proper bandwidth provisioning is required however, as the IRF port will now be
carrying traffic for multiple MDCs.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Configuration examples

Figure 2-35: Configuration examples

12500/12500E
Differences in the IRF approaches are reflected in the configuration commands. When configuring IRF on 12500/12500E switches, the MDC is specified in the port group command.
Even though the IRF configuration is completed using the Admin MDC, the configuration associates specific IRF interfaces with specific MDCs.
In the figure, the IRF configuration of IRF port 1/1 is shown. Interface Gigabit Ethernet 1/3/0/1 is added to the IRF port, but is associated with MDC 2. The physical interface 1/3/0/1 must therefore be assigned to MDC 2. Gigabit Ethernet 1/3/0/24 could not be used with MDC 2, for example, as it has already been associated with MDC 3 using the allocate interface command. In this example, that interface is correctly associated with MDC 3.

Note
MDC allows IRF fabrics to use 1 Gigabit Ethernet ports rather than only 10
Gigabit Ethernet ports.

10500/11900/12900
The 10500, 11900 and 12900 Series Switches no longer use the MDC keyword
when IRF is configured. The interfaces are simply bound to the IRF port (1/1 in this
example). The main difference with these switches is that all the interfaces are part
of the Admin MDC. It is no longer possible to bind interfaces associated with non-
default MDCs to the IRF port.
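
A minimal sketch of both styles, based on the figure (interface numbers are illustrative, lines beginning with # are annotations rather than commands, and on the 12500/12500E the exact placement of the mdc keyword in the port group command may vary by software release, so treat that line as an assumption):

    # 12500/12500E: dedicated IRF link per MDC, configured from the Admin MDC
    irf-port 1/1
     port group interface GigabitEthernet1/3/0/1 mdc 2
     port group interface GigabitEthernet1/3/0/24 mdc 3

    # 10500/11900/12900: single shared IRF link, interfaces belong to the Admin MDC
    irf-port 1/1
     port group interface Ten-GigabitEthernet1/0/0/5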


More MDC and IRF configuration information

Figure 2-36: More MDC and IRF configuration information

Because of port groups and ASIC limitations, it may not be possible to assign
individual interfaces to IRF ports. Multiple physical interfaces may need to be
associated with the IRF port at the same time. Groups of four interfaces are often
associated as per the example shown in the figure.
This is similar to the behavior on 5900 switches which also require that a group of
four interfaces be configured for IRF. This doesn't mean that you have to use all
four ports for IRF to function. You could as an example only physically cable two of
the ports. But, you cannot use any of the four ports in the group for any other
function apart from IRF once the group is used for IRF.
In the figure, port TenGigabitEthernet 1/0/0/5 is added to IRF. However, an error is
displayed indicating that ports 1/0/0/5 to 1/0/0/8 need to be shut down. As the
interfaces are part of a port group, they need to be allocated for IRF use as a
group rather than individually. Once allocated, one of the interfaces could be used
for the actual IRF functionality, but the entire group needs to be activated for IRF
use (this is true for certain platforms such as the 5900 series switches but may be
different on other platforms).

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


10500/11900/12900 link failure scenario

Figure 2-37: 10500/11900/12900 link failure scenario

As per IRF best practices, multiple physical interfaces should form part of the IRF
link between switches. If one of the physical interfaces goes down, IRF continues
to use the remaining links. As long as at least one link is active between the
switches, IRF will remain active. There will be reduced bandwidth between the IRF
devices, but IRF functionality is not affected (no split brain).
However, when all physical links between the switches go down, an IRF split will
occur.
Since the Admin MDC is used for the IRF port configuration, this is also the MDC where IRF MAD needs to be configured. There is no MAD configuration in other MDCs. This also implies that the IRF MAD ports have to belong to the Admin MDC.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


12500/12500E link failure scenario

Figure 2-38: 12500/12500E link failure scenario

On the 12500/12500E switches, IRF configuration is more complicated.


There is a base IRF protocol running at the chassis level and in addition, MDCs
use the IRF physical interfaces to exchange data. The data sent by a MDC is for
that particular MDC only. As an example, an IRF link configured in MDC 1 will only
transport data between MDC1 contexts. The link between MDC 2 contexts will only
transport data for MDC 2. The links do not carry data for other MDC contexts, but
are used by the base IRF protocol.
Refer to the first scenario in the figure. If the link between MDC1 on chassis 1 and
MDC 1 on chassis 2 fails, the base IRF protocol will remain online as there are still
3 active links between chassis that can be used by the base IRF protocol.
However, the data plane connection for MDC 1 is down which results in a split for
MDC 1. In a traditional IRF system, that would result in a chassis split brain.
However, in this example by contrast, the base IRF protocol can determine that
both chassis are still online and are still connected because the 3 remaining links
are still active. The base IRF protocol running at the chassis level will trigger MDC
1 to shut down all external ports on the standby chassis, but the core IRF protocol
and other MDCs continue to operate normally.
This is in effect a split brain scenario for MDC 1, but is automatically resolved by
the base IRF protocol because the remaining links are still active and can be used
to detect the failure of the single MDC. Once again, MDC 1 is lost on the standby
chassis, but MDC 2, 3 and 4 will continue to operate normally.
In the second scenario, the IRF link that is part of MDC 2 is lost. In this example,
as per the previous example, the base IRF protocol continues to function normally.
This is because 3 out of 4 links are still up for the base IRF protocol. The data
connection for MDC 2 is down in this example, and this results in a split brain for
MDC 2. The IRF protocol will shut down the external facing interfaces of MDC 2 on
the standby chassis. All other MDCs will continue to operate normally and so will
the base IRF protocol.

Another advantage of this setup is that if the IRF link for a given MDC is restored,
the MDC is not rebooted and the ports on the slave device are restored
automatically. There is no reboot of the slave device as long as there is an IRF
connection between the switches.
A similar situation occurs in the third scenario. In this example, both MDC 1 and
MDC 2 will have the external interfaces of the standby chassis shut down because
of the split brain on those MDCs. The base IRF protocol will continue to operate as
normal as there are still two remaining links up between the chassis. MDCs 3 and
4 will also continue to operate normally.
In the last example, all links between the chassis are lost. This means that there is
no communication between the chassis IRF ports. This results in a split brain
scenario for the base IRF protocol and all MDCs. This scenario requires an
external multiple active detection method such as MAD BFD to resolve the split
brain.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


IRF-based MDC: IRF Fabric Split

Figure 2-39: IRF-based MDC: IRF Fabric Split

An IRF fabric is split when no physical IRF ports connecting the chassis are active.
This results in both chassis becoming active at the same time with the same IP
address and same MAC address. This results in multiple network issues and
requires a split brain protocol such as Multi Active Detection (MAD) to resolve. One
of the systems in the IRF fabric should shut down all external ports.
Previously in this module, we discussed the scenario of a split in a single MDC
where the standby MDC is automatically shut down. When the link recovers, the
MDC is restarted and not the entire chassis. The base kernel and other MDCs will
continue to operate normally.
However, when the entire IRF is lost like in this example, the situation is different.
When the link is recovered, the standby system will need to be rebooted when it
rejoins the fabric. This is similar to a traditional IRF system.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Multi Active Detection (MAD)

Figure 2-40: Multi Active Detection (MAD)

When all physical IRF ports between chassis go down, an additional mechanism is
required to resolve multiple active devices. In order to ensure that the split brain is
detected and resolved, configure traditional MAD BFD or MAD LACP.
MAD BFD may be the preferred MAD method as there is no dependency on any
other devices outside of the IRF fabric and MAD BFD is very fast at detecting the
split.
MAD BFD is configured at the base IRF level and is thus configured using the Admin MDC. In addition, all MAD BFD links need to be assigned to the Admin MDC.
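
A minimal sketch of MAD BFD in the Admin MDC, assuming a dedicated VLAN 100, one spare physical port per chassis, and the 192.168.100.0/24 subnet reserved for MAD (the VLAN, port, and addresses are illustrative):

    system-view
    vlan 100
    quit
    interface GigabitEthernet1/0/0/48
     port access vlan 100
     quit
    interface Vlan-interface 100
     mad bfd enable
     mad ip address 192.168.100.1 255.255.255.0 member 1
     mad ip address 192.168.100.2 255.255.255.0 member 2

As noted above, the physical ports used for the MAD links must remain in the Admin MDC.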

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Learning Activity: MDC Review

Figure 2-41: MDC Review

Write the letter of the matching descriptions in the space provided under each numbered MDC term.


MDC Terms:

1. 10500/11900/12900
   _______________

2. 12500/12500E
   _______________

3. N:1 Virtualization
   _______________

4. 1:N Virtualization
   _______________

Descriptions:

a. Image A in the figure above.
b. Image B in the figure above.
c. Image C in the figure above.
d. Image D in the figure above.
e. 4-chassis IRF
f. Shared IRF links are required for all MDCs through MDC 1.
g. 2-chassis IRF
h. Non-default MDCs are not able to implement dedicated IRF links.
i. MDC
j. IRF links are required for each MDC.


Learning Activity: Answers


10500/11900/12900 (Answers: b, f, h)
Image B in Figure 2-41 shows that 10500/11900/12900 series switches do not require IRF links for each MDC.
Shared IRF links are required for all MDCs through MDC 1.
Non-default MDCs are not able to implement dedicated IRF links.

12500/12500E (Answers: a, j)
Image A in Figure 2-41 shows that IRF links are required for each MDC.

N:1 Virtualization (Answers: c, e, g)


N:1 virtualization is shown in image C in Figure 2-41. The network virtualization
shown in the figure is the combination of multiple physical switches configured as
a single logical fabric using IRF. Distributed link aggregation could then be used to
connect multiple physical cables to the separate physical switches as a single
logical link connected to a single logical device. Multi-Chassis Link Aggregation
(MLAG) could be used for link aggregation between the IRF fabric and other
switches.

1:N Virtualization (Answers: d, i):


Image D in Figure 2-41 shows MDC on a single physical switch. In this module we
have discussed how the MDC technology provides multi tenant device contexts
where multiple virtual or logical devices are created on a single physical chassis.


Summary

Figure 2-42: Summary

In this module, you learned about Multitenant Device Context (MDC). This is a
technology that can partition a physical device or an IRF fabric into multiple logical
switches called "MDCs."
MDC features and use cases were discussed in this module, including using a single physical switch for multiple customers, which provides separation while still leveraging a single device.
The MDC architecture, supported devices and operation were discussed. Upgrade
restrictions and options were also discussed.
Lastly, support for MDC and IRF was discussed, including the differences between first-generation and second-generation switches such as the 12500 and 12900. The way IRF ports are configured and the results of link failures, including split brain scenarios, were also discussed.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Learning Check
After each module, your facilitator will lead a class discussion to capture key
insights and challenges from the module and any accompanying lab activity. To
prepare for the discussion, answer each of the questions below.
1. An administrator has configured two customer MDCs (MDC 2 and MDC 3) on
a core 12500 switch. What should an administrator configure to allow traffic
between the two MDCs?
a. Create routed ports in each MDC and configure inter-VLAN routing
between the MDCs.
b. Configure VRFs in each MDC and enable route leaking between the
VRFs.
c. Connect a physical cable from a port in MDC 2 to a port in MDC 3 and
then configure the ports to be in the same VLAN on each MDC.
d. Configure routing between MDC 1 and the customer MDCs. Traffic
between customer MDCs must be sent via the Admin MDC.
2. A network administrator has taken delivery of a new HP 12900 switch. How
many MDCs exist when the switch is booted?
a. Zero
b. One
c. Two
d. Four
e. Nine
3. How are interfaces allocated to MDCs?
a. By individual interface
b. By interface group
c. By interface port
d. By MDC number
4. Which device requires separate IRF ports per MDC?
a. 10500
b. 12900
c. 11900
d. 12500


5. A 12500 switch is configured with 4 IRF ports, each of which is in a different MDC: Port 1 = MDC 1, Port 2 = MDC 2, Port 3 = MDC 3, Port 4 = MDC 4.
IRF Port 1 goes down. What is the result?
a. All MDCs go offline.
b. An IRF split occurs and MAD is required to resolve the split brain.
c. The core IRF protocol goes offline, but IRF within the MDCs continues as
normal.
d. MDC 1 goes offline, but other MDCs continue as normal. The core IRF
protocol requires MAD to resolve the split brain.
e. MDC 1 goes offline, but other MDCs continue as normal. The core IRF
protocol continues as normal.


Learning Check Answers


1. c
2. b
3. b
4. d
5. e


Lab Activity 2: Lab Topology

Figure 2-43: Lab Activity 2: Lab Topology

This lab is based on a Simware topology which will be started on a Windows server using VirtualBox.
As you have a dedicated Windows server, you can run the MDC lab independently of other students.
The Simware topology will be an isolated topology, which means that the Simware
devices will only be able to communicate with the other Simware devices.
No physical connections are configured between the Simware hosts and the
physical network lab.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Lab Activity Preview: MDC Overview

Figure 2-44: Lab Activity Preview: MDC Overview

In this lab, MDC will be configured on a chassis core device.


The chassis device has 2 Management modules and 2 Line cards.
In addition to the default Admin MDC, a user MDC will be created to provide
isolated network services.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Lab Activity 2 Debrief


Use the space below to record your key insights and challenges from Lab
Activity 2.

Debrief for Lab Activity 2


Challenges Key Insights

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Multi-CE (MCE)
Module 3

Objectives
Multi-VPN-Instance CE (MCE) enables a switch to function as a Customer Edge
(CE) device of multiple VPN instances in a BGP/MPLS VPN network, thus
reducing network equipment investment. In the remainder of this module we will
use Multi-CE or MCE when talking about Multi-VPN-Instance CE.
After completing this module, you should be able to:
• Describe MCE features
• Describe MCE use cases
• Configure MCE
• Describe and configure route leaking
• Configure isolated management access

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


MPLS L3VPN overview

Figure 3-1: MPLS L3VPN overview

MPLS L3VPN overview


MPLS L3VPN is a L3VPN technology used to interconnect geographically
dispersed VPN sites. MPLS L3VPN uses BGP to advertise VPN routes and uses
MPLS to forward VPN packets over a service provider backbone.
MPLS L3VPN provides flexible networking modes, excellent scalability, and
convenient support for MPLS QoS and MPLS TE.

Note
MPLS basics are discussed in module 3 and MPLS VPNs in other courses.
This course only covers the MCE feature without a detailed discussion of
MPLS L3VPNs.

Basic MPLS L3VPN architecture


A basic MPLS L3VPN architecture has the following types of devices:
• Customer edge device (CE device or CE) - A CE device resides on a
customer network and has one or more interfaces directly connected to a
service provider network. It does not support VPN or MPLS.
• Provider edge device (PE device or PE) - A PE device resides at the edge
of a service provider network and connects to one or more CEs. All MPLS
VPN services are processed on PEs.
• Provider device (P device or P) - A P device is a core device on a service provider network. It is not directly connected to any CE. A P device has only basic MPLS forwarding capability and does not handle VPN routing information.

CEs and PEs mark the boundary between the service providers and the
customers. A CE is usually a router. After a CE establishes adjacency with a
directly connected PE, it redistributes its VPN routes to the PE and learns remote
VPN routes from the PE. CEs and PEs use BGP/IGP to exchange routing
information. You can also configure static routes between them.

After a PE learns the VPN routing information of a CE, it uses BGP to exchange
VPN routing information with other PEs. A PE maintains routing information about
only VPNs that are directly connected, rather than all VPN routing information on
the provider network.
A P router maintains only routes to PEs. It does not need to know anything about
VPN routing information.
When VPN traffic is transmitted over the MPLS backbone, the ingress PE
functions as the ingress LSR, the egress PE functions as the egress LSR, while P
routers function as the transit LSRs.
Site
A site has the following features:
• A site is a group of IP systems with IP connectivity that does not rely on
any service provider network.
• The classification of a site depends on the topological relationship of the
devices, rather than the geographical relationships, though the devices at
a site are, in most cases, adjacent to each other geographically.
• A device at a site can belong to multiple VPNs, which means that a site
can belong to multiple VPNs.
• A site is connected to a provider network through one or more CEs. A site
can contain multiple CEs, but a CE can belong to only one site.
Sites connected to the same provider network can be classified into different sets
by policies. Only the sites in the same set can access each other through the
provider network. Such a set is called a VPN.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Terminology

Figure 3-2: Terminology

VRF / VPN Instance


VPN instances, also called virtual routing and forwarding (VRF) instances,
implement route isolation, data independence, and data security for VPNs.
A VPN instance has the following components:
• A separate Label Forwarding Information Base (LFIB).
• A separate routing table.
• Interfaces bound to the VPN instance.
• VPN instance administration information, including route distinguishers
(RDs), route targets (RTs), and route filtering policies.
To associate a site with a VPN instance, bind the VPN instance to the PE's
interface connected to the site. A site can be associated with only one VPN
instance, and different sites can associate with the same VPN instance. A VPN
instance contains the VPN membership and routing rules of associated sites.
With MPLS VPNs, routes of different VPNs are identified by VPN instances.
A PE creates and maintains a separate VPN instance for each directly connected
site. Each VPN instance contains the VPN membership and routing rules of the
corresponding site. If a user at a site belongs to multiple VPNs, the VPN instance
of the site contains information about all the VPNs.
For independence and security of VPN data, each VPN instance on a PE has a
separate routing table and a separate label forwarding information base (LFIB).
A VPN instance contains the following information: an LFIB, an IP routing table,
interfaces bound to the VPN instance, and administration information of the VPN
instance. The administration information includes the route distinguisher (RD),
route filtering policy, and member interface list.
VPN-IPv4 address
Each VPN independently manages its address space. The address spaces of
VPNs might overlap. For example, if both VPN 1 and VPN 2 use the addresses on
subnet 10.110.10.0/24, address space overlapping occurs.

BGP cannot process overlapping VPN address spaces. For example, if both VPN
1 and VPN 2 use the subnet 10.110.10.0/24 and each advertise a route destined
for the subnet, BGP selects only one of them, resulting in the loss of the other
route.
Multiprotocol BGP (MP-BGP) can solve this problem by advertising VPN-IPv4
addresses (also called VPNv4 addresses).

As shown in the above figure, a VPN-IPv4 address consists of 12 bytes. The first
eight bytes represent the RD, followed by a four-byte IPv4 prefix. The RD and the
IPv4 prefix form a unique VPN-IPv4 prefix.
An RD can be in one of the following formats:
• When the Type field is 0, the Administrator subfield occupies two bytes, the
Assigned number subfield occupies four bytes, and the RD format is 16-bit
AS number:32-bit user-defined number. For example, 100:1.
• When the Type field is 1, the Administrator subfield occupies four bytes, the
Assigned number subfield occupies two bytes, and the RD format is 32-bit
IPv4 address:16-bit user-defined number. For example, 172.1.1.1:1.
• When the Type field is 2, the Administrator subfield occupies four bytes, the
Assigned number subfield occupies two bytes, and the RD format is 32-bit
AS number:16-bit user-defined number, where the minimum value of the
AS number is 65536. For example, 65536:1.
To guarantee global uniqueness for a VPN-IPv4 address, do not set the
Administrator subfield to any private AS number or private IP address.
Route target attribute
MPLS L3VPN uses route target community attributes to control the advertisement
of VPN routing information. A VPN instance on a PE supports the following types
of route target attributes:
• Export target attribute—A PE sets the export target attribute for VPN-IPv4
routes learned from directly connected sites before advertising them to
other PEs.
• Import target attribute—A PE checks the export target attribute of VPN-
IPv4 routes received from other PEs. If the export target attribute matches
the import target attribute of a VPN instance, the PE adds the routes to the
routing table of the VPN instance.
Route target attributes define which sites can receive VPN-IPv4 routes, and from
which sites a PE can receive routes.
Like RDs, route target attributes can be one of the following formats:
• 16-bit AS number:32-bit user-defined number. For example, 100:1.
• 32-bit IPv4 address:16-bit user-defined number. For example, 172.1.1.1:1.


• 32-bit AS number:16-bit user-defined number, where the minimum value of the AS number is 65536. For example, 65536:1.

MCE / VRF-Lite
Multi-CE or VRF-Lite supports multiple VPN instances in customer edge devices.
This feature provides separate routing tables or VPNs without MPLS L3VPNs and
supports overlapping IP addresses.
Refer to the following page for more detail.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


MCE overview

Figure 3-3: MCE overview

MCE overview
BGP/MPLS VPN transmits private network data through MPLS tunnels over the
public network. However, the traditional MPLS L3VPN architecture requires that
each VPN instance use an exclusive CE to connect to a PE, as shown in Figure 3-
3.
A private network is usually divided into multiple VPNs to isolate services. To meet
these requirements, you can configure a CE for each VPN, which increases device
expense and maintenance costs. Or, you can configure multiple VPNs to use the
same CE and the same routing table, which sacrifices data security.
You can use the Multi-VPN-Instance CE (MCE) function in multi-VPN networks.
MCE allows you to bind each VPN to a VLAN interface. The MCE creates and
maintains a separate routing table for each VPN.
This separates the forwarding paths of packets of different VPNs and, in
conjunction with the PE, can correctly advertise the routes of each VPN to the peer
PE, ensuring the normal transmission of VPN packets over the public network.
As shown in Figure 3-3, the MCE device creates a routing table for each VPN. VLAN-interface 2 is bound to VPN 1 and VLAN-interface 3 is bound to VPN 2. When receiving a route, the MCE device determines the source VPN of the routing information according to the receiving interface and adds the route to the corresponding routing table. The MCE connects to PE 1 through a trunk link that permits packets tagged with VLAN 2 or VLAN 3. PE 1 determines the VPN that a received packet belongs to according to the VLAN tag of the packet and sends the packet through the corresponding tunnel.
You can configure static routes, RIP, OSPF, IS-IS, EBGP, or IBGP between an
MCE and a VPN site and between an MCE and a PE.

Note
To implement dynamic IP assignment for DHCP clients in private networks,
you can configure DHCP server or DHCP relay agent on the MCE. When the
MCE functions as the DHCP server, the IP addresses assigned to different
private networks cannot overlap.


Feature overview

Figure 3-4: Feature overview

MCE features
MCE supports the configuration of additional routing tables within a single routing device. As an analogy, this can be compared to VLANs configured on Layer 2 switches: each VLAN is a separate, isolated Layer 2 network, and each VPN instance is a separate, isolated Layer 3 network. Each VPN instance, or VRF, is a separate routing table that runs independently of the other routing tables on the device.
In Layer 2 VLANs, a Layer 2 access port belongs to a single VLAN. In the same
way, in VPN-instances, each Layer 3 routed interface belongs to a single VPN
instance.
Examples of interfaces that belong to a single VPN instance include:
• The Layer 3 interface of a VLAN. Example: interface vlan 10
• Routed ports. Example: Gigabit Ethernet 1/0/2
• Routed subinterfaces. Example: Gigabit Ethernet 1/0/2.10
• Loopback interfaces. Example: interface loopback 1
In Figure 3-4, various interfaces have been defined in separate VPN instances. As an example, Gigabit Ethernet 1/0 and Gigabit Ethernet 2/0.10 are configured in the RED VPN instance, Gigabit Ethernet 2/0.20 is configured in the GREEN VPN instance, and loopback 10 and interface VLAN 10 are configured in the BLUE VPN instance.
Each VPN instance configured by a network administrator has separate interfaces
and separate routing tables.


Supported platforms

Figure 3-5: Supported platforms

Supported products
MCE is available on almost all Comware routing devices (switches and routers).
Comware 5 fixed port switches include the 3600v2, 5500, 5800 and 5820
switches. Comware 7 fixed port switches include the 5900, 5920 and 5930
switches. Chassis based switches running either Comware 5 or Comware 7
include the 7500 (Comware 5), 10500, 11900, 12500 and 12900 switches.
Routers that support MCE include the MSR, HSR and SR series routers.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Design considerations

Figure 3-6: Design considerations

Overview
The number of VPN instances supported is hardware dependent. On software-based routers, the limit is typically determined by available memory.
On switches, the limit is typically determined by the ASICs used in the switch.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Use Case 1: Multi-tenant datacenter

Figure 3-7: Use Case 1: Multi-tenant datacenter

A number of use cases for MCE will now be discussed.


The first use case is a Multi-Tenant Data Center. This is a data center
infrastructure provided by a hosting provider offering various services to
customers.
A requirement in the environment is that each customer should have a separate
routing infrastructure isolated from other customers.
Access control lists (ACLs) could be used to separate customers, but ACLs must be configured individually, are often complex, and are prone to errors. All customers would still share the same routing table, so a misconfigured ACL could allow access between customer networks. By default, traffic would be permitted between customers; only careful ACL configuration blocks it.
MCE in contrast creates separate routing tables and thus separates customer
resources by design. No access is permitted between VPN instances by default.
Only with explicit additional configuration (route leaking) is traffic permitted
between the separate VPN instances. The MCE feature is also much simpler to
configure and maintain than traditional ACLs.
Typically, to ensure that all of these customers can access a common internet
gateway connection, MCE is combined with a virtual firewall per customer. The
firewall used would also be VPN instance aware to ensure separation.
In the figure, the RED and GREEN customers are configured in separate VPN instances and cannot communicate with each other, even though they share the same network infrastructure. Both customers can also access the Internet via the common Internet firewall.


Use Case 2: Campus with independent business units

Figure 3-8: Use Case 2: Campus with independent business units

The second use case is a campus with independent business units, or teams or
applications.
In some cases, external teams may be working at a customer site on a specific
project, but may be located throughout the campus. The owner of the
infrastructure may want to isolate the external team from the rest of the network,
but allow them to communicate across different parts of the core infrastructure.
This would create a separate isolated virtual network using the same equipment.
A second example is the use of external application monitoring. An internal ERP application may be monitored by an external supplier or partner. MCE could be used to tightly control which networks are available to the external party. Only certain internal routes would be advertised and available to the external party.
A third example of service isolation is a managed voice over IP (VoIP)
infrastructure. In this example, the entire VoIP infrastructure is managed and
configured by an external partner. The internal VoIP addressing is isolated from the
normal corporate infrastructure providing better security and separation. The
external VoIP partner can manage the VoIP network, but has no access to the rest
of the network.
A fourth example is a guest network. A network may consist of multiple locations
connected via routed links. Each location may need to provide guest connectivity,
but also use a centralized Internet connection. A remote site may be connected via
a routed WAN link to the central site and in this case, configuration of separate
VPN instances may be beneficial to provide guest network isolation across routed
networks.


Use Case 3: Overlapping IP segments

Figure 3-9: Use Case 3: Overlapping IP segments

In this third use case example, support for overlapping IP networks is required.
This may occur when companies merge and the same IP address space is used
by multiple parts of the business.
In this case each business or department is separated by VPN instances to isolate
the networks and their addressing.
If connectivity between the instances is required, a VPN instance aware firewall
could be used at the Layer 3 border between instances. This device would perform
network address translation (NAT) between the VPN instances as well as provide
firewall functionality.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Use Case 4: Isolated management network

Figure 3-10: Use Case 4: Isolated management network

A fourth use case for VPN instances is an isolated management network for network devices.
This is not required for Layer 2 switches, because these devices do not have IP addresses in the customer network. The management subnet of a Layer 2 device is by default isolated from the customer or user portion of the network, because a Layer 2 switch has only one Layer 3 IP address, which is used exclusively for device management and is configured in a separate management VLAN.
On Inter-VLAN routing devices or Layer 3 devices however, the IP interfaces of the
device are accessible by user or customer devices by design. Separation in this
case would be required. A dedicated VPN instance would be created for the
management interface of the device. Protocols such as SNMP, telnet, SSH and
other traditional networking management protocols would operate inside the
dedicated VPN-Instance and would not be accessible from the customer VPN
instances.

Note
Several HP Provision switches have OOB management ports. The Provision OOB management ports operate by default in their own IP routing space, so there is no requirement to define a new routing table for management purposes. This is in contrast with HP Comware devices, which require administrators to define a management routing table (VPN instance) for the OOB management port.
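
A minimal sketch of a dedicated management VPN instance on a Comware device is shown below. The instance name mgmt, the RD, the addressing, and the management interface name are illustrative assumptions; the name of the OOB management interface depends on the platform.

[Switch] ip vpn-instance mgmt
[Switch-vpn-instance-mgmt] route-distinguisher 65000:999
[Switch-vpn-instance-mgmt] quit
[Switch] interface M-GigabitEthernet 0/0/0
[Switch-M-GigabitEthernet0/0/0] ip binding vpn-instance mgmt
[Switch-M-GigabitEthernet0/0/0] ip address 10.0.99.10 24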


Use Case 5: Shared services in Data Center

Figure 3-11: Use Case 5: Shared services in Data Center

The last use case discussed is a shared services VPN instance in a data center.
In the first use case, VPN instances were used to separate customer networks. In this example, VPN instances are extended to provide shared services.
The types of shared services that a service provider may offer a customer include central firewall facilities, backup facilities, network monitoring, hypervisor management, and security services. All services could be provided either within a single VPN instance or by using multiple VPN instances.
Customers could continue using their own routing protocols such as OSPF within
their customer VPN instances. The shared services instances may even use
different routing protocols. Each VPN instance is still isolated and only specific
routes are permitted between the VPN instances by using route leaking.
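
As a sketch of route leaking between a customer instance and a shared services instance, the inter-instance form of the static route command (covered in configuration step 6 later in this module) can be used. The instance names and addresses below are hypothetical:

[MCE] ip route-static vpn-instance customerA 10.99.0.0 16 vpn-instance shared 10.99.0.1
[MCE] ip route-static vpn-instance shared 10.1.0.0 16 vpn-instance customerA 10.1.0.254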

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Basic configuration steps

Figure 3-12: Basic configuration steps

The following is an overview of the basic configuration steps:


1. Define a new VPN instance. This creates a new routing table, or virtual routing and forwarding (VRF) instance.
2. Configure a route distinguisher (RD). Each VPN instance is uniquely identified by an RD, an eight-byte value used to uniquely identify routes in Multiprotocol BGP (MP-BGP). Even though MP-BGP is not used by MCE, the RD must be specified.
3. Layer 3 interfaces are then bound to the VPN instance.
4. Binding an interface in step 3 removes all of its existing configuration. Any IP address or other configuration will need to be reconfigured.
5. Optionally, dynamic or static routing can be configured per VPN instance (a configuration sketch follows below).
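
The following is a minimal end-to-end sketch of these steps on a Comware device. The VPN instance name customerA, the RD, and all addresses are hypothetical examples, not values from the lab:

<MCE> system-view
[MCE] ip vpn-instance customerA
[MCE-vpn-instance-customerA] route-distinguisher 65000:1
[MCE-vpn-instance-customerA] quit
[MCE] interface Vlan-interface 10
[MCE-Vlan-interface10] ip binding vpn-instance customerA
[MCE-Vlan-interface10] ip address 10.1.10.1 24
[MCE-Vlan-interface10] quit
[MCE] ip route-static vpn-instance customerA 192.168.10.0 24 10.1.10.2
[MCE] display ip routing-table vpn-instance customerA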

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Configuration step 1: Define VPN-Instance

Figure 3-13: Configuration step 1: Define VPN-Instance

A VPN instance is a collection of the VPN membership and routing rules of its
associated site.
The first configuration step is to create a VPN instance:
Step 1: Enter system view.
  Command: system-view
Step 2: Create a VPN instance and enter VPN instance view.
  Command: ip vpn-instance vpn-instance-name
  Remarks: By default, no VPN instance is created.

Once the VPN instance has been defined, a list of VPN instances can be
displayed and the routing table of the VPN instance can be displayed.
By default, no interfaces will be bound to the VPN instance apart from internal
loopback interfaces in the 127.0.0.0 range. The display ip routing-table
vpn-instance <name> will display this.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Step 1: Define VPN-Instance (continued)

Figure 3-14: Step 1: Define VPN-Instance (continued)


NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Configuration step 2: Route Distinguisher

Figure 3-15: Configuration step 2: Route Distinguisher

The second step is to configure the route-distinguisher (RD) of the VPN instance.
BGP cannot process overlapping VPN address spaces. For example, if both VPN
1 and VPN 2 use the subnet 10.110.10.0/24 and each advertise a route destined
for the subnet, BGP selects only one of them, resulting in the loss of the other
route. Multiprotocol BGP (MP-BGP) can solve this problem by advertising VPN-
IPv4 prefixes.
MCE does not require MP-BGP, but a unique RD is still required.
To configure a Route Distinguisher and optional descriptions:
Step 1: Enter system view.
  Command: system-view
Step 2: Create a VPN instance and enter VPN instance view.
  Command: ip vpn-instance vpn-instance-name
  Remarks: By default, no VPN instance is created.
Step 3: Configure an RD for the VPN instance.
  Command: route-distinguisher route-distinguisher
  Remarks: By default, no RD is specified for a VPN instance.
Step 4: (Optional.) Configure a description for the VPN instance.
  Command: description description
  Remarks: By default, no description is configured for a VPN instance.
Step 5: (Optional.) Configure a VPN ID for the VPN instance.
  Command: vpn-id vpn-id
  Remarks: By default, no VPN ID is configured for a VPN instance.
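
For example, the following sketch (hypothetical instance names and values) creates two VPN instances with RDs in the two common formats, AS number based and IPv4 address based:

[MCE] ip vpn-instance customerA
[MCE-vpn-instance-customerA] route-distinguisher 65000:1
[MCE-vpn-instance-customerA] description Customer A routing instance
[MCE-vpn-instance-customerA] quit
[MCE] ip vpn-instance customerB
[MCE-vpn-instance-customerB] route-distinguisher 172.16.1.1:1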


Step 2: Route Distinguisher (continued)

Figure 3-16: Step 2: Route Distinguisher (continued)

The display ip vpn-instance [ instance-name vpn-instance-name ] command displays information about a specified VPN instance or about all VPN instances.

Syntax
display ip vpn-instance [ instance-name vpn-instance-name ]
instance-name vpn-instance-name
Displays information about the specified VPN instance. The vpn-instance-
name is a case-sensitive string of 1 to 31 characters. If no VPN instance is
specified, the command displays brief information about all VPN instances.

Example
Display brief information about all VPN instances.
<switch>display ip vpn-instance
Total VPN-Instances configured : 1
VPN-Instance Name RD Create time
customerA 65000:1 2014/05/13 11:03:35

Command output:
Field Description
VPN-Instance Name Name of the VPN instance.
RD RD of the VPN instance.
Create Time Time when the VPN instance was
created.


Configuration step 3.1: Define L3 Interface

Figure 3-17: Configuration step 3.1: Define L3 Interface

Optionally, additional Layer 3 routed interfaces can be defined for use in the VPN instance. This typically applies to switches, because most switches have only a single routed interface by default (interface VLAN 1). Additional Layer 3 interfaces can be created as routed ports, Layer 3 VLAN interfaces, routed subinterfaces, or loopback interfaces, as shown in the sketch below.
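
The following sketch shows how each interface type might be created on a Comware switch before being bound to the VPN instance in step 4. The interface and VLAN numbers are illustrative, and routed ports and routed subinterfaces are only available on platforms that support them.

Routed port:
[Switch] interface GigabitEthernet 1/0/2
[Switch-GigabitEthernet1/0/2] port link-mode route

Routed subinterface:
[Switch] interface GigabitEthernet 1/0/2.10
[Switch-GigabitEthernet1/0/2.10] vlan-type dot1q vid 10

Layer 3 VLAN interface:
[Switch] vlan 20
[Switch-vlan20] quit
[Switch] interface Vlan-interface 20

Loopback interface:
[Switch] interface LoopBack 1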

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Step 3: Define L3 Interface (continued)

Figure 3-18: Step 3: Define L3 Interface (continued)

Use display interface brief to display brief Ethernet interface information. In the output in the figure, multiple interface types are shown, including a routed port, a routed subinterface, a loopback interface, and a VLAN interface.

Syntax
display interface [ interface-type [ interface-number | interface-
number.subnumber ] ] brief [ description ]

interface-type
Specifies an interface type.
interface-number
Specifies an interface number.
interface-number.subnumber
Specifies a subinterface number, where interface-number is a main
interface (which must be a Layer 3 Ethernet interface) number, and
subnumber is the number of a subinterface created under the
interface. The value range for the subnumber argument is 1 to 4094.
description
Displays the full description of the specified interface. If the keyword is
not specified, the command displays at most the first 27 characters of
the interface description. If the keyword is specified, the command
displays all characters of the interface description.


Usage guidelines
If no interface type is specified, this command displays information about all
interfaces.
If an interface type is specified but no interface number or subinterface number is
specified, this command displays information about all interfaces of that type.
If both the interface type and interface number are specified, this command
displays information about the specified interface.
Examples
Display brief information about all interfaces.
<Sysname> display interface brief
The brief information of interface(s) under route mode:
Link: ADM - administratively down; Stby - standby
Protocol: (s) - spoofing
Interface Link Protocol Main IP Description
GE3/0/1 UP UP 10.1.1.2 Link to CoreRouter
GE3/0/2 Stby DOWN --
Loop0 UP UP(s) 2.2.2.9
NULL0 UP UP(s) --
Vlan1 UP DOWN --
Vlan999 UP UP 192.168.1.42

The brief information of interface(s) under bridge mode:


Link: ADM - administratively down
Speed or Duplex: (a)/A - auto; H - half; F - full
Type: A - access; T - trunk; H - hybrid
Interface Link Speed Duplex Type PVID Description
GE3/0/2 DOWN auto A A 1
GE3/0/3 UP 100M(a) F(a) A 1
GE3/0/4 DOWN auto A A 1
GE3/0/5 DOWN auto A A 1
GE3/0/6 UP 100M(a) F(a) A 1
GE3/0/7 DOWN auto A A 1
GE3/0/8 UP 100M(a) F(a) A 1
GE3/0/9 UP 100M(a) F(a) A 999


Command output:
Field: The brief information of interface(s) under route mode:
Description: Brief information about Layer 3 interfaces.
Field: Link: ADM - administratively down; Stby - standby
Description: ADM—The interface has been shut down by the network administrator. To recover its physical layer state, run the undo shutdown command. Stby—The interface is a standby interface.
Field: Protocol: (s) - spoofing
Description: If the network layer protocol of an interface is UP, but its link is an on-demand link or not present at all, this field displays UP (s), where s represents the spoofing flag. This attribute is typical of interface Null 0 and loopback interfaces.
Field: Interface
Description: Interface name.
Field: Link
Description: Physical link state of the interface: UP—The link is up. DOWN—The link is physically down. ADM—The link has been administratively shut down. To recover its physical state, run the undo shutdown command. Stby—The interface is a standby interface.
Field: Description
Description: Interface description configured by using the description command. If the description keyword is not specified in the display interface brief command, the Description field displays at most 27 characters. If the description keyword is specified, the field displays the full interface description.
Field: The brief information of interface(s) under bridge mode:
Description: Brief information about Layer 2 interfaces.
Field: Speed or Duplex: (a)/A - auto; H - half; F - full
Description: If the speed of an interface is automatically negotiated, its speed attribute includes the auto negotiation flag, indicated by the letter a in parentheses. If the duplex mode of an interface is automatically negotiated, its duplex mode attribute includes the following options: (a)/A—Auto negotiation. H—Half negotiation. F—Full negotiation.
Field: Type: A - access; T - trunk; H - hybrid
Description: Link type options for Ethernet interfaces.
Field: Speed
Description: Interface rate, in bps.
Field: Duplex
Description: Duplex mode of the interface: A—Auto negotiation. F—Full duplex. F(a)—Auto negotiated full duplex. H—Half duplex. H(a)—Auto negotiated half duplex.
Field: Type
Description: Link type of the interface: A—Access. H—Hybrid. T—Trunk.
Field: PVID
Description: Port VLAN ID.
Field: Cause
Description: Causes for the physical state of an interface to be DOWN. Not connected—No physical connection exists (possibly because the network cable is disconnected or faulty). Administratively DOWN—The port was shut down with the shutdown command. To restore the physical state of the interface, use the undo shutdown command.


Configuration step 4: Bind L3 Interface

Figure 3-19: Configuration step 4: Bind L3 Interface

By default all Layer 3 interfaces on a device are associated with the default VPN
instance (public VPN instance).
After creating and configuring a VPN instance, associate the VPN instance with
the MCE's interface connected to the site and the interface connected to the PE.
Any IP address configuration on the interface is lost and will need to be
reconfigured.
To associate a VPN instance with an interface:
Step 1: Enter system view.
  Command: system-view
Step 2: Enter interface view.
  Command: interface interface-type interface-number
Step 3: Associate a VPN instance with the interface.
  Command: ip binding vpn-instance vpn-instance-name
  Remarks: By default, no VPN instance is associated with an interface; the interface is part of the public (default) instance. The ip binding vpn-instance command deletes the IP address of the current interface. You must reconfigure an IP address for the interface after configuring the command.
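
For example (hypothetical interface and addressing), note that the IP address is configured after the binding, because the binding removes any address already present on the interface:

[MCE] interface Vlan-interface 20
[MCE-Vlan-interface20] ip binding vpn-instance customerA
[MCE-Vlan-interface20] ip address 10.1.20.1 24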


Display detailed information about a specified VPN instance.


<Sysname> display ip vpn-instance instance-name vpn1
VPN-Instance Name and ID : vpn1, 1
Create time : 2000/04/26 13:29:37
Up time : 0 days, 16 hours, 45 minutes and 21 seconds
Route Distinguisher : 10:1
Export VPN Targets : 10:1
Import VPN Targets : 10:1
Description : this is vpn1
Maximum Routes Limit : 200
Interfaces : Vlan-interface2, LoopBack0

Command output:
Field Description
VPN-Instance Name and ID Name and ID of the VPN instance
Create Time Time when the VPN instance was created
Up Time Duration the VPN instance has been up
Route Distinguisher RD of the VPN instance
Export VPN Targets Export target attribute of the VPN instance
Import VPN Targets Import target attribute of the VPN instance
Import Route Policy Import routing policy of the VPN instance
Description Description of the VPN instance
Maximum number of Routes Maximum number of routes of the VPN instance
Interfaces Interfaces bound to the VPN instance


Step 5: Configure IP address on L3 interface

Figure 3-20: Step 5: Configure IP address on L3 interface

Overview
Once the Layer 3 interface has been associated with the VPN instance, an IP address is required. Configure the IP address on the interface in the VPN instance.
The display interface brief command does not indicate VPN instance membership. To view the VPN instance membership, use the display ip vpn-instance or display ip routing-table commands.
In the example in the figure, an IP address is configured on Gigabit Ethernet 2/0, and this is shown in the output of the display ip vpn-instance instance-name vpn1 command.

IP address
Use the ip address command to assign an IPv4 address to an interface.
Use the undo ip address command to restore the default.
Syntax
ip address ip-address { mask-length | mask }
undo ip address
ip-address: Specifies an IPv4 address in dotted decimal notation.
mask-length: Specifies the length of the subnet mask, in the range of 0 to 32.
mask: Specifies the subnet mask in dotted decimal notation.
Default: No IPv4 address is configured.


display ip routing-table vpn-instance


Use the display ip routing-table vpn-instance command to display the routing
information of a VPN instance / VRF.
Syntax
display ip routing-table vpn-instance vpn-instance-name [ verbose ]
vpn-instance-name
Name of the VPN instance, a string of 1 to 31 characters.
verbose
Displays detailed information.
Example
Display the routing information of VPN instance vpn2.
<Sysname> display ip routing-table vpn-instance vpn2
Routing Tables: vpn2
Destinations : 5 Routes : 5

Destination/Mask Proto Pre Cost NextHop Interface

127.0.0.0/8 Direct 0 0 127.0.0.1 InLoop0


127.0.0.1/32 Direct 0 0 127.0.0.1 InLoop0
10.214.20.0/24 Direct 0 0 10.214.20.3 Vlan20
10.214.20.3/32 Direct 0 0 127.0.0.1 InLoop0
192.168.10.0/24 RIP 100 1 10.214.20.2 Vlan20

Command output:
Field Description
Destinations Number of destination addresses
Routes Number of routes
Destination/Mask Destination address/mask length
Proto Protocol discovering the route
Pre Preference of the route
Cost Cost of the route
NextHop Address of the next hop along the route
Interface Outbound interface for forwarding packets to the
destination segment


Configuration step 6: Configure Routing (1 of 3)

Figure 3-21: Configuration step 6: Configure Routing (1 of 3)

Overview
You can configure static routing, OSPF, EBGP, or IBGP between an MCE and a
VPN site.
Static Routes
An MCE can reach a VPN site through a static route. Static routing on a traditional
CE is globally effective and does not support address overlapping among VPNs.
An MCE supports binding a static route to a VPN instance, so that the static routes
of different VPN instances can be isolated from each other.
To configure a static route to a VPN site:
Step 1: Enter system view.
  Command: system-view
Step 2: Configure a static route for a VPN instance.
  Command: ip route-static vpn-instance s-vpn-instance-name dest-address { mask-length | mask } { interface-type interface-number [ next-hop-address ] | next-hop-address [ public ] | vpn-instance d-vpn-instance-name next-hop-address } [ permanent ] [ preference preference-value ] [ tag tag-value ] [ description description-text ]
  Remarks: By default, no static route is configured. Perform this configuration on the MCE. On the VPN site, configure a common static route.
Step 3: (Optional.) Configure the default preference for static routes.
  Command: ip route-static default-preference default-preference-value
  Remarks: The default preference is 60.
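
For example, the following sketch (hypothetical addresses) adds a network route and a default route with a non-default preference to VPN instance customerA:

[MCE] ip route-static vpn-instance customerA 192.168.10.0 24 10.1.10.2
[MCE] ip route-static vpn-instance customerA 0.0.0.0 0 10.1.10.254 preference 70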


Configuration step 6: Configure routing (2 of 3)

Figure 3-22: Configuration step 6: Configure routing (2 of 3)

Once the static routes have been defined, the routing tables for the VPN instance can be reviewed.
Network connectivity can also be tested, for example with the ping and tracert tools. These commands require the -vpn-instance <name> option to indicate the specific VPN instance; otherwise traffic is sent in the public instance.
The same applies to other commands, such as viewing the ARP cache.
ping
Use ping to verify whether the destination IP address is reachable, and display
related statistics.
To use the name of the destination host to perform the ping operation, you must
first configure the DNS on the device. Otherwise, the ping operation will fail.
To abort the ping operation during the execution of the command, press Ctrl+C.
Syntax
ping [ ip ] [ -a source-ip | -c count | -f | -h ttl | -i interface-
type interface-number | -m interval | -n | -p pad | -q | -r | -s
packet-size | -t timeout | -tos tos | -v | -vpn-instance vpn-
instance-name ] * host

ip: Supports IPv4 protocol. If this keyword is not specified, IPv4 is also
supported.
-a source-ip: Specifies the source IP address of an ICMP echo request. It
must be an IP address configured on the device. If this option is not specified,
the source IP address of an ICMP echo request is the primary IP address of
the outbound interface of the request.
-c count: Specifies the number of times that an ICMP echo request is sent.
The count argument is in the range of 1 to 4294967295. The default value is
5.

-f: Discards packets larger than the MTU of an outbound interface, which
means the ICMP echo request is not allowed to be fragmented.
-h ttl: Specifies the TTL value for an ICMP echo request. The ttl argument
is in the range of 1 to 255. The default value is 255.

-i interface-type interface-number: Specifies the ICMP echo request sending interface by its type and number. If this option is not provided, the ICMP echo request sending interface is determined by searching the routing table or forwarding table according to the destination IP address.
-m interval: Specifies the interval (in milliseconds) to send an ICMP echo
request. The interval argument is in the range of 1 to 65535. The default value
is 200.
-n: Disables domain name resolution for the host argument. If the host
argument represents the host name for the destination, and this keyword is
not specified, the device translates host into an address.
-p pad: Specifies the value of the pad field in an ICMP echo request, in
hexadecimal format. No more than 8 "pad" hexadecimal characters can be
used. The pad argument is 0 to ffffffff. If the specified value is less than 8
characters, 0s are added in front of the value to extend it to 8 characters. For
example, if pad is configured as 0x2f, then the packets are padded with
0x0000002f to make the total length of the packet meet the requirements of
the device. By default, the padded value starts from 0x01 up to 0xff, where
another round starts again if necessary, like 0x010203…feff01….
-q: Displays only statistics. If this keyword is not specified, the system
displays all information.
-r: Records routing information. If this keyword is not specified, routes are
not recorded.
-s packet-size: Specifies length (in bytes) of an ICMP echo request (not
including the IP packet header and the ICMP packet header). The packet-size
argument is in the range of 20 to 8100. The default value is 56.
-t timeout: Specifies the timeout time (in milliseconds) of an ICMP echo
reply. If the source does not receive an ICMP echo reply within the timeout, it
considers the ICMP echo reply timed out. The timeout argument is in the
range of 0 to 65535. The default value is 2000.
-tos tos: Specifies the ToS value of an ICMP echo request. The tos
argument is in the range of 0 to 255. The default value is 0.
-v: Displays non-ICMP echo replies received. If this keyword is not specified, the system does not display non-ICMP echo replies.
-vpn-instance vpn-instance-name: Specifies the MPLS L3VPN to which
the destination belongs, where the vpn-instance-name argument is a case-
sensitive string of 1 to 31 characters. If the destination is on the public
network, do not specify this option.
host: IP address or host name (a string of 1 to 20 characters) for the
destination.


Examples
Test whether the device with an IP address of 1.1.2.2 is reachable.
<Sysname> ping 1.1.2.2
PING 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break
56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=2.137 ms
56 bytes from 1.1.2.2: icmp_seq=1 ttl=254 time=2.051 ms
56 bytes from 1.1.2.2: icmp_seq=2 ttl=254 time=1.996 ms
56 bytes from 1.1.2.2: icmp_seq=3 ttl=254 time=1.963 ms
56 bytes from 1.1.2.2: icmp_seq=4 ttl=254 time=1.991 ms

--- 1.1.2.2 ping statistics ---


5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 1.963/2.028/2.137/0.062 ms

Test whether the device with an IP address of 1.1.2.2 in VPN 1 is reachable.


<Sysname> ping -vpn-instance vpn1 1.1.2.2
PING 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break
56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=2.137 ms
56 bytes from 1.1.2.2: icmp_seq=1 ttl=254 time=2.051 ms
56 bytes from 1.1.2.2: icmp_seq=2 ttl=254 time=1.996 ms
56 bytes from 1.1.2.2: icmp_seq=3 ttl=254 time=1.963 ms
56 bytes from 1.1.2.2: icmp_seq=4 ttl=254 time=1.991 ms

--- 1.1.2.2 ping statistics ---

5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss


round-trip min/avg/max/std-dev = 1.963/2.028/2.137/0.062 ms

Test whether the device with an IP address of 1.1.2.2 is reachable. Only results are displayed.
<Sysname> ping -q 1.1.2.2
PING 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break

--- 1.1.2.2 ping statistics ---


5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/ std-dev = 1.962/2.196/2.665/0.244 ms


Test whether the device with an IP address of 1.1.2.2 is reachable. The route
information is displayed.
<Sysname> ping -r 1.1.2.2
PING 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break
56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=4.685 ms
RR: 1.1.2.1
1.1.2.2
1.1.1.2
1.1.1.1

56 bytes from 1.1.2.2: icmp_seq=1 ttl=254 time=4.834 ms (same route)
56 bytes from 1.1.2.2: icmp_seq=2 ttl=254 time=4.770 ms (same route)
56 bytes from 1.1.2.2: icmp_seq=3 ttl=254 time=4.812 ms (same route)
56 bytes from 1.1.2.2: icmp_seq=4 ttl=254 time=4.704 ms (same route)

--- 1.1.2.2 ping statistics ---


5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 4.685/4.761/4.834/0.058 ms
The output shows that:
• The destination is reachable.
• The route is 1.1.1.1 <-> {1.1.1.2; 1.1.2.1} <-> 1.1.2.2.

Command output:
Field: PING 1.1.2.2 (1.1.2.2): 56 data bytes, press CTRL_C to break
Description: Test whether the device with IP address 1.1.2.2 is reachable. There are 56 data bytes in each ICMP echo request. Press Ctrl+C to abort the ping operation.
Field: 56 bytes from 1.1.2.2: icmp_seq=0 ttl=254 time=4.685 ms
Description: Received ICMP echo replies from the device whose IP address is 1.1.2.2. If no echo reply is received during the timeout period, no information is displayed. bytes—Number of data bytes in the ICMP reply. icmp_seq—Packet sequence, used to determine whether a segment is lost, disordered or repeated. ttl—TTL value in the ICMP reply. time—Response time.
Field: RR:
Description: Routers through which the ICMP echo request passed. They are displayed in inversed order, which means the router with a smaller distance to the destination is displayed first.
Field: --- 1.1.2.2 ping statistics ---
Description: Statistics on data received and sent in the ping operation.
Field: 5 packet(s) transmitted
Description: Number of ICMP echo requests sent.
Field: 5 packet(s) received
Description: Number of ICMP echo replies received.
Field: 0.0% packet loss
Description: Percentage of packets not responded to the total packets sent.
Field: round-trip min/avg/max/std-dev = 4.685/4.761/4.834/0.058 ms
Description: Minimum/average/maximum/standard deviation response time, in milliseconds.


Configuration step 6: Configure routing (3 of 3)

Figure 3-23: Configuration step 6: Configure routing (3 of 3)

Use the display arp vpn-instance command to display the ARP entries for a
specific VPN. The command shows information about ARP entries including the IP
address, MAC address, VLAN ID, output interface, entry type, and aging timer.
Syntax
display arp vpn-instance vpn-instance-name [ count ]
vpn-instance-name
Specifies the name of an MPLS L3VPN, a case-sensitive string of 1 to
31 characters.
count
Displays the number of ARP entries.
Example
Display ARP entries for the VPN instance named test.
<Sysname> display arp vpn-instance test

Type: S-Static D-Dynamic M-Multiport I-Invalid

IP Address MAC Address VLAN ID Interface Aging Type


20.1.1.1 00e0-fc00-0001 N/A N/A N/A S


VPN-Instance dynamic routing - OSPF example

Figure 3-24: VPN-Instance dynamic routing - OSPF example

Overview
A separate OSPF process is required for every VPN instance. In Figure 3-24, OSPF process 1001 is configured for VPN instance customerA. When configuring the OSPF process, specify a unique process number and the VPN instance that the OSPF process is associated with.
Each OSPF process configured on a device has its own link state database and requires its own router ID, which must be an address that exists within the VPN instance.
The OSPF configuration process is very similar to traditional OSPF configuration. A loopback address is configured in the VPN instance before configuring OSPF. If no routed interfaces are available within the VPN instance, the OSPF process will not start, because no router ID can be allocated to the process.
In Figure 3-24, area 0 is configured within the OSPF process, and OSPF is enabled on all interfaces configured with IPv4 addresses in the VPN instance. A configuration sketch follows below.
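
A minimal sketch of such an OSPF configuration is shown below. The process number 1001 matches the figure; the loopback interface, router ID, and network statements are illustrative assumptions:

[MCE] interface LoopBack 1
[MCE-LoopBack1] ip binding vpn-instance customerA
[MCE-LoopBack1] ip address 10.0.0.1 32
[MCE-LoopBack1] quit
[MCE] ospf 1001 router-id 10.0.0.1 vpn-instance customerA
[MCE-ospf-1001] area 0
[MCE-ospf-1001-area-0.0.0.0] network 10.1.10.0 0.0.0.255
[MCE-ospf-1001-area-0.0.0.0] network 10.0.0.1 0.0.0.0
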
Loopback configuration
The loopback interface that provides the OSPF router ID, like any other Layer 3 interface, must be associated with the VPN instance using the ip binding vpn-instance command described in configuration step 4. Remember that the binding removes any existing IP address on the interface, so configure the loopback address after the binding.

OSPF
An OSPF process belongs to the public network or a single VPN instance. If you
create an OSPF process without binding it to a VPN instance, the process belongs
to the public network.
Binding OSPF processes to VPN instances ensures that routes learned populate
the correct VPN instance.
To configure OSPF between an MCE and a VPN site:
Step 1: Enter system view.
  Command: system-view
Step 2: Create an OSPF process for a VPN instance and enter OSPF view.
  Command: ospf [ process-id | router-id router-id | vpn-instance vpn-instance-name ] *
  Remarks: Perform this configuration on the MCE. On a VPN site, create a common OSPF process. An OSPF process bound to a VPN instance does not use the public network router ID configured in system view, so configure a router ID for the OSPF process. An OSPF process can belong to only one VPN instance, but one VPN instance can use multiple OSPF processes to advertise VPN routes.
Step 3: (Optional.) Configure the OSPF domain ID.
  Command: domain-id domain-id [ secondary ]
  Remarks: The default domain ID is 0. Perform this configuration on the MCE. All OSPF processes of the same VPN instance must be configured with the same OSPF domain ID to ensure correct route advertisement.
Step 4: (Optional.) Configure the type codes of OSPF extended community attributes.
  Command: ext-community-type { domain-id type-code1 | router-id type-code2 | route-type type-code3 }
  Remarks: The defaults are 0x0005 for Domain ID, 0x0107 for Router ID, and 0x0306 for Route Type.
Step 5: (Optional.) Configure the external route tag for imported VPN routes.
  Command: route-tag tag-value
Step 6: Redistribute remote site routes advertised by the PE into OSPF.
  Command: import-route protocol [ process-id | all-processes | allow-ibgp ] [ allow-direct | cost cost | route-policy route-policy-name | tag tag | type type ] *
  Remarks: By default, no routes are redistributed into OSPF.
Step 7: (Optional.) Configure OSPF to redistribute the default route.
  Command: default-route-advertise summary cost cost
  Remarks: By default, OSPF does not redistribute the default route. This command redistributes the default route in a Type-3 LSA. The MCE advertises the default route to the site.
Step 8: Create an OSPF area and enter OSPF area view.
  Command: area area-id
  Remarks: By default, no OSPF area is created.
Step 9: Enable OSPF on interfaces that are configured with subnets in the range specified by the network command.
  Command: network ip-address wildcard-mask
  Remarks: By default, an interface neither belongs to any area nor runs OSPF.


VPN-instance dynamic routing - OSPF example

Figure 3-25: VPN-instance dynamic routing - OSPF example

Overview
To view information for a specific OSPF process or VPN instance, specify the OSPF process number in the command. The vpn-instance keyword is not required, because each OSPF process is associated with a single VPN instance.
In Figure 3-25, the link state database of OSPF process 1001 is displayed, as well as the OSPF peers for the process.

display ospf lsdb


Use the display ospf lsdb command to display OSPF LSDB information. If no
OSPF process is specified, this command displays LSDB information for all OSPF
processes.
Syntax
display ospf [ process-id ] lsdb [ brief | [ { asbr | ase | network
| nssa | opaque-area | opaque-as | opaque-link | router | summary }
[ link-state-id ] ] [ originate-router advertising-router-id |
self-originate ] ]

process-id
Specifies an OSPF process by its ID in the range of 1 to 65535.
brief
Displays brief LSDB information.
asbr
Displays Type-4 LSA (ASBR Summary LSA) information in the LSDB.


ase
Displays Type-5 LSA (AS External LSA) information in the LSDB.
network
Displays Type-2 LSA (Network LSA) information in the LSDB.
nssa
Displays Type-7 LSA (NSSA External LSA) information in the LSDB.
opaque-area
Displays Type-10 LSA (Opaque-area LSA) information in the LSDB.
opaque-as
Displays Type-11 LSA (Opaque-AS LSA) information in the LSDB.
opaque-link
Displays Type-9 LSA (Opaque-link LSA) information in the LSDB.
router
Displays Type-1 LSA (Router LSA) information in the LSDB.
summary
Displays Type-3 LSA (Network Summary LSA) information in the
LSDB.
link-state-id
Specifies a Link state ID, in the IP address format.
originate-router advertising-router-id
Displays information about LSAs originated by the specified router.
self-originate
Displays information about self-originated LSAs.


Example
Display OSPF LSDB information.
<Sysname> display ospf lsdb
OSPF Process 1 with Router ID 192.168.0.1
Link State Database

Area: 0.0.0.0
Type LinkState ID AdvRouter Age Len Sequence Metric
Router 192.168.0.2 192.168.0.2 474 36 80000004 0
Router 192.168.0.1 192.168.0.1 21 36 80000009 0
Network 192.168.0.1 192.168.0.1 321 32 80000003 0
Sum-Net 192.168.1.0 192.168.0.1 321 28 80000002 1
Sum-Net 192.168.2.0 192.168.0.2 474 28 80000002 1

Area: 0.0.0.1
Type LinkState ID AdvRouter Age Len Sequence Metric
Router 192.168.0.1 192.168.0.1 21 36 80000005 0
Sum-Net 192.168.2.0 192.168.0.1 321 28 80000002 2
Sum-Net 192.168.0.0 192.168.0.1 321 28 80000002 1

Type 9 Opaque (Link-Local Scope) Database


Flags: * -Vlink interface LSA
Type LinkState ID AdvRouter Age Len Sequence Interfaces
*Opq-Link 3.0.0.0 7.2.2.1 8 14 80000001 10.1.1.2
*Opq-Link 3.0.0.0 7.2.2.2 8 14 80000001 20.1.1.2

Command output:
Field Description
Area LSDB information of the area.
Type LSA Type.
LinkState ID Link state ID.
AdvRouter Advertising router.
Age Age of LSA.
Len Length of LSA.
Sequence Sequence number of the LSA.
Metric Cost of the LSA.
*Opq-Link Opaque LSA generated by a virtual link.

Display Type-2 LSA (Network LSA) information in the LSDB.


<Sysname> display ospf 1 lsdb network

OSPF Process 1 with Router ID 192.168.1.1


Area: 0.0.0.0
Link State Database

Type : Network
LS ID : 192.168.0.2
Adv Rtr : 192.168.2.1
LS Age : 922
Len : 32
Options : E
Seq# : 80000003
Checksum : 0x8d1b
Net Mask : 255.255.255.0
Attached Router 192.168.1.1
Attached Router 192.168.2.1
Area: 0.0.0.1
Link State Database
Type : Network
LS ID : 192.168.1.2
Adv Rtr : 192.168.1.2
LS Age : 782
Len : 32
Options : NP
Seq# : 80000003
Checksum : 0x2a77
Net Mask : 255.255.255.0
Attached Router 192.168.1.1
Attached Router 192.168.1.2


Command output:
Field Description
Type LSA type.
LS ID DR IP address.
Adv Rtr Router that advertised the LSA.
LS Age LSA age time.
Len Length of LSA.
Options LSA options:
O-Opaque LSA advertisement capability.
E-AS External LSA reception capability.
EA-External extended LSA reception capability.
DC-On-demand link support.
N-NSSA external LSA support.
P-Capability of an NSSA ABR to translate Type-7
LSAs into Type-5 LSAs.
Seq# LSA sequence number.
Checksum LSA checksum.
Net Mask Network mask.
Attached Router ID of the router that established adjacency with
the DR, and ID of the DR itself.


display ospf peer


Use the display ospf peer command to display information about OSPF
neighbors.
If no OSPF process is specified, this command displays OSPF neighbor
information for all OSPF processes.
If the verbose keyword is not specified, this command displays brief OSPF
neighbor information.
If no interface is specified, this command displays the neighbor information for all
interfaces.
If no neighbor ID is specified, this command displays all neighbor information.
Syntax
display ospf [ process-id ] peer [ verbose ] [ interface-type
interface-number ] [ neighbor-id ]

process-id
Specifies an OSPF process by ID in the range of 1 to 65535.
verbose
Displays detailed neighbor information.
interface-type interface-number
Specifies an interface by its type and number.
neighbor-id
Specifies a neighbor router ID.
Example
Display detailed OSPF neighbor information.
<Sysname> display ospf peer verbose

OSPF Process 1 with Router ID 1.1.1.1


Neighbors

Area 0.0.0.0 interface 1.1.1.1(GigabitEthernet3/0/1)'s neighbors


Router ID: 1.1.1.2 Address: 1.1.1.2 GR State: Normal
State: Full Mode: Nbr is Master Priority: 1
DR: 1.1.1.2 BDR: 1.1.1.1 MTU: 0
Options is 0x02 (-|-|-|-|-|-|E|-)
Dead timer due in 33 sec
Neighbor is up for 02:03:35
Authentication Sequence: [ 0 ]
Neighbor state change count: 6


Command output:
Field Description
Area areaID interface IPAddress (InterfaceName)'s neighbors: Neighbor information of the interface in the specified area:
areaID-Area to which the neighbor belongs.
IPAddress-Interface IP address.
InterfaceName-Interface name.
Router ID Neighbor router ID.
Address Neighbor router address.
GR State GR state.
State Neighbor state:
Down-Initial state of a neighbor conversation.
Init-The router has seen a Hello packet from the
neighbor. However, the router has not
established bidirectional communication with the
neighbor (the router itself did not appear in the
neighbor's hello packet).
Attempt- Available only in an NBMA network,
Under this state, the OSPF router has not
received any information from a neighbor for a
period but can send Hello packets at a longer
interval to keep neighbor relationship.
2-Way-Communication between the two routers
is bidirectional. The router itself appears in the
neighbor's Hello packet.
Exstart-The goal of this state is to decide which
router is the master, and to decide upon the
initial Database Description (DD) sequence
number.
Exchange-The router is sending DD packets to
the neighbor, describing its entire link-state
database.
Loading-The router sends LSRs packets to the
neighbor, requesting more recent LSAs.
Full-The neighboring routers are fully adjacent.
Mode Neighbor mode for LSDB synchronization.
Priority Neighboring router priority.
DR DR on the interface's network segment.
BDR BDR on the interface's network segment.
MTU Neighboring router interface MTU.
Options LSA options:

O-Opaque LSA advertisement capability.


E-AS External LSA reception capability.
EA-External extended LSA reception capability.
DC-On-demand link support.
N-NSSA external LSA support.
P-Capability of an NSSA ABR to translate Type-
7 LSAs into Type-5 LSAs.
Dead timer due in 33 sec This dead timer will expire in 33 seconds.
Neighbor is up for 02:03:35 The neighbor has been up for 02:03:35.
Authentication Sequence Authentication sequence number.
Neighbor state change count Count of neighbor state changes.


VPN-instance dynamic routing - OSPF example

Figure 3-26: VPN-instance dynamic routing - OSPF example

Overview
Use the display ip routing-table vpn-instance command to display the
routing information of a VPN instance / VRF.
In Figure 3-26, the output of the routing table for VPN instance customerA is
shown on R1.

Syntax
display ip routing-table vpn-instance vpn-instance-name [ verbose ]
vpn-instance-name
Name of the VPN instance, a string of 1 to 31 characters.
verbose
Displays detailed information.


Example
Display the routing information of VPN instance vpn2.
<Sysname> display ip routing-table vpn-instance vpn2
Routing Tables: vpn2
Destinations : 5 Routes : 5

Destination/Mask Proto Pre Cost NextHop Interface

127.0.0.0/8 Direct 0 0 127.0.0.1 InLoop0
127.0.0.1/32 Direct 0 0 127.0.0.1 InLoop0
10.214.20.0/24 Direct 0 0 10.214.20.3 Vlan20
10.214.20.3/32 Direct 0 0 127.0.0.1 InLoop0
192.168.10.0/24 RIP 100 1 10.214.20.2 Vlan20
Command output:
• Destinations: Number of destination addresses.
• Routes: Number of routes.
• Destination/Mask: Destination address/mask length.
• Proto: Protocol discovering the route.
• Pre: Preference of the route.
• Cost: Cost of the route.
• NextHop: Address of the next hop along the route.
• Interface: Outbound interface for forwarding packets to the destination segment.


Lab Activity 3.1: Lab Topology

Figure 3-27: Lab Activity 3.1: Lab Topology

Please note that this topology only includes the connections relevant to this lab. Additional links are available between the devices, but they are not used in this lab. These links will be shut down during the first task.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Lab Activity 3.1 Preview: Configuring Basic MCE

Figure 3-28: Lab Activity 3.1 Preview: Configuring Basic MCE

In this lab, you will configure Multi-Customer CE (VRF lite).

Initially, all devices will be reset to factory defaults and basic IP configuration will be completed. VPN Instances will then be created and their isolated routing tables displayed. In the last section of this lab, you will configure IP routing within VPN Instances.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Lab Activity 3.1 Debrief


Use the space below to record your key insights and challenges from Lab
Activity 3.1

Debrief for Lab Activity 3.1


Challenges Key Insights

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


MCE: Advanced configuration

Figure 3-29: MCE: Advanced configuration

In this section, advanced MCE configuration topics are discussed, including the following:
• Routing table limits are used to ensure that the VPN-instance routing tables
do not consume all the hardware resources of the underlying platform. This
is done by limiting the number of routes permitted in a VPN instance.
• Route leaking is a VPN configuration option which allows routing between
VPN instances. VPN instances are by design isolated from each other.
However, in certain cases, routing is required between VPN instances and
routes can therefore be "leaked" between isolated routing tables. Routes of
one VPN instance can also be advertised into other VPN instances to
provide dynamic routing exchange between routing protocols in different
VPN instances.
• Management Access VPN instances are popular in data center
environments and are typically configured on core and distribution
switches. These switches are performing IP routing and IP forwarding roles
for customer VPNs. To isolate the management function of these devices
from customer networks, management protocols and management
functionality are configured within a dedicated management VPN instance.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


VPN instance routing limits

Figure 3-30: VPN instance routing limits

Overview
VPN instance routing table limits allow a network operator to restrict the number of
active routes allowed in a VPN instance.
It is recommended that VPN instance routing limits be configured on all customer
VPNs to ensure resource protection. If this is not done, a single VPN instance
could potentially consume all hardware resources.
As an example, if OSPF was configured within a customer VPN instance, the
OSPF process within the VPN instance may be learning hundreds or thousands of
routes from an external OSPF router. However, the same underlying hardware or
ASICs are being used for all OSPF processing in all VPN instances on that device.
A customer VPN instance may be able to consume a disproportionate amount of
resources, or in the worst case scenario, consume all resources on core devices.
Underlying ASIC routing table limits apply to all the VPN instances which are
defined. If a switch can support 64 thousand routes as a maximum, one VPN
instance could consume all 64 thousand routes which would mean that there are
no hardware resources available for other VPN instances. This will not only affect
that single VPN instance, but will affect all VPN instances and therefore potentially
affect all customers.
Setting limits on the number of routes permitted in a VPN instance ensures that a
sufficient number of free resources are available on core devices. This protects
both backbone routing as well as routing for other VPN instances.
By default, the number of active routes allowed for a VPN instance is not limited.
Setting the maximum number of active routes for a VPN instance can prevent a device from learning too many routes.
Two types of limits are configurable:
• Limit
• Warning threshold
The routing table limit will limit the maximum number of routes accepted by the
routing table. In Figure 3-30, this value is set to 20 which limits the routes in the
VPN instance to a maximum of 20 routes. In Figure 3-30 the warning threshold is
also set to 80 percent. When the number of routes in the VPN instance reaches 16
routes (80% of 20), SNMP traps will be generated to warn network operators that
the number of routes in the routing table is approaching the maximum. This is a
type of high-water mark alert notifying network operators before additional routes
are denied entry to the VPN instance routing table.
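
As a minimal sketch, the configuration that produces the behavior described above could look like the following (the VPN instance name customerA is taken from the warning messages below; the limit of 20 and threshold of 80 percent come from Figure 3-30):

<Sysname> system-view
[Sysname] ip vpn-instance customerA
[Sysname-vpn-instance-customerA] routing-table limit 20 80

The first value (20) is the maximum number of active routes and the second (80) is the warning threshold, expressed as a percentage of that maximum.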
Warning message examples
.%May 13 11:56:13:847 2014 HP RM/4/RM_ACRT_REACH_THRESVALUE:
Threshold value 80% of max active IPv4 routes reached in URT of customerA

.%May 13 11:56:33:426 2014 HP RM/4/RM_ROUTE_REACH_LIMIT: Max active


IPv4 routes 20 reached the limit in URT of customerA

Syntax
routing-table limit number { warn-threshold | simply-alert }
undo routing-table limit

number
Specifies the maximum number of routes. The value range depends
on the system operating mode.
warn-threshold
Specifies a warning threshold as a percentage, in the range of 1 to 100. When the percentage of the number of existing routes to the maximum number of routes exceeds the specified threshold, the system gives an alarm message but still allows new routes. If routes in the VPN instance reach the maximum, no more routes are added.
simply-alert
Specifies that when the number of routes exceeds the maximum, the system still accepts routes but generates a system log message.

Usage guidelines
A limit configured in VPN instance view applies to both the IPv4 VPN and the IPv6
VPN.
A limit configured in IPv4 VPN view or IPv6 VPN view applies to only the IPv4 VPN or
the IPv6 VPN.
IPv4/IPv6 VPN prefers the limit configured in IPv4/IPv6 VPN view over the limit
configured in VPN instance view.

Examples
Specify that VPN instance vpn1 supports up to 1000 routes, and that when routes exceed the upper limit, the system still accepts new routes but generates a system log message.

<Sysname> system-view
[Sysname] ip vpn-instance vpn1
[Sysname-vpn-instance-vpn1] route-distinguisher 100:1
[Sysname-vpn-instance-vpn1] routing-table limit 1000 simply-alert
Specify that the IPv4 VPN vpn2 supports up to 1000 routes, and that when routes exceed the upper limit, the system still accepts new routes but generates a system log message.
<Sysname> system-view
[Sysname] ip vpn-instance vpn2
[Sysname-vpn-instance-vpn2] route-distinguisher 100:2
[Sysname-vpn-instance-vpn2] ipv4-family
[Sysname-vpn-ipv4-vpn2] routing-table limit 1000 simply-alert

Specify that the IPv6 VPN vpn3 supports up to 1000 routes, and that when routes exceed the upper limit, the system still accepts new routes but generates a system log message.
<Sysname> system-view
[Sysname] ip vpn-instance vpn3
[Sysname-vpn-instance-vpn3] route-distinguisher 100:3
[Sysname-vpn-instance-vpn3] ipv6-family
[Sysname-vpn-ipv6-vpn3] routing-table limit 1000 simply-alert

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Route leaking

Figure 3-31: Route leaking

Route leaking allows for tightly controlled routed communication between VPN
instances. While the original purpose of VPN instances was to isolate
communication between instances, you may have scenarios where some routed
communication between VPN instances is required.
One of the advantages of using VPN instances is that only a limited, explicitly defined set of routes needs to be created for leaking. If no routes are manually defined, no communication is possible between VPN instances. Access control is therefore easier to implement than it would be with access control lists within a single routing table.
One use case for route leaking is to specify that certain subnets in VPN instance A are reachable from certain subnets in VPN instance B. As an example, 10.1.1.0/24 in VPN instance A is reachable from 10.2.2.0/24 in VPN instance B. Note that 10.2.2.0/24 also needs to be reachable from VPN instance A to allow for bidirectional communication.
Another scenario is a shared service within a data center configured on a subnet
such as 10.254.0.0/16. The shared subnet could provide backup services or
monitoring services to customer VPN instances. Route leaking between a VPN
instance and the public routing table is also possible for scenarios such as a
central firewall with Internet access. The firewall could be in a dedicated VPN
instance as shown in Figure 3-31 or the public routing table.

Note
When configuring route leaking, ensure that bidirectional communication is
enabled by leaking the necessary routes into both VPN instances.

Network Address Translation (NAT) is not supported on some of the Layer 3 switches discussed in this course. Therefore, overlapping IP addresses cannot be used if only those devices are used. Overlapping subnets in VPN instances can be used, but require inter-VPN instance NAT, which is only supported on routers.


Route leaking - Static route example

Figure 3-32: Route leaking - Static route example

The scenario in Figure 3-32 shows two VPN instances which require
communication between them. A shared firewall is configured in the Shared-
Internet (shared) VPN instance. The Core routing device has multiple VPN
instances configured with one interface in the CustomerA VPN instance and the
other in the Shared-Internet (shared) VPN instance. CA-R1 is unaware of any
configured VPN instances and is configured as a traditional router or switch. The
firewall is also unaware of VPN instances in this example. Several firewalls can be
configured to be VPN instance aware, but in this example, the firewall is
configured as a traditional firewall only with IPv4 addresses on the internal and
external interfaces.
The CustomerA VPN instance is configured with subnets in the 10.2.0.0/16 range
and the Shared-Internet (shared) VPN instance with subnets in the 10.3.0.0/16
range. An Internet facing router is performing NAT (not shown in the diagram).
To enable connectivity, static routes need to be configured on the Core, CA-R1
and Firewall devices.
The first static route command in Figure 3-32 is configured on the Core device and
enables connectivity to networks in the range 10.2.0.0/16 via the next hop 10.2.1.2
(CA-R1). The static route is added to the CustomerA VPN instance routing table
on the Core device. The second static route adds a default route to the shared
(Shared-Internet) VPN instance with the next hop set as the Firewall.
CA-R1 has a default route configured with the Core device being the next hop. The
Firewall needs to be configured with subnet 10.2.0.0/16 to allow for bidirectional
traffic. The next hop on the Firewall for 10.2.0.0/16 is set to the Core device.
To enable route leaking, additional static routes are then added to each VPN
instance on the Core device. A static route is added to each VPN instance on the
core device, but in this case the next hop is set to an IP address in a different VPN
instance. The first route leaking command adds a default route (0.0.0.0) to the
CustomerA VPN instance, but sets the next hop to 10.3.1.3 in the shared VPN
instance. The second route leaking command adds network 10.2.0.0/16 to the
shared VPN instance with a next hop of 10.2.1.2 in the CustomerA VPN instance.
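
The static route commands themselves appear only in Figure 3-32, so the following is an illustrative sketch consistent with the description above (the VPN instance names CustomerA and shared, and the next hops 10.2.1.2 and 10.3.1.3, are taken from the surrounding text):

[Core] ip route-static vpn-instance CustomerA 10.2.0.0 16 10.2.1.2
[Core] ip route-static vpn-instance shared 0.0.0.0 0 10.3.1.3
[Core] ip route-static vpn-instance CustomerA 0.0.0.0 0 vpn-instance shared 10.3.1.3
[Core] ip route-static vpn-instance shared 10.2.0.0 16 vpn-instance CustomerA 10.2.1.2

The first two routes provide reachability within each VPN instance; the last two are the leaked routes, each resolving its next hop in the other VPN instance.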


Syntax
ip route-static vpn-instance s-vpn-instance-name dest-address {
mask | mask-length } { next-hop-address [ public ] [ bfd control-
packet bfd-source ip-address | permanent | track track-entry-number
] | interface-type interface-number [ next-hop-address ] [ backup-
interface interface-type interface-number [ backup-nexthop backup-
nexthop-address ] [ permanent ] | bfd { control-packet | echo-
packet } | permanent ] | vpn-instance d-vpn-instance-name next-hop-
address [ bfd control-packet bfd-source ip-address | permanent |
track track-entry-number ] } [ preference preference-value ] [ tag
tag-value ] [ description description-text ]

undo ip route-static vpn-instance s-vpn-instance-name dest-address
{ mask | mask-length } [ next-hop-address [ public ] | interface-
type interface-number [ next-hop-address ] | vpn-instance d-vpn-
instance-name next-hop-address ] [ preference preference-value ]

vpn-instance s-vpn-instance-name
Specifies a source MPLS L3VPN by its name, a case-sensitive string
of 1 to 31 characters. Each VPN has its own routing table, and the
configured static route is installed in the routing tables of the specified
VPNs.
dest-address
Specifies the destination IP address of the static route, in dotted
decimal notation.
mask
Specifies the mask of the IP address, in dotted decimal notation.
mask-length
Specifies the mask length in the range of 0 to 32.
vpn-instance d-vpn-instance-name
Specifies a destination MPLS L3VPN by its name, a case-sensitive
string of 1 to 31 characters. If a destination VPN is specified, packets
will search for the output interface in the destination VPN based on the
configured next hop address.
next-hop-address
Specifies the IP address of the next hop in the destination vpn-
instance, in dotted decimal notation.
backup-interface interface-type interface-number
Specifies a backup output interface by its type and number. If the
backup output interface is an NBMA interface or broadcast interface
(such as an Ethernet interface, a virtual template interface, or a VLAN
interface), rather than a P2P interface, you must specify the backup
next hop address.
backup-nexthop backup-nexthop-address
Specifies a backup next hop address.

bfd
Enables BFD to detect reachability of the static route's next hop. When
the next hop is unreachable, the system immediately switches to the
backup route.
control-packet
Specifies the BFD control mode.
bfd-source ip-address
Specifies the source IP address of BFD packets. H3C recommends
that you specify the loopback interface address.
permanent
Specifies the route as a permanent static route. If the output interface
is down, the permanent static route is still active.
track track-entry-number
Associates the static route with a track entry specified by its number in
the range of 1 to 1024. For more information about track, see High
Availability Configuration Guide.
echo-packet
Specifies the BFD echo mode.
public
Indicates that the specified next hop address is on the public network.
interface-type interface-number
Specifies an output interface by its type and number. If the output
interface is an NBMA interface or broadcast interface (such as an
Ethernet interface, a virtual template interface, or a VLAN interface),
rather than a P2P interface, the next hop address must be specified.
preference preference-value
Specifies a preference for the static route, in the range of 1 to 255. The
default is 60.
tag tag-value
Sets a tag value for marking the static route, in the range of 1 to
4294967295. The default is 0. Tags of routes are used for route control
in routing policies. For more information about routing policies, see
Layer 3—IP Routing Configuration Guide.
description description-text
Configures a description for the static route, which comprises 1 to 60
characters, including special characters like the space, but excluding
the question mark (?).


Route leaking: Static route example

Figure 3-33: Route leaking: Static route example

The routing tables of the devices are updated with the new routes. In Figure 3-33,
the routing tables of VPN instance CustomerA and Shared-Internet (shared) are
shown on the Core device.
Both VPN instances CustomerA and Shared-Internet (shared) display the two
additional static routes previously configured.
CustomerA VPN instance contains the following static routes:
• 10.2.0.0/16 with a next hop of 10.2.1.2 (CA-R1) in the CustomerA VPN
instance. The NextHop (CA-R1) is also in the CustomerA VPN instance.
• 0.0.0.0/0 with a next hop of 10.3.1.3 (Firewall) in the shared VPN instance.
The NextHop (Firewall) is in a different VPN instance.
The Shared-Internet (shared) VPN instance contains the following static routes:
• 10.2.0.0/16 with a next hop of 10.2.1.2 (CA-R1) in the CustomerA VPN
instance. The NextHop (CA-R1) is in a different VPN instance.
• 0.0.0.0/0 with a next hop of 10.3.1.3 (Firewall) in the shared VPN instance. The NextHop (Firewall) is also in the shared VPN instance.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Route leaking: Static route example (continued)

Figure 3-34: Route leaking: Static route example (continued)

Connectivity between the CustomerA VPN instance and the Shared-Internet (shared) VPN instance can be verified by using ping, for example.
In this case the Firewall is able to ping a server with IP address 10.2.0.2 in the
CustomerA VPN instance.
Tracert shows that the path from the Firewall in the shared VPN instance traverses
the Core device (both VPN instances) to reach the server in the CustomerA VPN
instance.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Route leaking - Static route restrictions

Figure 3-35: Route leaking - Static route restrictions

There are restrictions on static route leaking. Static routes can only be configured for remote IP subnets and not for directly connected subnets. This is because a next hop IP address must be configured as part of the static route command, and that address cannot be the local device where the static route is applied.
Multiprotocol BGP (MBGP) is required for inter-VPN-instance routing of directly connected subnets. In other words, the device with interfaces in different VPN instances will need to run MBGP to advertise directly connected subnets between the VPN instances.
In the sample network in Figure 3-32, the Core device would need to run MBGP to
route traffic from CustomerA to subnet 10.3.1.0/24, or to route from the Shared-
Internet VPN instance to subnet 10.2.1.0/24.
MBGP configuration is out of the scope of this course.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Lab Activity 3.2: Lab Topology

Figure 3-36: Lab Activity 3.2: Lab Topology

Please note that this topology only includes the connections relevant to this lab. Additional links are available between the devices, but they are not used in this lab. These links were shut down in the previous lab.
This lab requires that Lab 3.1 be completed. If you did not complete Lab 3.1, follow the instructions in the lab guide to restore a saved configuration.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Lab Activity 3.2 Preview: Advanced VPN instance configuration

Figure 3-37: Lab Activity 3.2 Preview: Advanced VPN instance configuration

In this lab, advanced Multi-Customer CE features will be configured. In the first part of the lab, routing table limits will be implemented for a VPN Instance. In the second part, route leaking will be configured to support controlled IP routing between different VPN Instances.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Lab Activity 3.2 Debrief


Use the space below to record your key insights and challenges from Lab
Activity 3.2

Debrief for Lab Activity 3.2


Challenges Key Insights

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Management access VPN instance

Figure 3-38: Management access VPN instance

In this section isolated management access using a separate VPN instance will be
discussed.
Most data center switches have dedicated management Ethernet ports. On
chassis based devices, the management port is located on the management
processing unit (MPU). On fixed port devices, the out of band management port is
located either on the front or back of the device.
A management Ethernet interface uses an RJ-45 connector. It can be used to
connect a PC for software loading and system debugging, or connect to a remote
device, for example, a remote network management station, for remote system
management. It has the attributes of a common Ethernet interface, but because it
is located on the main board, it provides much faster connection speed than a
common Ethernet interface when used for operations such as software loading
and network management.
The display interface brief command displays this interface as an M-Ethernet interface. The management Ethernet interface is defined as a routed port in the configuration; therefore, these ports cannot be used for switching operations, only for routed operations.
To configure a management Ethernet interface:
1. Enter system view:
   system-view
2. Enter management Ethernet interface view:
   interface interface-type interface-number
3. (Optional.) Set the description:
   description text
   By default, the description is M-GigabitEthernet0/0/0 Interface.

Management access VPN instance

Figure 3-39: Management access VPN instance

As the management Ethernet port is a routed port, an IP address can be configured directly on the interface. Routed ports, including the management Ethernet port, can also be bound to VPN instances.
Once the management interface is bound to a specific VPN instance, management
subnets are only available within the specific VPN instance and are no longer part
of the public routing table. Any routing configuration such as default gateway or
routing protocols would need to be configured for that VPN instance.
Once IP connectivity is established in the management VPN instance,
management protocols need to be configured to use the specified VPN instance.
All management protocols use the public routing table by default.
For example, a switch is configured to use RADIUS authentication for network operators, using a server with IP address 10.1.2.100. The switch will attempt to connect to the server using the public routing table by default. Even though the RADIUS server may be reachable via the management Ethernet port, RADIUS authentication would fail: the switch does not have 10.1.2.100 in the public routing table and is therefore unable to reach the RADIUS server.
Management protocols including RADIUS need to be configured to use the correct
VPN instance instead of using the public routing table. This needs to be configured
on a per protocol basis (telnet, SSH, RADIUS etc).
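
The following is a minimal sketch of binding the management Ethernet port to a management VPN instance. The instance name mgmt, the RD value, and the IP addressing are assumptions for illustration only:

<Sysname> system-view
[Sysname] ip vpn-instance mgmt
[Sysname-vpn-instance-mgmt] route-distinguisher 100:999
[Sysname-vpn-instance-mgmt] quit
[Sysname] interface M-GigabitEthernet 0/0/0
[Sysname-M-GigabitEthernet0/0/0] ip binding vpn-instance mgmt
[Sysname-M-GigabitEthernet0/0/0] ip address 10.0.1.10 24
[Sysname-M-GigabitEthernet0/0/0] quit
[Sysname] ip route-static vpn-instance mgmt 0.0.0.0 0 10.0.1.1

Note that binding an interface to a VPN instance removes any IP address already configured on the interface, so the address is applied (or reapplied) after the ip binding vpn-instance command.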

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Management access VPN-Instance (1/2)

Figure 3-40: Management access VPN-Instance (1/2)

Overview
Figure 3-40 shows examples of SNMP, Syslog and NTP configured to use the
mgmt VPN instance instead of the public routing table.
The first command is an example of SNMP trap host configuration. Any warning or informational messages or other events displayed on the switch can be copied to an SNMP server (NMS management system) using an SNMP trap. Host 10.0.1.100 could be an IMC server or another NMS system configured to receive SNMP traps. The snmp-agent target-host command specifies options such as trap, UDP domain, and host IP address. In addition, in this example, the command specifies the VPN instance to use when sending traps. If the vpn-instance mgmt option is not specified, the switch will attempt to contact the host using the public routing table.
The second example shows the configuration for a syslog server. Once again the
VPN instance mgmt option is used to specify that syslog messages are sent based
on the VPN instance routing table rather than the public routing table.
The third example configures NTP to use the correct VPN instance to reach the
NTP server.
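
The exact command lines from Figure 3-40 are not reproduced in the text, but based on the syntax covered later in this section they would take roughly this shape (the server address 10.0.1.100 is taken from the text above; the SNMPv3 security name imc-nms is an assumption):

[Sysname] snmp-agent target-host trap address udp-domain 10.0.1.100 vpn-instance mgmt params securityname imc-nms v3
[Sysname] info-center loghost vpn-instance mgmt 10.0.1.100
[Sysname] ntp-service unicast-server 10.0.1.100 vpn-instance mgmt

In each case, omitting the vpn-instance mgmt keyword would cause the switch to fall back to the public routing table.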


SNMP Agent
The SNMP Agent sends notifications (traps and informs) to inform the NMS of
significant events, such as link state changes and user logins or logouts. Unless
otherwise stated, the trap keyword in the command line includes both traps and
informs.
Enable an SNMP notification only if necessary. SNMP notifications are memory-
intensive and may affect device performance.
To generate linkUp or linkDown notifications when the link state of an interface
changes, you must enable linkUp or linkDown notification globally by using the
snmp-agent trap enable standard [ linkdown | linkup ] * command and on the
interface by using the enable snmp trap updown command.
After you enable a notification for a module, whether the module generates
notifications also depends on the configuration of the module. For more
information, see the configuration guide for each module.
To enable SNMP traps:
1. Enter system view:
   system-view
2. Enable notifications globally:
   snmp-agent trap enable [ bgp | configuration | ospf [ authentication-failure | bad-packet | config-error | grhelper-status-change | grrestarter-status-change | if-state-change | lsa-maxage | lsa-originate | lsdb-approaching-overflow | lsdb-overflow | neighbor-state-change | nssatranslator-status-change | retransmit | virt-authentication-failure | virt-bad-packet | virt-config-error | virt-retransmit | virtgrhelper-status-change | virtif-state-change ] * | standard [ authentication | coldstart | linkdown | linkup | warmstart ] * | system ]
   By default, all the traps are enabled globally.

You can configure the SNMP agent to send notifications as traps or informs to a
host, typically an NMS, for analysis and management. Traps are less reliable and
use fewer resources than informs, because an NMS does not send an
acknowledgement when it receives a trap.


When network congestion occurs or the destination is not reachable, the SNMP
agent buffers notifications in a queue. You can configure the queue size and the
notification lifetime (the maximum time that a notification can stay in the queue). A
notification is deleted when its lifetime expires. When the notification queue is full,
the oldest notifications are automatically deleted.
You can extend standard linkUp/linkDown notifications to include interface
description and interface type, but must make sure that the NMS supports the
extended SNMP messages.
To send informs, make sure:
• The SNMP agent and the NMS use SNMPv3.
• Configure the SNMP engine ID of the NMS when you configure SNMPv3
basic settings. Also, specify the IP address of the SNMP engine when you
create the SNMPv3 user.
Configuration prerequisites
• Configure the SNMP agent with the same basic SNMP settings as the
NMS. You must configure an SNMPv3 user, a MIB view, and a remote
SNMP engine ID associated with the SNMPv3 user for notifications.
• The SNMP agent and the NMS can reach each other.
To configure the SNMP agent to send notifications to a host:
1. Enter system view:
   system-view
2. Configure a target host. Use either approach; by default, no target host is configured.
   Approach 1. Send traps to the target host:
   snmp-agent target-host trap address udp-domain { ip-address | ipv6 ipv6-address } [ udp-port port-number ] [ vpn-instance vpn-instance-name ] params securityname security-string [ v1 | v2c | v3 [ authentication | privacy ] ]
   Approach 2. Send informs to the target host:
   snmp-agent target-host inform address udp-domain { ip-address | ipv6 ipv6-address } [ udp-port port-number ] [ vpn-instance vpn-instance-name ] params securityname security-string { v2c | v3 [ authentication | privacy ] }
   The current software version does not support SNMPv1 and SNMPv2c. The v1 and v2c keywords are reserved at the CLI only for future support.


Syslog
The info-center loghost command takes effect only after information center is
enabled with the info-center enable command.
The device supports up to four log hosts.
Use info-center loghost to specify a log host and to configure output
parameters.
Use undo info-center loghost to restore the default.
Syntax
info-center loghost [ vpn-instance vpn-instance-name ] { ipv4-
address | ipv6 ipv6-address } [ port port-number ] [ facility
local-number ]
undo info-center loghost [ vpn-instance vpn-instance-name ] { ipv4-
address | ipv6 ipv6-address }

vpn-instance vpn-instance-name
Specifies an MPLS L3VPN by its name, a case-sensitive string of 1 to 31 characters. If the log host is on the public network, do not specify this option.
ipv4-address
Specifies the IPv4 address of a log host within the VPN instance.
ipv6 ipv6-address
Specifies the IPv6 address of a log host within the VPN instance.
port port-number
Specifies the port number of the log host, in the range of 1 to 65535.
The default is 514. It must be the same as the value configured on the
log host. Otherwise, the log host cannot receive system information.
facility local-number
Specifies a logging facility from local0 to local7 for the log host. The default value is local7. Logging facilities are used to mark different logging sources, and to query and filter logs.
Examples
Output logs to the log host 1.1.1.1.
<Sysname> system-view
[Sysname] info-center loghost 1.1.1.1
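
As a further sketch, a log host that is only reachable through the management VPN instance (the instance name mgmt and host 10.0.1.100 are assumptions reused from the earlier discussion) would be configured with the vpn-instance option:

[Sysname] info-center loghost vpn-instance mgmt 10.0.1.100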


NTP
When you specify an NTP server for the device, the device is synchronized to the
NTP server, but the NTP server is not synchronized to the device.
To synchronize the PE to a PE or CE in a VPN, provide vpn-instance vpn-instance-
name in your command.
If you include the vpn-instance vpn-instance-name option in the undo ntp-service
unicast-server command, the command removes the NTP server with the IP
address of ip-address in the specified VPN. If you do not include the vpn-instance
vpn-instance-name option in this command, the command removes the NTP
server with the IP address of ip-address in the public network.
Use ntp-service unicast-server to specify an NTP server for the device.
Use undo ntp-service unicast-server to remove an NTP server specified for
the device.
Syntax
ntp-service unicast-server { ip-address | server-name } [ vpn-
instance vpn-instance-name ] [ authentication-keyid keyid |
priority | source interface-type interface-number | version number
] *

undo ntp-service unicast-server { ip-address | server-name } [ vpn-


instance vpn-instance-name ]

ip-address
Specifies the IP address of the NTP server. It must be a unicast address, rather than a broadcast address, a multicast address, or the IP address of the local clock.
server-name
Specifies a host name of the NTP server, a case-insensitive string of 1
to 255 characters.
vpn-instance vpn-instance-name
Specifies the MPLS L3VPN to which the NTP server belongs, where
vpn-instance-name is a case-sensitive string of 1 to 31 characters. If
the NTP server is on the public network, do not specify this option.
authentication-keyid keyid
Specifies the key ID to be used for sending NTP messages to the NTP
server, where keyid is in the range of 1 to 4294967295. If the option is
not specified, the local device and NTP server do not authenticate
each other.
priority
Specifies this NTP server as the first choice under the same condition.


source interface-type interface-number
Specifies the source interface for NTP messages. For an NTP message the local device sends to the NTP server, the source IP address is the primary IP address of this interface. The interface-type interface-number argument represents the interface type and number.
version number
Specifies the NTP version, where number is in the range of 1 to 4. The
default value is 4.
Examples
Specify NTP server 10.1.1.1 for the device, and configure the device to run NTP
version 4.
<Sysname> system-view
[Sysname] ntp-service unicast-server 10.1.1.1 version 4
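
If the NTP server sits in the management VPN instance instead of the public network, the same command carries the vpn-instance option (the instance name mgmt and server 10.0.1.100 are assumptions consistent with the earlier examples):

[Sysname] ntp-service unicast-server 10.0.1.100 vpn-instance mgmt version 4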

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Management access VPN-Instance (2/2)

Figure 3-41: Management access VPN-Instance (2/2)

RADIUS
A RADIUS scheme specifies the RADIUS servers that the device can
communicate with. It also defines a set of parameters that the device uses to
exchange information with the RADIUS servers, including the IP addresses of the
servers, UDP port numbers, shared keys, and server types.
Switches support the defining of multiple RADIUS or TACACS schemes. A switch
could for example be configured with one RADIUS scheme for 802.1X
authentication and a different RADIUS scheme for management authentication.
The RADIUS servers referenced in each VPN instance could also be different. The
vpn-instance command is used within the radius scheme to specify which VPN
instance and RADIUS server is used for a particular scheme.
Customers could have their own RADIUS servers which they may want to use for
802.1X authentication. By configuring the relevant vpn-instance on each customer
RADIUS scheme, RADIUS packets would be sent within that VPN instance to the
relevant customer RADIUS server rather than via the public routing table.
A separate RADIUS scheme could also be configured for management
authentication within a VPN instance. Figure 3-41 shows an example of RADIUS
server configuration within the management VPN instance. This ensures that
RADIUS authentication uses the mgmt VPN instance rather than the public routing
table or a customer VPN.
Create a RADIUS scheme before performing any other RADIUS configurations.
You can configure up to 16 RADIUS schemes. A RADIUS scheme can be
referenced by multiple ISP domains.
To create a RADIUS scheme:
1. Enter system view:
   system-view
2. Enter RADIUS scheme view:
   radius scheme radius-scheme-name
3. Specify RADIUS authentication servers (configure at least one command; by default, no authentication server is specified):
   Specify the primary RADIUS authentication server:
   primary authentication { ipv4-address | ipv6 ipv6-address } [ port-number | key { cipher | simple } string | vpn-instance vpn-instance-name ] *
   Specify a secondary RADIUS authentication server:
   secondary authentication { ipv4-address | ipv6 ipv6-address } [ port-number | key { cipher | simple } string | vpn-instance vpn-instance-name ] *
   Two authentication servers in a scheme, primary or secondary, cannot have the same combination of IP address, port number, and VPN.

The VPN specified for a RADIUS scheme applies to all authentication and
accounting servers in that scheme. If a VPN is also configured for an individual
RADIUS server, the VPN specified for the RADIUS scheme does not take effect on
that server.
To specify a VPN for a scheme:
1. Enter system view:
   system-view
2. Enter RADIUS scheme view:
   radius scheme radius-scheme-name
3. Specify a VPN for the RADIUS scheme:
   vpn-instance vpn-instance-name
   By default, a RADIUS scheme belongs to the public network.
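
A sketch that ties these steps to the management VPN instance described earlier (the scheme name mgmt-radius, the server 10.0.1.100, and the shared key are assumptions):

<Sysname> system-view
[Sysname] radius scheme mgmt-radius
[Sysname-radius-mgmt-radius] primary authentication 10.0.1.100
[Sysname-radius-mgmt-radius] key authentication simple MySecretKey
[Sysname-radius-mgmt-radius] vpn-instance mgmt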

sFlow/Netflow
The second command in Figure 3-41 shows an example of sFlow configured to
use the mgmt VPN instance when communicating with sFlow collector 10.0.1.100.
To configure the sFlow agent and sFlow collection information:
1. Enter system view:
   system-view
2. (Optional.) Configure an IP address for the sFlow agent:
   sflow agent { ip ip-address | ipv6 ipv6-address }
   By default, no IP address is configured for the sFlow agent. The device periodically checks whether the sFlow agent has an IP address. If not, the device automatically selects an IPv4 address for the sFlow agent but does not save the IPv4 address in the configuration file. It is recommended that you manually configure an IP address for the sFlow agent. Only one IP address can be configured for the sFlow agent on the device, and a newly configured IP address overwrites the existing one.
3. Configure the sFlow collector information:
   sflow collector collector-id [ vpn-instance vpn-instance-name ] { ip ip-address | ipv6 ipv6-address } [ port port-number | datagram-size size | time-out seconds | description text ] *
   By default, no sFlow collector information is configured.
4. (Optional.) Specify the source IP address of sFlow packets:
   sflow source { ip ip-address | ipv6 ipv6-address } *
   By default, the source IP address is determined by routing.
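
A collector entry pointing at the management VPN instance might look like the following sketch (the collector ID 1 is an assumption; the host 10.0.1.100 and instance mgmt follow the figure discussion):

[Sysname] sflow collector 1 vpn-instance mgmt ip 10.0.1.100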

OpenFlow
The third command in Figure 3-41 shows an example of OpenFlow configured to
communicate with an OpenFlow controller (10.0.1.100) using the mgmt VPN
instance.
The number of controllers supported by an OpenFlow switch is switch-dependent. The OpenFlow channel between the OpenFlow switch and each controller can have only one main connection, and the connection must use TCP or SSL. The main connection must be reliable; it processes control messages to complete tasks such as deploying entries, obtaining data, and sending information.
To specify a controller for an OpenFlow switch and configure the main connection
to the controller:
1. Enter system view:
   system-view
2. Enter OpenFlow instance view:
   openflow instance instance-id
3. Specify a controller and configure the main connection to the controller:
   controller controller-id address { ip ip-address | ipv6 ipv6-address } [ port port-number ] [ ssl ssl-policy-name ] [ vrf vrf-name ]
   By default, an OpenFlow instance is not configured with any main connection.
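
As a sketch matching the figure discussion (the instance ID 1 is an assumption; the controller address 10.0.1.100 and the mgmt VPN instance come from the text), the main connection could be configured like this:

<Sysname> system-view
[Sysname] openflow instance 1
[Sysname-of-inst-1] controller 1 address ip 10.0.1.100 vrf mgmt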


IMC Management Access using VPN-Instance

Figure 3-42: IMC Management Access using VPN-Instance

IMC can be used for network management in conjunction with VPN instances. No additional configuration is required within IMC to support basic device management, status reporting, and SNMP polling of devices. IMC will simply try to reach the device via the configured IP address, and the device will respond from that IP address. IMC is unaware that the device's management address has been configured within a VPN instance.
However, when using IMC device discovery, the option "Automatically register to
receive SNMP traps from supported devices" needs to be unchecked (turned off).
This IMC option is not VPN instance aware and will configure the devices to send
SNMP traps to the IMC server using the public routing table instead of the correct
VPN instance.
Figure 3-42 shows an example of the configured target-host command as
configured by IMC. No VPN instance has been configured and traps will not reach
the IMC server at IP address 10.0.1.100 because the public routing table is used
instead of the VPN instance.
Ensure that the IMC option is unchecked and that the correct command is
manually configured on the device with the correct VPN instance.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


IMC Management Access using VPN-Instance

Figure 3-43: IMC Management Access using VPN-Instance

One IMC feature that requires additional configuration when working with VPN instances is the Intelligent Configuration Center.
The Intelligent Configuration Center is part of the basic IMC platform and provides automated deployment of configurations as well as configuration backup and restore options.
Backups of device configurations require additional setup when used with VPN instances. Failing to configure this will result in backups failing, as can be seen in the following screenshot:

The reason for the failure is that IMC is unaware of the VPN instance by default.
IMC will instruct the network device to backup the configuration to the IMC server
running a local TFTP server. IMC will use SNMP set commands to initiate the
TFTP backup from the device. Included in the SNMP set messages are the backup
filename to be used and the TFTP server IP address. IMC does not however
specify any VPN instance by default.
Since IMC has only included the TFTP server address and filename in the SNMP
set messages, when the device initiates the TFTP backup, it will use the public
routing table and thus the backup will fail (the IMC server is not reachable from the
public routing table).


IMC Management Access using VPN-Instance

Figure 3-44: IMC Management Access using VPN-Instance

As discussed on the previous page, the TFTP upload from the network device
needs to use the management VPN instance rather than the public routing table.
IMC can be configured to include the required VPN instance name when
instructing a device to backup its configuration. The SNMP set instructions sent to
the device will thus include the VPN instance in addition to the filename and TFTP
server IP address.
This option is configured by selecting the following options:
1. Configuration Center menu
2. Options menu
3. VPN instance tab
4. For each device, selecting the VPN Instance Name to use for that device.
When IMC instructs the device to initiate a backup, the SNMP set instruction will
include the specified VPN instance name.

Note
The specified VPN instance must be defined on the network device.

Output of a successful backup:


IMC Management Access using VPN-Instance

Figure 3-45: IMC Management Access using VPN-Instance

Overview
A core network device may have multiple interfaces configured with IP addresses
in multiple VPN instances. Any one of these IP addresses could be used for the
management of the device and this includes IP addresses configured within
customer VPN instances. That means that a customer may attempt to telnet to a
core device or use snmp to configure the device. For security reasons, it is
undesirable to permit any management access to core network devices from
customer VPN instances.
Access control lists (ACLs) can be configured to only allow management access
from specific VPN instances such as the management VPN instance. The vpn-
instance option is available on Comware ACLs to only allow access from
specified VPN instances.
In Figure 3-45, an access list 2001 is configured with an entry that only permits the IMC host (IP address 10.0.1.100) configured in the mgmt VPN instance. The access list is then bound to various management protocols such as Telnet, SSH, HTTP, and others. The management protocols are therefore restricted to only allow access from host 10.0.1.100 in the mgmt VPN instance.
The ACL should be applied to all management protocols on the device to ensure customers are not able to connect to the device.
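
A sketch of the configuration just described (the exact rule numbering and the full list of protected protocols in Figure 3-45 may differ):

<Sysname> system-view
[Sysname] acl number 2001
[Sysname-acl-basic-2001] rule permit source 10.0.1.100 0 vpn-instance mgmt
[Sysname-acl-basic-2001] quit
[Sysname] telnet server acl 2001
[Sysname] ssh server acl 2001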

Note
Comware device ACLs have an implicit permit by default when used as packet filters. However, in this example, the ACL is used to limit management protocols, and in this case the default action is deny. This is the opposite of the behavior of packet filters that filter traffic passing through the device.


ACLs
An access control list (ACL) is a set of rules (or permit or deny statements) for
identifying traffic based on criteria such as source IP address, destination IP
address, and port number.
ACL categories:
• Basic ACLs (2000 to 2999):
  IPv4 - Source IPv4 address.
  IPv6 - Source IPv6 address.
• Extended ACLs (3000 to 3999):
  IPv4 - Source IPv4 address, destination IPv4 address, packet priority, protocols over IPv4, and other Layer 3 and Layer 4 header fields.
  IPv6 - Source IPv6 address, destination IPv6 address, packet priority, protocols over IPv6, and other Layer 3 and Layer 4 header fields.
• Ethernet frame header ACLs (4000 to 4999):
  IPv4 and IPv6 - Layer 2 header fields, such as source and destination MAC addresses, 802.1p priority, and link layer protocol type.
• User-defined ACLs (5000 to 5999):
  IPv4 and IPv6 - User-specified matching patterns in protocol headers.

Each ACL category has a unique range of ACL numbers. When creating an ACL,
you must assign it a number. In addition, you can assign the ACL a name for ease
of identification. After creating an ACL with a name, you cannot rename it or delete
its name.
For an IPv4 basic or advanced ACL, its ACL number and name must be unique in IPv4. For an IPv6 basic or advanced ACL, its ACL number and name must be unique in IPv6.
The rules in an ACL are sorted in a specific order. When a packet matches a rule,
the device stops the match process and performs the action defined in the rule. If
an ACL contains overlapping or conflicting rules, the matching result and action to
take depend on the rule order.


The following ACL match orders are available:


• config—Sorts ACL rules in ascending order of rule ID. A rule with a lower ID
is matched before a rule with a higher ID. If you use this approach,
carefully check the rules and their order.
• auto—Sorts ACL rules in depth-first order. Depth-first ordering makes sure
any subset of a rule is always matched before the rule. Table 1 lists the
sequence of tie breakers that depth-first ordering uses to sort rules for each
type of ACL.
The match order of user-defined ACLs can only be config.
Sequence of tie breakers used to sort ACL rules in depth-first order:
• IPv4 basic ACL:
  1. VPN instance
  2. More 0s in the source IP address wildcard (more 0s means a narrower IP address range)
  3. Rule configured earlier
• IPv4 advanced ACL:
  1. VPN instance
  2. Specific protocol type rather than IP (IP represents any protocol over IP)
  3. More 0s in the source IP address wildcard mask
  4. More 0s in the destination IP address wildcard
  5. Narrower TCP/UDP service port number range
  6. Rule configured earlier

A wildcard mask, also called an inverse mask, is a 32-bit binary number represented in dotted decimal notation. In contrast to a network mask, the 0 bits in a wildcard mask represent "do care" bits, and the 1 bits represent "don't care" bits. If the "do care" bits in an IP address are identical to the "do care" bits in an IP address criterion, the IP address matches the criterion. All "don't care" bits are ignored. The 0s and 1s in a wildcard mask can be noncontiguous. For example, 0.255.0.255 is a valid wildcard mask.


Telnet server acl


Use telnet server acl to apply an ACL to filter Telnet logins.
Use undo telnet server acl to restore the default.
Only one ACL can be used to filter Telnet logins, and only users permitted by the
ACL can Telnet to the device. This command does not take effect on existing
Telnet connections. You can specify an ACL that has not been created yet in this
command. The command takes effect after the ACL is created.

Syntax
telnet server acl acl-number
undo telnet server acl

acl-number
Specifies an ACL by its number:
• Basic ACL—2000 to 2999.
• Advanced ACL—3000 to 3999.
• Ethernet frame header ACL—4000 to 4999.

Examples
Permit only the user at 1.1.1.1 to Telnet to the device.
<Sysname> system-view
[Sysname] acl number 2001
[Sysname-acl-basic-2001] rule permit source 1.1.1.1 0
[Sysname-acl-basic-2001] quit
[Sysname] telnet server acl 2001

Learning Activity: MCE Review


Write the letter of the descriptions in the right-hand column in the space provided
under each numbered MCE term in the left-hand column.

MCE Terms:

1. Route table limits    _______   _______

2. Route leaking    _______   _______

3. RD    _______   _______

4. Site    _______   _______

5. VPN Instance    _______   _______

Descriptions:

a. A VPN configuration option which allows routing between VPN instances.
b. Added before a site ID to distinguish the sites that have the same site ID but reside in different VPNs.
c. A separate Label Forwarding Information Base (LFIB).
d. A group of IP systems with IP connectivity that does not rely on any service provider network.
e. Used to limit the number of routes permitted in a VPN instance.
f. Connected to a provider network through one or more CEs. May contain multiple CEs.
g. VPN instances are by design isolated from each other. Routes of one VPN instance can also be advertised into other VPN instances to provide dynamic routing exchange between routing protocols in different VPN instances.
h. Used to ensure that the VPN-instance routing tables do not consume all the hardware resources of the underlying platform.
i. A separate routing table.

Answers
Route table limits (Answers: e, h)
Routing table limits are used to ensure that the VPN-instance routing tables do not
consume all the hardware resources of the underlying platform. This is done by
limiting the number of routes permitted in a VPN instance.

Route leaking (Answers: a, g)


Route leaking is a VPN configuration option which allows routing between VPN
instances. VPN instances are by design isolated from each other. However, in
certain cases, routing is required between VPN instances and routes can therefore
be "leaked" between isolated routing tables. Routes of one VPN instance can also
be advertised into other VPN instances to provide dynamic routing exchange
between routing protocols in different VPN instances.

Route distinguisher (Answer: b)


An RD is added before a site ID to distinguish the sites that have the same site ID
but reside in different VPNs. An RD and a site ID uniquely identify a VPN site.

Site (Answers: d, f)


A site has the following features:
• A site is a group of IP systems with IP connectivity that does not rely on
any service provider network.
• The classification of a site depends on the topological relationship of the
devices, rather than the geographical relationships, though the devices at
a site are, in most cases, adjacent to each other geographically.
• A device at a site can belong to multiple VPNs, which means that a site
can belong to multiple VPNs.
• A site is connected to a provider network through one or more CEs. A site
can contain multiple CEs, but a CE can belong to only one site.

VPN Instance (Answers: c, i)


A VPN instance has the following components:
• A separate Label Forwarding Information Base (LFIB).
• A separate routing table.
• Interfaces bound to the VPN instance.
• VPN instance administration information, including route distinguishers
(RDs), route targets (RTs), and route filtering policies.

Optional lab activity 3.3: Lab Topology

Figure 3-46: Optional lab activity 3.3: Lab Topology

The HP 5900 Series Switches have a dedicated Management Ethernet


connection, which operates as a routed port. This Management Ethernet
connection will be bound to a management VPN Instance.
The HP 5800 Series Switches in this lab do not have a dedicated Management
Ethernet connection. On this model, another interface will be converted to a routed
port, which will be bound to the Management VPN Instance.
The Management Ethernet connections of the 2x5900s and 2x5800s will be
connected to the HP 3800-1 (Host1) switch in a dedicated management VLAN.
The HP 3800-1 switch of each student POD is connected to a classroom
Management switch.
This will allow the setup of a single Management network for all HP 5900 and HP
5800 switches used by all students in the classroom.
The central instructor IMC server is connected to this central Management switch.
Each student POD also has a Management PC which is connected to the HP
3800-1.
Using this Management PC, each student POD can manage the local POD
devices through telnet/ssh and each student POD will be able to access the
common IMC system.
The central common IMC server will be able to manage all devices on their
Management Ethernet interfaces.
Please note that this topology only includes the connections relevant to this lab.
Additional links are available between the devices, but they are not used in this
lab. These links were shut down in the previous lab.

Lab Activity 3.3 Preview: Management VPN-Instance

Figure 3-47: Lab Activity 3.3 Preview: Management VPN-Instance

In this lab, a VPN Instance will be created for device management purposes. This
VPN Instance will be used during the rest of the course labs to manage the
devices. The Management VPN Instance will connect to the common Classroom
management network, which also hosts the central IMC server.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Lab Activity 3.3 Debrief


Use the space below to record your key insights and challenges from Lab
Activity 3.3

Debrief for Lab Activity 3.3


Challenges Key Insights

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Summary

Figure 3-48: Summary

In this module, you learned about Multi-CE (MCE). MCE enables a switch to
function as a Customer Edge (CE) device of multiple VPN instances in a
BGP/MPLS VPN network, thus reducing network equipment investment.
You learned about MCE features and supported platforms. MCE use cases such
as multi-tenant datacenters, overlapping IP subnets, isolated management networks,
and others were discussed.
You learned the basic configuration steps for configuring MCE, including:
i. Define a new VPN instance.
ii. Set the Route Distinguisher (RD).
iii. Bind an L3 interface to the VPN instance.
iv. Configure the L3 interface IP address.
v. Optionally, configure L3 dynamic or static routing.
Advanced MCE configuration options were also discussed:
• Routing table limits
• Route leaking (both static and dynamic)
• Management access VPN instances

Learning Check
After each module, your facilitator will lead a class discussion to capture key
insights and challenges from the module and any accompanying lab activity. To
prepare for the discussion, answer each of the questions below.
1. Is MBGP required to implement MCE on CE devices?
a. Yes, as route leaking requires MBGP.
b. Yes, otherwise routes are not advertised to PE devices.
c. No, MCE does not require MBGP except when routing for directly
connected subnets in different VPN instances.
d. No, MCE only uses static routing to route between subnets in different
VPN instances.
2. Which components are part of a VPN instance (Choose four)?
a. Separate LFIB
b. Global routing table
c. VPNv4 routes
d. Public routing table
e. Separate routing table
f. Interfaces bound to the VPN instance
g. RD
3. Which interface type cannot be allocated to a VPN instance?
a. Layer 3 VLAN interfaces
b. Routed ports
c. Loopback interfaces
d. Layer 2 VLAN interfaces
e. Routed subinterfaces
4. An administrator has configured IMC in a VPN instance with the name
"management". IMC is not receiving SNMP trap messages. How does the
administrator resolve this?
a. Move the IMC server out of the VPN instance as this is an unsupported
setup.
b. Ensure that the "Automatically register to receive SNMP traps from
supported devices" option is checked within IMC.
c. Configure and select the VPN instance within the IMC GUI interface.
d. Manually configure the SNMP target host for traps on the network device.

Learning Check Answers


1. c
2. a, e, f, g
3. d
4. d

DCB Datacenter Bridging
Module 4

Objectives
Using separate, single-purpose networks for data and storage can increase
complexity and cost, as compared to a converged network solution. Datacenter
Bridging (DCB) is a technology that enables the consolidation of IP-based LAN
traffic and block-based storage traffic onto a single converged Ethernet network.
This can help to eliminate the need to build separate infrastructures for LAN
systems that carry typical end-user data traffic, and SAN systems that carry
storage-specific communications.
You will learn about the individual standards-based protocols that enable DCB, and
how they enable communication between devices, provide for lossless Ethernet
transmissions, handle flow control, and support Quality of Service (QoS).

After completing this module, you should be able to:


 Describe the DCB Protocols
 Understand the DCBX protocol
 Understand and configure PFC
 Understand and configure ETS
 Understand and configure APP
 Understand Congestion Notification
 Describe datacenter use cases for CEE

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

DCB Topics

Figure 4-1: DCB Topics

This module will introduce the concepts related to DCB, and review DCB
configuration parameters. Priority-based Flow Control (PFC) will be explored,
along with PFC configuration.
The operation and configuration of the Application TLV (APP) will be discussed,
along with an overview of ETS. You will also learn how to configure ETS.

Datacenter Bridging – Introduction

Figure 4-2: Datacenter Bridging – Introduction

Data Center Bridging consists of a collection of standards that extend the
functionality of Ethernet. Various vendors have used different acronyms, such as
CEE, when discussing or promoting their own DCB-based solutions. However, the
IEEE standards group uses the term DCB to describe the suite of technologies
that enable FCoE to send Fibre Channel communications over Ethernet systems.
The motivation for DCB is to reduce the cost and complexity of running separate,
single-purpose networks for SANs and LANs. Consolidating the data center
infrastructure reduces the number of physical components, along with the
associated rack space, power, and cooling costs.
DCB offers advantages over previous technologies such as Fibre Channel
Protocol (FCP), InfiniBand (IB), and iSCSI, as described below.

DCB vs Previous Technologies

Figure 4-3: DCB vs Previous Technologies

Fibre Channel Protocol (FCP) is a lightweight mapping of SCSI onto the Fibre
Channel transport protocol. Fibre Channel can carry FCP and IP traffic to create a
converged network. However, the cost of FC prevented widespread use, except for
large data center SANs.
InfiniBand (IB) provides for a converged network using SCSI Remote Direct
Memory Access Protocol (SRP) or iSCSI Extensions for RDMA (iSER).
Widespread deployment was also limited due to cost, and the complex gateway
and routers needed to translate from IB to native FC storage devices.
Internet SCSI (iSCSI) provides a direct SCSI to TCP/IP mapping layer. Due to its
lower cost, iSCSI can appeal to small-medium sized deployments. However,
scaling the systems requires more complexity and cost in the form of iSCSI to FC
gateways, and so this solution is often avoided by larger enterprises.
FC over IP (FCIP) and FC Protocol (FCP) can map FC characteristics to LANs and
WANs. Again, these protocols were not widely adopted due to complexity, lack of
scalability, and cost.
Now that 10GbE is becoming more widespread, Fibre Channel over Ethernet
(FCoE) is the next attempt to converge block storage protocols onto Ethernet.
FCoE embeds FC frames within Ethernet frames, and relies on the Ethernet
infrastructure that has been enhanced by implementing IEEE Data Center Bridging
(DCB) standards. The individual protocols and components that enable FCoE
traffic to be supported over Ethernet are described below.

DCB Components

Figure 4-4: DCB Components

The standards-based protocols and components of DCB are introduced below.


 DCBX: The Data Center Bridging eXchange protocol is used to communicate
key parameters between DCB-capable devices. The information exchanged is
largely centered on PFC, APP, and ETS functionality.
 PFC: Priority-based Flow Control helps to ensure that Ethernet can provide
the lossless frame delivery that FCoE requires.
 APP: The Application TLV provides instructions to the CNA about application-to-CoS mapping.
 ETS: Enhanced Transmission Selection enables control over how much
bandwidth LAN, SAN, and other traffic types can use over a converged
Ethernet link.
 CNA: A Converged Network Adapter can support both Fibre Channel and
traditional LAN communications on a single interface.
 CN: Congestion Notification supports end-to-end flow control in an attempt to
localize the effects of congestion to the device that is causing it.

DCB Feature Overview

Figure 4-5: DCB Feature Overview

DCBX enables devices to discover peers, detect configuration parameters, and
configure peer CNAs. It is an extension to LLDP, adding new type-length-values
(TLVs) that enable the exchange of PFC, APP, and ETS information.
In the figure, the server access switch sends DCBX frames to automatically
configure the server’s CNA. The APP TLV is used to inform the CNA that FCoE
frames are to be marked with an 802.1p value of 3.
ETS controls bandwidth utilization. In this example, the FCoE traffic (802.1p = 3)
shall be mapped to ETS queue 1, and have 60% of the bandwidth reserved
(during times of congestion). All other traffic (802.1p = 0-2, 4-7) shall be mapped to
ETS queue 0, and have access to 40% of the bandwidth.
PFC is an enhancement to the Ethernet Pause feature, which uses Pause frames
to pause all traffic on a link. PFC logically divides a single physical link
into multiple virtual links, reserving one such link for FCoE. The pause
mechanism can then be applied only to the traffic class specified as no-drop, while
other traffic classes continue to flow, ensuring that FCoE frames are not dropped
during a short-lived burst of traffic.
As shown in the figure, this PFC information is also passed (via DCBX) between
the server access switch and the storage switch. This ensures that the lossless
frame requirement for FCoE is enforced all the way between the Server CNA and
target SAN.

4-6 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
DCB Datacenter Bridging

DCB - Supported Products

Figure 4-6: Supported Products

The features introduced thus far are available on all HP datacenter switches
running Comware 7, including both fixed configuration access switches and
chassis-based core switches.
Access switches
At the access layer, the 5900 Switch Series supports DCB-compliant features. This
also includes the HP 5920 and 5930 switches.
Core switches
Chassis-based switches suitable for deployment at the datacenter core also
support DCB. This includes the 11900/12500/12900 Switch Series.
Full HP Supported configuration limited to select products
HP has gone to great lengths to fully ensure that various product combinations
support the features you need. HP has created the Single Point of Connectivity
Knowledge (SPOCK) as the primary portal for detailed information about HP
storage products. It is highly recommended that you consult SPOCK to ensure that
you are deploying systems that have been fully tested by HP. The current URL for
SPOCK is http://h20272.www2.hp.com/.

NOTES

_______________________________________________________________

_______________________________________________________________

Design Considerations

Figure 4-7: Design Considerations

Migration from traditional storage to FCoE-based systems can be gradual. Deploy
FCoE first at the server-to-network edge, then migrate further into the
aggregation/core layers and storage devices over time. Transitioning the server-
to-network edge first to accommodate FCoE/DCB maintains the existing network
architecture, management roles, and the existing SAN and LAN topologies. This
approach offers the greatest benefit and simplification without disrupting the data
center architecture.
You should also consider implementing FCoE only with those servers requiring
access to FC SAN targets. Most data center assets only need a LAN connection,
as opposed to both LAN and SAN connections. You should use CNAs only with
the servers that actually benefit from them. Don’t needlessly change the entire
infrastructure.
ProLiant c-Class BladeSystem G7 and later blade servers come with HP
FlexFabric adapters (HP CNAs) as the standard LAN-on-Motherboard (LOM)
devices. This provides a very cost effective adoption of FCoE technology.
FlexFabric modules eliminate up to 95% of network sprawl at the server edge. One
device converges traffic inside enclosures and directly connects to LANs and
SANs.
During design and implementation, remember that the standard Ethernet maximum
frame size is 1518 bytes, while FCoE frames can be up to 2240 bytes. This
so-called baby jumbo frame size must be supported on all devices between
FCoE-capable servers and storage systems.
Also, FCoE uses specific MAC addresses, as listed in the figure. You must ensure
that these MACs are not blocked.
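As a hedged example only (the interface number and frame size are illustrative assumptions; verify the exact jumbo frame command and the supported sizes for your platform), baby jumbo support might be enabled on a Comware interface as follows:

[Sysname] interface Ten-GigabitEthernet 1/0/49
[Sysname-Ten-GigabitEthernet1/0/49] jumboframe enable 2240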

DCBX – Data Center Bridging eXchange

Figure 4-8: DCBX – Data Center Bridging eXchange

DCBX is an extension to LLDP that facilitates connectivity between DCB-enabled


devices. As defined in the IEEE 802.1Qaz standard, DCBX accomplishes this by
adding TLVs to LLDP. This can be compared to LLDP-MED, which defines
extensions to LLDP to facilitate connectivity between Voice-over-IP (VoIP)
endpoints and switches.
Version 1.00 was the initial public version of DCBX. The main goal for version 1.00
was to enable automatic, priority-based flow control mapping. This included limited
support for L2 marking, allowing the switch to inform the CNA that FCoE frames
should be placed into a specific queue.
Version 1.01 enhances this marking capability. In addition to the L2-based
classification, L4-based classification is also supported. Thus, TCP and UDP ports
could be used as a basis for classification. This is critical for iSCSI, which is a
TCP-based storage protocol.
After v1.01, more refinements were made, and the IEEE 802.1Qaz standard was
ratified. A key enhancement concerns the CNA's operational settings. In previous
versions, a switch announced information toward the CNA, and the CNA only
announced its original settings back to the switch. This could make troubleshooting
difficult, since there is no definitive method to ensure that the settings were
actually accepted by the CNA. With the 802.1Qaz standard version, the LLDP
output reveals both the recommended settings, as announced by the switch, and
the operational settings actually in use on the CNA.
IEEE 802.1Qaz is the default version enabled in HP datacenter switches. While
you can manually configure the version in use, it is a best practice to allow the
switch to automatically detect which version to use. The switch will detect whether
an attached CNA only supports v1.00 or v1.01, and will adjust accordingly.

Configuration Steps for DCBX

Figure 4-9: Configuration Steps for DCBX

The figure introduces the steps to configure DCBX on HP Datacenter switches


running Comware 7.
1. Enable global LLDP
2. Enable the DCBX TLVs for LLDP on the interface
3. Verify
These steps will be detailed in the pages that follow.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

DCBX Step 1: Enable Global LLDP

Figure 4-10: DCBX Step 1: Enable Global LLDP

The first step to enabling DCBX is to ensure that LLDP is enabled globally on the
device. With many Comware-based devices, LLDP is globally enabled by default,
but some Comware-based devices require the command shown. The easiest
approach is simply to issue the command, and verify that the feature is enabled
with the “display lldp status” command.
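The following is a minimal sketch of this step, assuming Comware 7 syntax (it is not a copy of the figure):

<Sysname> system-view
[Sysname] lldp global enable
[Sysname] display lldp status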

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

DCBX Step 2: Enable Interface LLDP DCBX TLVs

Figure 4-11: DCBX Step 2: Enable Interface LLDP DCBX TLVs

The next step involves enabling the DCBX-specific TLVs on the interface to which
a CNA is attached. Assuming LLDP has been enabled globally, LLDP is enabled
by default at the interface level. However, the TLVs for DCBX must be manually
enabled.
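A minimal sketch, assuming Comware 7 syntax and an illustrative port number (the figure may use a different interface):

[Sysname] interface Ten-GigabitEthernet 1/0/49
[Sysname-Ten-GigabitEthernet1/0/49] lldp tlv-enable dot1-tlv dcbx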

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

DCBX Step 3: Verify

Figure 4-12: DCBX Step 3: Verify

The figure above shows the syntax to verify that LLDP has been configured on the
Comware switch to support the DCBX TLVs. Notice that the DEFAULT column
indicates that DCBX TLVs are not enabled by default, while the STATUS column
now shows YES for the DCBX TLVs, confirming they are enabled.
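For reference, a hedged sketch of the verification command (the interface number is an assumption):

[Sysname] display lldp tlv-config interface Ten-GigabitEthernet 1/0/49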

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Ethernet Flow Control

Figure 4-13: Ethernet Flow Control

Priority-based Flow Control (PFC) is defined in the IEEE 802.1Qbb standard.


802.1Qbb allows the network to provide link-level flow control for different classes
of traffic. The goal is to provide lossless Ethernet, which is a strict requirement for
FCoE. This is because fibre channel assumes that frames will not be dropped
under normal circumstances.
Ethernet, however, assumes that frames can be dropped, because higher level
protocols such as TCP deal with this issue. Ethernet does include a “pause”
feature which can be used for flow control. As switch buffers begin to fill and frame
drops become imminent, the switch can send a pause message on the link. This
causes the upstream device to stop sending frames.
The problem with this mechanism is that there is no way to pause only certain
types of traffic. Either all frames are paused, or none are paused. This is actually
fine for FCoE traffic. It is best to wait until the switch indicates that it can once
again accept frames, instead of risking dropped frames, which is unacceptable.
However, during this time, all TCP/IP traffic on a converged network would also be
paused. TCP/IP protocol stacks are designed to handle packet loss at upper
layers, such as TCP, and for some application-layer protocols that use UDP.
Therefore, pausing TCP/IP frames creates unnecessary delays in transmissions,
reducing overall performance for LAN traffic. The solution is a priority-based
flow control mechanism, as described below.

PFC – Enhancing Ethernet Flow Control

Figure 4-14: PFC – Enhancing Ethernet Flow Control

Priority-based Flow Control, as the name implies, enhances the functionality of the
original Ethernet flow control mechanism. Specifically, 802.1Qbb only pauses
frames that are tagged with a certain 802.1p CoS value, such as FCoE traffic.
Meanwhile, LAN traffic, marked with different CoS values, would be unaffected.
Beyond the lossless requirement of FCoE, consider the typical network shown in
the figure above. It is possible that the downstream storage network may
experience high utilization, causing buffers to fill up. The PFC mechanism issues a
PAUSE frame for the storage traffic class, and so the CNA stops transmitting
storage frames.
Since the downstream data network is not experiencing congestion, there is no
reason to pause that traffic. As mentioned, TCP/IP stacks can recover from
dropped frames at upper layers, and in this scenario there is little danger of drops
in the data network anyway.
In this manner, PFC provides a lossless Ethernet medium for FCoE traffic, without
negatively affecting LAN traffic.

PFC – Configuration Modes

Figure 4-15: PFC – Configuration Modes

There are two available configuration modes for PFC. Manual mode is used for
switch-to-switch links, since switches do not use DCBX to configure each other.
Because no negotiation frames are sent between switches, each switch must be
locally configured with compatible PFC parameters.
Automatic mode is used for links connecting switches to endpoint devices, such as
servers and storage systems. For these links, only the switch needs to be
configured for PFC. This is because switches use DCBX to exchange configuration
information with the endpoint CNA, which can adopt the switch's proposed
configuration.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Configuration Steps for PFC Manual Mode

Figure 4-16: Configuration Steps for PFC Manual Mode

The figure above introduces the steps required to configure PFC for switch-to-
switch links. This includes enabling PFC on the interface in manual mode, and
specifying the 802.1p value to be used for lossless traffic.

PFC Manual Step 1: Enable Interface PFC Mode

Figure 4-17: PFC Manual Step 1: Enable Interface PFC Mode

Step 1 involves enabling PFC on the interfaces that connect to other switches.
This configuration would of course be repeated at the switch on the other side of
the link.
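A minimal sketch, assuming Comware 7 syntax and an illustrative inter-switch port:

[Sysname] interface Ten-GigabitEthernet 1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] priority-flow-control enable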

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

PFC Manual Step 2: Enable Lossless for Dot1p

Figure 4-18: PFC Manual Step 2: Enable Lossless for Dot1p

The second step is also configured at the interface, informing the switch which
802.1p value is to be used for lossless Ethernet. In the example, an 802.1p CoS
value of 3 is specified.
Although the 802.1Qbb standard supports multiple lossless values, most hardware
only supports a single lossless queue. For this reason, only one 802.1p value may
be specified for lossless, no-drop service. Since FCoE-based traffic is the reason
for using PFC, this does not create a practical limitation.
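Continuing the sketch from the previous step, under the same assumptions about syntax and port number:

[Sysname-Ten-GigabitEthernet1/0/1] priority-flow-control no-drop dot1p 3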

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Configuration Steps for PFC Auto Mode

Figure 4-19: Configuration Steps for PFC Auto Mode

Before configuring PFC in auto mode, DCBX must first be enabled on the
interface. This was discussed in the previous section of this module.
The steps to configure PFC for switch-to-endpoint links include enabling PFC on
the interface in auto mode, specifying the 802.1p value to be used for lossless
traffic, and then verifying that the configuration has been successful.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

PFC Auto Step 1: Enable Interface PFC Mode

Figure 4-20: PFC Auto Step 1: Enable Interface PFC Mode

The configuration for PFC in auto mode is similar to that for configuring manual
mode. The difference is in the use of the keyword “auto”, as shown in the figure.
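A minimal sketch, assuming Comware 7 syntax and a server-facing port of Ten-GigabitEthernet 1/0/49 (an illustrative choice, consistent with the later APP example):

[Sysname] interface Ten-GigabitEthernet 1/0/49
[Sysname-Ten-GigabitEthernet1/0/49] priority-flow-control auto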

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

PFC Auto Step 2: Enable Lossless for Dot1p

Figure 4-21: PFC Auto Step 2: Enable Lossless for Dot1p

The second step for PFC auto is identical to that for PFC manual mode. The
802.1p value to be used for lossless Ethernet is specified at the interface
configuration level. In the example, an 802.1p CoS value of 3 is specified.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

PFC Auto Step 3: Verify

Figure 4-22: PFC Auto Step 3: Verify

The final step involves validating the configuration. LLDP local information is
displayed for the specific interface configured, to verify which 802.1p value is
enabled for lossless Ethernet. In the figure, much of the initial output is not
displayed, in order to focus on pertinent DCBX PFC information. In bold, you can
see that PFC lossless Ethernet is enabled for an 802.1p value of 3, as indicated by
a 1. It is off for the other 802.1p values, as indicated by a 0.
In the second example in the figure, LLDP neighbor information is displayed for the
specific interface configured. Notice that the “verbose” keyword is necessary to see
detailed information, such as which PFC values have been accepted and are
currently configured on the endpoint CNA.
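A hedged sketch of the two display commands being described (the interface number is an assumption):

[Sysname] display lldp local-information interface Ten-GigabitEthernet 1/0/49
[Sysname] display lldp neighbor-information interface Ten-GigabitEthernet 1/0/49 verbose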

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

APP – Application TLV

Figure 4-23: APP – Application TLV

APP is defined by the DCBX standard, and allows the switch to program
application layer QoS rules on the CNA. For a typical switch configuration, the
admin must define access rules, to be used as match conditions for a QoS policy.
The policy is then applied to a specific interface, so when data exits the interface, it
is checked against the classifier and the corresponding action is taken. For example,
the action could be to elevate or decrease its priority, place it in a different queue, or
drop the packet.
This operation happens in an ASIC on the switch. The CNA has a similar ASIC,
and so also has the capability of performing traffic selection and queuing
operations. However, some server administrators are not fluent with these types of
rules.
The APP TLV allows the network administrator to configure the switch, which will
propose QoS policy to the CNA. In this way, the CNA dynamically learns QoS
rules, and uses them when it transmits frames to the network.
To implement this functionality, traditional QoS mechanisms must be defined, and
then applied to the APP TLV feature. This process starts by defining a classifier.
With APP, traffic can be classified based on the Layer 2 Ethertype field, or on the
Layer 4 TCP or UDP destination port number. For example, FCoE uses Ethertype
0x8906, and all iSCSI traffic uses TCP destination port 3260.

APP – Application TLV

Figure 4-24: APP – Application TLV

Traffic is classified by using advanced ACLs, which can permit or deny traffic
based on several different criteria. This includes protocol, source and destination
port, source and destination IP address, and more.
However, when used to select traffic for APP, we lack the wealth of rules of a
traditional switch. The hardware on both the switch and CNA can understand this
advanced functionality, but the APP TLV is a fairly simple, lightweight system that
is limited in its capability to deliver information. The APP TLV only accommodates
the exchange of Layer 2 protocol Ethertype, or Layer 4 TCP/UDP destination port
number.
This means that when you configure an ACL in order to classify traffic for APP, all
fields are ignored except Ethertype and destination port.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

APP – Application TLV

Figure 4-25: APP – Application TLV

To configure QoS on an HP switch running Comware, an ACL is defined and then
bound to a traffic classifier. The classifier is an object that describes the traffic that
is to receive certain behaviors, or be treated in a specific way for queuing services.
When configuring QoS specifically for DCBX, only two types of ACLs may be used:
an Ethernet frame header ACL or an advanced ACL.
Classifiers can have multiple conditions, and Boolean logic can be used to control
this match criteria. Should multiple criteria be specified as match conditions for a
classifier, it uses a logical AND operator by default. Therefore, all criteria would
have to match, in order for the traffic to be considered in the class.
For the APP TLV, a Boolean OR operator must always be used. For example, you
may create an Ethernet ACL to specify FCoE traffic, and an advanced ACL to
specify iSCSI traffic, and then apply both of these ACLs as match conditions in a
classifier object. If you use a logical AND in this case, the condition would never be
met, since packets cannot be both iSCSI AND FCoE. In that case, the classifier
will be ignored.
You define a classifier to select appropriate traffic, and then you define a behavior
to specify how that traffic is processed. A behavior defines what actions are to be
taken when the condition is matched.
In this example, we want to ensure that a certain class is marked with an 802.1p
CoS value. This remarked value is sent to the server CNA via the APP TLV, which
the CNA accepts and conforms to.

APP – Application TLV

Figure 4-26: APP – Application TLV

A QoS policy consists of a set of classifiers, which are bound to certain behaviors.
QoS policy classifiers can be defined for traditional usage, to locally modify the
switch's own QoS mechanisms. Classifiers may also be defined for the APP TLV,
to modify the QoS mechanism on an attached server's CNA. You must inform the
switch of this by using the “mode dcbx” syntax. Only the rules that include this
keyword will be sent to the CNA.
This configuration model is quite flexible. You can decide which rules are locally
significant, to modify traffic classification and behavior for the switch itself, and
which rules are to be sent toward the CNA to modify its behavior.
Once classifiers and behaviors are defined in a policy, that policy must be applied
in order to take effect. This QoS policy must be applied in the outbound direction,
either to an interface, or at the global configuration level.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Configuration Steps for APP

Figure 4-27: Configuration Steps for APP

The figure above shows the steps to configure the APP TLV feature. DCBX
configuration is a prerequisite to configuring the APP TLV feature. Then ACLs are
configured and applied to a classifier object. Behavior objects are then defined,
and the two are tied together into a QoS policy.
Finally, the QoS policy is activated, and the configuration is verified.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

APP Step 1: Configure Traffic ACLs for Layer 2

Figure 4-28: APP Step 1: Configure Traffic ACLs for Layer 2

The first step is to configure ACLs for Layer 2 traffic classes. Recall that Ethernet
ACLs are used to describe FCoE frames at Ethertype 0x8906.

The figure shows an example ACL, configured to specify Ethertype 0x8906 for
FCoE. The number 4000 is used in this example, since Ethernet frame header
ACLs are numbered 4000 to 4999. Optionally, you can also create named ACLs.
Note that you must specify an exact match on this Ethertype by using an “all ones”
mask of 0xFFFF. This is not an inverse or wildcard mask, so 0xFFFF specifies
that the entire pattern of 8906 must match exactly. The example also
shows how to add comments to an ACL for documentation purposes.
Remember that the ACL simply provides a description of the traffic for
classification purposes, not for security purposes. This helps to explain why the
use of permit or deny has no effect in this use case. Whether you use permit or
deny, the traffic indicated will be considered a match.
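A minimal sketch of such an ACL, assuming Comware 7 syntax (the ACL number and the description text are illustrative, not copied from the figure):

<Sysname> system-view
[Sysname] acl number 4000
[Sysname-acl-ethernetframe-4000] description FCoE - Ethertype 0x8906
[Sysname-acl-ethernetframe-4000] rule permit type 8906 ffff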

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

APP Step 2: Configure Traffic ACLs for Layer 4

Figure 4-29: APP Step 2: Configure Traffic ACLs for Layer 4

As in the previous step, this step involves defining an ACL, to be used for traffic
classification. Instead of an Ethertype ACL for Layer 2 traffic, an advanced ACL is
used, typically to select iSCSI traffic.
As before, the permit or deny keyword is not relevant. Also, for the DCBX APP TLV
feature, only destination port info is analyzed. Source port, IP source address, and
the IP destination address fields are ignored.
The figure shows a typical example, used to specify iSCSI traffic at TCP port 3260.
The ACL number used is 3000, since advanced ACLs are in the range between
3000-3999.
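A minimal sketch, assuming Comware 7 syntax (the ACL number and description are illustrative):

[Sysname] acl number 3000
[Sysname-acl-adv-3000] description iSCSI - TCP destination port 3260
[Sysname-acl-adv-3000] rule permit tcp destination-port eq 3260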

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

APP Step 3: Configure QOS Traffic Classifier

Figure 4-30: APP Step 3: Configure QOS Traffic Classifier

Once an ACL is created, it must be bound to a traffic classifier. QoS traffic
classifiers can group one or more ACLs. You must be careful to use the OR
operator, since the default operator is AND. Because it is impossible for a packet to
be both FCoE and iSCSI, a classifier using AND would never match any traffic.
The top example reveals how to create a single classifier with two criteria. You
might do this if you wanted to specify a lossless Ethernet service for both FCoE
and iSCSI traffic.
If you are only interested in specifying FCoE traffic, the bottom example could be
used. In this example, there is only one match criterion. However, the OR
operator must still be used. Even with a single match criterion, the DCBX module
will ignore the classifier unless the OR operator is used.
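A minimal sketch of both variants, assuming Comware 7 syntax and the ACL numbers used in the previous steps (the classifier names are illustrative):

[Sysname] traffic classifier DCBX operator or
[Sysname-classifier-DCBX] if-match acl 4000
[Sysname-classifier-DCBX] if-match acl 3000
[Sysname-classifier-DCBX] quit
[Sysname] traffic classifier FCOE operator or
[Sysname-classifier-FCOE] if-match acl 4000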

APP Step 4: Configure QOS Traffic Behavior

Figure 4-31: APP Step 4: Configure QOS Traffic Behavior

Now that classifiers are configured, traffic behaviors are defined. This describes
the action to be taken on a class. The DCBX module will only parse the dot1p
behavior. While the CNA ASIC may be capable of more advanced behaviors, the
APP TLV is limited to communicating this single behavior.
You may recall from the previous section that with PFC, traffic marked with a
specific dot1p value should receive no-drop, or lossless Ethernet service. The
purpose of configuring the APP feature is to ensure that appropriate traffic is
actually marked with this 802.1p value, so PFC can do its job.
Since we configured PFC to provide lossless Ethernet for anything marked with an
802.1p value of 3, we configure APP to mark appropriate packets with that value.
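A minimal sketch, assuming Comware 7 syntax (the behavior name is illustrative):

[Sysname] traffic behavior DCBX
[Sysname-behavior-DCBX] remark dot1p 3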

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

APP Step 5: Configure QOS Policy

Figure 4-32: APP Step 5: Configure QOS Policy

In this step, the classifier and behavior are bound together in a QoS Policy. QoS
policies are processed much like many ACLs are processed, in a top-down
fashion. This means that the order of rules is critical.
There are no rule numbers that can be used to change the order, should you
accidentally enter rules in the wrong order. You must remove the policy rules and
reapply them in the proper order. The critical element here is the integration of the
QoS policy with DCBX. Remember, only QoS policy rules that include the “mode
dcbx” option will be handled by DCBX, and communicated from the switch to the
CNA.
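A minimal sketch, assuming Comware 7 syntax and the classifier and behavior names from the previous steps:

[Sysname] qos policy DCBX
[Sysname-qospolicy-DCBX] classifier DCBX behavior DCBX mode dcbx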

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

APP Step 6: Activate the QoS Policy

Figure 4-33: APP Step 6: Activate the QoS Policy

The final configuration step is to activate the QoS policy. This can be done at the
global or interface level; in either case, the policy must be applied in the outbound
direction for DCBX. In this example, interface Ten-GigabitEthernet 1/0/49 connects
to a server CNA, so the policy is applied outbound on this interface.
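A minimal sketch, assuming Comware 7 syntax and the policy name from the previous step:

[Sysname] interface Ten-GigabitEthernet 1/0/49
[Sysname-Ten-GigabitEthernet1/0/49] qos apply policy DCBX outbound

Alternatively, the same policy could be applied globally, which must also be in the outbound direction:

[Sysname] qos apply policy DCBX global outbound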

APP Step 7: Verify

Figure 4-34: APP Step 7: Verify

The figure reveals how to validate your configuration efforts. The top example
shows how to verify which settings were proposed by the switch. The output has
been truncated to focus on pertinent information. You can see that frames with an
Ethertype of 0x8906 are assigned to a QoS map value of 0x8. As we'll see below,
this translates to an 802.1p CoS value of 3.
The bottom example reveals what information the neighboring CNA announces
back to the switch. This validates whether the CNA has accepted the proposed
values.
Recall from our earlier discussion of DCBX that CNAs running pre-standard
versions v1.00 and v1.01 of DCBX only announce their originally configured
settings, with no indication of whether or not they have accepted the proposed
values. In that case, you would only be able to infer that the values have been
accepted by noting successful operation of your system. The IEEE standard
version of DCBX will display the accepted, operational values, as shown in the
example.

NOTES

_______________________________________________________________

_______________________________________________________________

APP Step 7: Verify Continued

Figure 4-35: APP Step 7: Verify Continued

The figure above shows the 802.1p-to-CoS map values used by Comware 7
devices. This explains why a value of 0x8 was displayed in the previous LLDP
output: Comware switches map a CoS hex value of 0x8 to an 802.1p value of 3.

APP - Other examples

Figure 4-36: APP - Other examples

The figure shows some other pertinent examples related to APP TLV functionality.
In the top example, both iSCSI and FCoE are proposed to the CNA as being marked
with an 802.1p value of 3 (Comware CoS 0x8).
The second example in the figure hints at additional capabilities. As before, FCoE
Ethertype 0x8906 is assigned to CoS map 0x8 (802.1p = 3). Also ports 8000 and
6600 are assigned to CoS map 0x4 (802.1p = 2).
Port 6600 is the port used for VMWare vMotion, and port 8000 is used for some
other application of interest to the network administrator. Now that these
applications will be marked as indicated, QoS policy can be implemented to control
this traffic. For example, an administrator can reserve 1Gbps for vMotion traffic,
2Gbps for the data application, and 4Gbps for storage.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


ETS – Enhanced Transmission Selection

Figure 4-37: ETS – Enhanced Transmission Selection

While the APP TLV allows us to assign specific traffic classes to specific dot1p
values, it does not allow us to specify the bandwidth and queuing mechanisms
used by the CNA. So it is possible that all dot1p values would be assigned to the
same queue on the CNA. Marking traffic types with a specific 802.1p value will
have no effect if all the 802.1p values are processed by the same queue.
ETS, as defined by the 802.1Qaz standard, allows the switch to specify which
802.1p value should be processed by which queue on the CNA, and how much
bandwidth should be available for each queue.
Marking protocols, such as 802.1p at Layer 2, and DSCP at Layer 3, have been
standardized for years. However, actual queuing and scheduling mechanisms are
vendor defined. So, while we can mark packets in a standard way, how those
packets are actually processed can be unique for each vendor.
It is true that most vendors base their queue service on common mechanisms,
such as weighted fair queuing, weighted round-robin queuing, or strict priority
queuing. Still, the specifics of these mechanisms are not standardized. There was
no standard for specifying how to service packets with a specific marking.
802.1Qaz defines such a standard. It describes ASIC queue scheduling and
bandwidth allocation. The standard not only describes how scheduling should be
done, it also defines how the CNA can be programmed from the network switch for
conformance. Thus, the switch can control how the CNA processes frames
outbound, back toward the switch.
The switch controls the number of queues (maximum 8) and the CoS-to-queue
mapping on the CNA. So the switch can dictate which 802.1p values map to which
queues. For example, the switch can indicate that packets marked with CoS value
3 shall be placed into queue number 2.
The scheduling algorithm can also be controlled. Weights, which essentially
translate to bandwidth, can be assigned to queues.


NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


ETS – Enhanced Transmission Selection

Figure 4-38: ETS – Enhanced Transmission Selection

ETS can control the number of queues used on the CNA. While the adapter may
initially be configured to utilize two queues – one for data and one for storage
traffic, it can be configured to leverage more queues.
ETS gives us the option to assign particular dot1p values to specific queues. By
default, a single dot1p class is assigned to a single queue. Most physical switches
support 8 queues per interface, so each of the eight 802.1p values gets its own
queue.
This mapping can be customized by modifying the switch's dot1p-to-lp QoS map.
There is one such map for the entire switch, so you cannot have unique mappings
per interface. If this mapping is changed, it applies both to how the local switch
processes frames and to how the CNA will be instructed to process frames.
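On a Comware 7 switch, the current mapping can be checked before making any changes
with a command such as:

   display qos map-table dot1p-lp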

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


ETS – Enhanced Transmission Selection

Figure 4-39: ETS – Enhanced Transmission Selection

The default maps each 802.1p value to its own queue. In Comware terminology,
this means that each of the dot1p values is assigned to a unique local-precedence
map. This local-precedence map controls how the Comware switch processes
frames. If the local precedence value is 0, then the Comware switch places frames
in queue 0.
In this example, the default one-to-one mapping of 802.1p value to local
precedence has been changed. In this configuration, only the 802.1p value of 3 is
assigned to its own local-precedence value, and therefore its own queue. All other
802.1p values share queue 0. Essentially, this sets up a scenario where all the
data shares a single queue, and the storage traffic gets a queue of its own.


ETS – Enhanced Transmission Selection

Figure 4-40: ETS – Enhanced Transmission Selection

Several queuing options are provided by the ETS standard, as described here.
Strict priority can be a good option to use for voice traffic. When congestion
occurs, traffic in higher strict priority queues will be serviced first. Lower priority
queues will not be serviced unless higher priority queues are empty. The risk of
these mechanisms is that the strict priority queue can starve other queues.
For this reason, a credit-based mechanism has been introduced. The intention is
to provide strict priority queues, while mitigating the risk of queue starvation by
enforcing a rate limit.
This is a very good mechanism to use when there is a mixture of traffic types. For
example, VoIP traffic requires low delay and minimal variations in delay (jitter). The
strict priority mechanism ensures that the VoIP packets, placed in the strict queue,
will be serviced preferentially. However, the credit-based rate limit prevents these
packets from starving other queues. This mechanism has not been implemented
yet. For this reason, most implementations focus on the ETS queuing mechanism.
Enhanced Transmission Selection can be seen as both a standard to exchange
information and a specific scheduling mechanism. This mechanism allows each
traffic class to have its own minimum bandwidth or service level. If a class
is not using its full allocation, the remaining bandwidth is available to other classes.
The generic nature of this ETS mechanism frees vendors to implement it into their
unique hardware platforms. For Comware, this definition matches the Weighted
Round Robin (WRR) scheme, and is implemented on an interface. Bandwidth
percentages are calculated based on this scheme, and those percentages are sent
to the CNA.
It is up to the CNA to receive these values, and configure its ASIC in such a way
that it respects these values.


Configuration Steps for ETS

Figure 4-41: Configuration Steps for ETS

As with PFC and the APP TLV, ETS configuration information is transported using
DCBX. Therefore, the configuration of DCBX is a prerequisite to the configuration
of ETS.
The figure shows the three steps involved in configuring ETS. This includes
configuring CoS-to-Queue mapping, setting interface scheduling and weight
parameters, and then verifying the configuration. These steps are detailed in the
following pages.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


ETS Step 1: QoS Map dot1p-lp

Figure 4-42: ETS Step 1: QoS Map dot1p-lp

Modifying the QoS queue map is an optional step, since every switch already has
a mapping by default. The figure indicates how to modify the default configuration
on a Comware switch for a two-queue configuration. You can see that an 802.1p
value of 3 is mapped to queue 1, while all other 802.1p values are mapped to queue 0.
So, based on this mapping, only two queues will be used to process all traffic.
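A minimal sketch of such a two-queue map on a Comware 7 switch might look like the
following (the figure shows the actual commands; verify the import/export syntax for your
release):

   qos map-table dot1p-lp
    import 3 export 1
    import 0 1 2 4 5 6 7 export 0

This maps 802.1p value 3 to local precedence (queue) 1 and all other 802.1p values to
local precedence 0.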

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


ETS Step 2: Interface Scheduling and Weights

Figure 4-43: ETS Step 2: Interface scheduling and weights

In this scenario, queues 0 and 1 are used to process all traffic. Based on this, the
ETS application will look at how those queues are actually configured on the
physical interface. This interface configuration will be translated to ETS values to
be proposed to the server's CNA. With this in mind, the first step is to configure the
type of queuing. Since queue starvation is not desirable, WRR is configured.
For WRR, two types of weights can be assigned: byte-count and weight value. For
ETS applications, specifying weights using byte-count is best, since it is a more
accurate way to specify bandwidth utilization. If a weight value is used instead,
packets are counted. Since packets are of variable length, you have less
granularity of control over bandwidth. Ten 500-byte packets represent a much
different bandwidth utilization than ten 1500-byte packets.
In the example, queue 0 is assigned a byte-count of 4, and queue 1 is assigned a
byte-count of 6. If weight had been specified instead, six packets of 200 bytes each
would mean that 1200 bytes are transmitted, while four packets of 1500 bytes
would result in 6000 bytes transmitted. So weight values may not accurately reflect
bandwidth. To have more accurate control, therefore, you should use byte count.
On the physical interface in this example we are using byte-count weight values of
four and six respectively. The configurable range is between 1 and 15.
4+6 = 10, so queue 0 gets 4 of 10 bytes (40%), while queue 1 gets 6 out of 10 or
60%. This is a simple calculation when there are only two queues.
The configured values in the example will be sent to the CNA. It is up to the CNA
to program its ASIC to actually support these values.
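As a hedged sketch of this step on a Comware 7 switch (queue numbers and byte-count
values follow the example; keywords can differ between platforms and releases):

   interface Ten-GigabitEthernet 1/0/49
    qos wrr byte-count
    qos wrr 0 group 1 byte-count 4
    qos wrr 1 group 1 byte-count 6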


ETS Step 2 Continued: A Weight Problem

Figure 4-44: ETS Step 2 Continued: A Weight Problem

The caveat for the example described above is that the switch will actually
calculate the percentages that it announces using all the queues assigned to
WRR. Since all the queues are enabled for WRR, and they all have a default
weight, we do not see the expected percentages in the LLDP output above. We see
that each queue gets 11%, except for queue 3, which gets 17%. This is actually
fairly close to our intended targets.
As shown in the figure, 11+17 = 28, 11/28 = 39%, and 17/28 = 61%. However,
when we look at the output, it is not intuitive or obvious that we have achieved our
goal. Further, we intend to use two queues, but we see that all eight queues are in
play. To rectify this issue, we must ensure that only our intended queues are in use.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


ETS Step 2 Continued: Assign Queues to SP

Figure 4-45: ETS Step 2 Continued: Assign Queues to SP

To ensure that only queues 0 and 1 are used, we can configure the other queues to
use the Strict Priority (SP) queuing mechanism. It is then vital that these other
queues are never actually used for anything. Otherwise, they could starve the
other two queues on the interface. This is enforced with the local 802.1p-to-local-
precedence map. If this map does not assign any traffic to the other queues, they
will remain idle.
The example in the figure indicates how to assign these idle queues to SP.
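Continuing the same sketch, the remaining queues could be moved to the SP group on the
interface (illustrative syntax only):

   interface Ten-GigabitEthernet 1/0/49
    qos wrr 2 group sp
    qos wrr 3 group sp
    qos wrr 4 group sp
    qos wrr 5 group sp
    qos wrr 6 group sp
    qos wrr 7 group sp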

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


ETS Step 3: Verify Local Configuration

Figure 4-46: ETS Step 3: Verify Local Configuration

Now that the non-essential queues are assigned to SP, only queues 0 and 1
participate in WRR. The LLDP output above shows that queue 0 receives 40%
of the available bandwidth, and queue 1 receives 60%.
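Typical commands for this kind of local verification on a Comware 7 switch (the figure
shows the actual output) are:

   display qos wrr interface Ten-GigabitEthernet 1/0/49
   display lldp local-information interface Ten-GigabitEthernet 1/0/49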

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Learning Activity: DCB Operation and Component Review
Write the letter of the descriptions in the right-hand column in the space provided
under each numbered DCB component in the left-hand column.
DCB Components:
1. DCBX
_______________
_______________
2. PFC
_______________
_______________
3. APP
_______________
_______________
4. ETS
_______________
_______________
5. CNA
_______________
_______________

DCB Component Descriptions:
a. Helps to ensure that Ethernet can provide lossless frame delivery
b. Requires special LLDP TLV’s to be enabled on the interface
c. Switch-switch links use manual configuration, while switch-endpoint links use
automatic configuration
d. Provides instructions to CNA about application-to-CoS mapping
e. Only accommodates exchange of Ethertype or L4 destination port numbers
f. Communicates key parameters between DCB-capable devices
g. Support FC and Ethernet on a single interface
h. Controls bandwidth used by LAN and SAN traffic over a converged Ethernet link
i. Attempts to localize the effects of congestion to the device that is causing it
j. Defines queue scheduling mechanisms
k. Allows network administrators to control server QoS operation
l. Enables devices to discover peers and detect configuration parameters
m. Can treat FCoE and LAN traffic differently, based on CoS values
n. Maps specific 802.1p classes to particular queues on the CNA


Learning Activity: Answers


1. DCBX
Communicates key parameters between DCB-capable devices (f)
Enables devices to discover peers and detect configuration parameters (l)
Requires special LLDP TLV’s to be enabled on the interface (b)

2. PFC
Helps to ensure that Ethernet can provide lossless frame delivery (a)
Can treat FCoE and LAN traffic differently, based on CoS values (m)
Switch-switch links use manual configuration, while switch-endpoint links use
automatic configuration. (c)

3. APP
Provides instructions to CNA about application-to-CoS mapping (d)
Allows network administrators to control server QoS operation (k)
Only accommodates exchange of Ethertype or L4 destination port numbers (e)
4. ETS
Controls bandwidth used by LAN and SAN traffic over a converged Ethernet link
(h)
Maps specific 802.1p classes to particular queues on the CNA (n)
Defines queue scheduling mechanisms (j)
5. CNA
Support FC and Ethernet on a single interface (g)


Lab Activity 4: Lab Topology

(Lab topology – see Figure 4-47: ClassCore1/ClassCore2 (5900CP-1/5900CP-2), Core1/Core2
(5920-1/5920-2), and server CNA ports CNA-P1/CNA-P2, interconnected via ports T1/0/23,
T1/0/24, and T1/0/13.)
Figure 4-47: Lab Activity 4: Lab Topology

In this lab, the Data Center Bridging feature will be configured on the 5900
devices.
DCB is managed through LLDP extensions and provides lossless Ethernet
frame delivery.
This lossless Ethernet can be used to deploy the Fibre Channel over Ethernet or
iSCSI storage protocols.
This lab is in preparation for the next lab: Fibre Channel and Fibre Channel over
Ethernet.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Lab Activity Preview: Data Center Bridging

Figure 4-48: Lab Activity Preview: Data Center Bridging

In this lab, you will configure DCB on inter-switch links and configure and verify
DCB on Server access ports with Converged Network Adapters (CNA).

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Lab Activity 4 Debrief


Use the space below to record your key insights and challenges from Lab
Activity 4.

Debrief for Lab Activity 4


Challenges Key Insights

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Summary

Figure 4-49: Summary

In this module, you learned about the simplicity, cost, and feature benefits of Data
Center Bridging. You also learned about how DCB compares to previous attempts
at converging data and storage networks.
You learned about the specific protocols that define DCB, starting with DCBX. This
is the communication protocol used between converged network switches and
storage server CNA adapters.
You learned that PFC helps to ensure that a lossless Ethernet service is provided
for FCoE traffic. It does this by enhancing the standard Ethernet Pause
mechanism, enabling it to pause only frames marked with a specific 802.1p
value.
The APP TLV ensures that both the switch and its attached server CNA are
properly marking frames. This ensures that both PFC and ETS can function
properly.
While APP is responsible for marking frames, ETS controls how to treat frames
marked with a specific value. ETS standardizes how frames are queued for
transmission, how much bandwidth each queue receives, and the queuing
mechanism used for each queue.
Lastly, the CN protocol was discussed. You learned how CN differs from PFC in
that it is an end-to-end protocol, as opposed to a link-local protocol.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Learning Check
Answer each of the questions below.
1. Which of the following are DCB protocol components? (Choose four)
a. DCBX – the Datacenter Bridging exchange protocol.
b. PFC: Priority-based Flow Control
c. ETS: Enhanced Transmission Selection
d. EVI: Ethernet Virtual Interconnect
e. CN: Congestion Notification.
2. DCBX is an extension to LLDP that facilitates connectivity between DCB-
enabled devices.
a. True.
b. False.
3. What is the name for the feature that provides a PAUSE mechanism per
802.1p priority value to help ensure that storage traffic receives lossless
service without negatively impacting data traffic?
a. ETS
b. Congestion Notification
c. Standard Ethernet Flow Control
d. DCBX
e. PFC.
4. Which of the statements below accurately describe the Application TLV, or
APP (Choose three)?
a. The APP TLV allows the network administrator to configure a switch,
which will then automatically propose QoS policy to the CNA.
b. To implement APP, traditional QoS mechanisms must be defined, and
then applied to the APP TLV feature.
c. A special set of QoS mechanisms are provided to deploy the APP TLV
feature.
d. The APP TLV only accommodates the exchange of Layer 2 Ethertype
value, or Layer 4 TCP/UDP destination port number.
e. The APP TLV can accommodate all of the classification mechanisms
supported by a typical switch.
5. ETS allows the switch to specify which 802.1p value should be processed by
which queue on the CNA.
a. True.
b. False


Learning Check Answers


1. a, b, c, e
2. a
3. e
4. a, b, d
5. a

Fibre Channel over Ethernet
Module 5

Objectives
In this module, you will learn about the fundamental concepts surrounding native
Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) based SAN fabrics.
This includes a discussion of fabric components, connectivity, and operation.
Specific topics related to fabric addressing, security, reliability, and redundancy
will be covered, as well as how to perform initial configuration functions.
Additional concepts and configurations involve FCoE host access, fabric
expansion, zoning for security, and N_Port Virtualization (NPV). After completing
this module, you should be able to:
 Describe Fibre Channel basic operations
 Understand the roles and ports in a FC Fabric
 Configure a 5900CP for native FC connectivity
 Configure FCoE functionality for Server access
 Configure Fabric Extension
 Understand and configure Storage Area Networking (SAN) Zoning
 Describe and configure NPV mode

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


FC and FCoE Overview

Figure 5-1: FC and FCoE Overview


What is a SAN?

Figure 5-2: What is a SAN?

A Storage Area Network (SAN) is a separate infrastructure used for storage
components. It is a network designed specifically for storage access. Because of
the critical nature of data storage, and the requirement for absolute fidelity, a SAN
design must be very resilient and redundant.
Historically, the SAN was segregated from LAN traffic. This was largely due to the
limited bandwidth of 100Mbps and 1Gbps Ethernet, along with the lack of any
standardized means to converge the two networks.
Most SANs leverage the Fibre Channel protocol to transmit data between servers
and storage systems, with additional capabilities for long-haul links in case there
is a need for inter-site replication services.


SAN Components

Figure 5-3: SAN Components

The components that make up a SAN infrastructure are introduced below.


 Switches: create the fabric that interconnects SAN devices, in a similar way
that Ethernet switches enable connectivity for LAN devices. A “switch fabric” is
simply a group of switches that are interconnected to provide a scalable
solution. While scalability is often desired, SANs require a very low-latency
system. For this reason, as few switches as possible should be deployed to
meet an organization’s SAN requirements.
 Routers, bridges and gateways: Devices typically used to extend the SAN
over long distances. SAN Fibre Channel systems use flow-control
mechanisms that were designed for low-latency, short-haul networks.
Specially designed routers, bridges, and/or gateways have the ability to
extend the reach of SAN technology over long distances, while satisfying the
requirement for quick responses to SAN signaling frames. These devices can
have more advanced features, such as integrating multi-protocol systems,
improving fault isolation, and more.
 Storage devices: These are the disk subsystems used to actually store data,
available with a wide variety of capacities and capabilities. Often, storage
systems are deployed as a Redundant Array of Independent Disks (RAID), or
in a “Just A Bunch of Drives” (JBOD) configuration. Various virtualization
technologies can be leveraged with storage systems.
 Servers: The devices that connect to a SAN with either a Host Bus Adapter
(HBA) or Converged Network Adapter (CNA).
 Cabling and connectors: The medium over which digital signaling is
transmitted and received. As with LANs, both fibre optic and copper solutions
are available.


HP Disk Storage Systems Portfolio

Figure 5-4: HP Disk Storage Systems Portfolio

The figure reveals some of the many solutions that HP provides as they relate to
SAN systems. This includes systems for the SMB market, such as the StoreVirtual
4000. For the midrange market, HP offers the 3PAR StoreServ 7000 and the P6000
EVA. Enterprise-class storage solutions, such as the 3PAR StoreServ 7450, 10000, and
XP P9500, are also available.
This list of available products and features is rapidly evolving. It is recommended
that you consult with HP’s Storage Single Point Of Connectivity Knowledge
(SPOCK) for detailed information about solutions, compatibility, and capability.
The current URL for HP Storage SPOCK is
http://h20272.www2.hp.com/Index.aspx?lang=en&cc=us&hpappid=hppcf


Converged Networking - Cookbooks

Figure 5-5: Converged Networking - Cookbooks

The HP storage and server groups work in a very strict configuration mode with regard
to storage systems. The goal is to minimize all possible risks with regard to
platform interoperability, firmware upgrades, version capabilities, and more. This is
why the storage and server group creates validated configurations by building a
complete system of switches, storage arrays, and servers with Converged Network
Adapters (CNAs) and Host Bus Adapters (HBAs), using specific firmware
versions.
Various combinations are fully tested and validated to ensure the smoothest
possible deployment experience. As mentioned before, platforms, versions, and
firmware upgrades are quickly evolving. As such, this course does not focus on
specific product combinations for deployment. The focus is instead placed on
understanding how these systems work, and how to configure them.


Host (Initiator) – (Originator)

Figure 5-6: Host (Initiator) – (Originator)

Consider a legacy, stand-alone server that is not connected to or using a SAN, but
instead uses internally installed disk systems. Communication is initiated by the
server – it needs to either store or retrieve data from the disks, which passively
wait to receive and respond to these requests.
This aspect of data storage does not change after migration to a SAN-based
solution. Modern SAN systems simply move the storage out of the server’s
physical enclosure, such that they are connected via some infrastructure, instead
of being connected by a local cable inside the server’s enclosure. At a basic level,
servers still request data to be stored or retrieved from storage as before. This is
why the host is referred to as the Initiator, or originator of SAN service requests.

Note
There are some specialized replication conditions in which the storage system
initiates communication with another storage system. Most of the time, however, the
host server is the initiator, as described above.

The actual file or storage system’s independence from the servers is based on
Logical Unit Numbers (LUNs). A LUN is how each logical disk or volume is
identified by the SAN.
Inside the storage system target, many terabytes of raw data may be available.
The storage administrator can logically separate this physical storage array into
unique volumes, or LUNs. The administrator can now decide which LUNs are
available to be presented to specific hosts.
The figure shows a host connected to a SAN with two Host Bus Adapters (HBAs).
This provides for redundant, multi-fabric connectivity. SAN-A and SAN-B are
completely isolated from each other, providing two separate paths between the


host initiator and the storage target. The SAN system could select a path to use, or
it might be configured to load-balance between the two paths.
In this scenario, SAN-A and SAN-B are physically isolated. This means that the
SAN’s have completely isolated control, management, and data planes. It also
means that firmware updates to the SAN-A fabric will have no effect on the other
fabric, SAN-B.
In order to take advantage of these two separate SAN fabrics, hosts must connect
to the SAN with adapters that are configured with Multi-Path I/O (MPIO) functionality.
Without MPIO, the server would not recognize the two paths to the same
disk system; it would treat them as two unique disks and therefore send different
read/write commands to what is actually one system. Of course, this would create
serious data corruption.
NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Disk Array (Target) – (Responder)

Figure 5-7: Disk Array (Target) – (Responder)

In a SAN solution, a Disk Array is referred to as the target, or responder. It is the
target of host read/write requests, and it responds to those requests. Typically the
Disk Array will have multiple interfaces and controllers for increased throughput,
availability, and redundancy.
Like the host, the target system may have an interface connected to SAN-A and
another connected to SAN-B. The storage system most likely will have two
separate internal controllers, with each one responsible for communicating with its
own SAN fabric.
Disk arrays are typically protected with a controller cache memory system with
battery backup. In case of a power outage, this “write-back” caching mechanism
helps to ensure data integrity.
Management software is typically deployed to perform replication functions, often
across multiple locations, so the disk array can replicate or make backups to remote
locations for disaster recovery purposes.


Nodes, Ports, and Links

Figure 5-8: Nodes, Ports, and Links

Specific terminology is used to refer to the nodes, ports, and links that
interconnect SAN subsystems. The initiator and target interfaces to the fabric are
called “N_Ports,” or Node Ports. These N_Ports connect to a SAN switch's
“F_Port,” or Fabric Port. F_Ports are therefore at the edge of the SAN fabric.
If your SAN consists of a single switch, you would only have F_Ports on the switch,
connected to host and target N_Ports. If you extend your SAN fabric to include
multiple switches, E_Ports, or Expansion Ports, will be used to connect them.
While other port types are available, they are not relevant to the discussion here.
These port types will be reviewed in later modules.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


FC Frame and Addressing

Figure 5-9: FC Frame and Addressing

As previously described, hosts initiate read/write requests to storage targets over a
SAN fabric. These request messages are carried inside a Fibre Channel (FC)
frame. This section will discuss FC frames, header format, packet flow, the FC World
Wide Name (WWN), and the FC_ID.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Fibre Channel Frame

Figure 5-10: Fibre Channel Frame

The FC frame begins with a Start of Frame (SOF) delimiter, and concludes with an
End-of-Frame (EOF) delimiter. The frame header contains addressing and other
information, as discussed in the next page.
There are optional headers that could be used to assist in things like encryption,
for example. The data payload field contains the actual data to be transmitted.
Notice that the payload field can be up to 2112 bytes, which is larger than the
standard 1500-byte maximum payload of an Ethernet frame. On a converged
network, FCoE is used to carry FC frames inside Ethernet frames. For this to be
successful, jumbo frames must be enabled on the Ethernet infrastructure.
The FC frame also includes a CRC to validate data integrity between host and
target.
The figure shows a data frame. There are also link control frames that are used to
acknowledge frame receipt, and also for link responses (Busy or Reject).
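As a side note on the jumbo frame requirement mentioned above: on Comware switches this
is typically an interface-level setting, and on many data center models it is already
enabled by default. A minimal sketch (the interface name is only an example):

   interface Ten-GigabitEthernet 1/0/49
    jumboframe enable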

NOTES

_______________________________________________________________

_______________________________________________________________


Fibre Channel Frame Header

Figure 5-11: Fibre Channel Frame Header

The figure shows the individual fields of the FC frame header, as described below.
 R_CTL: Indicates frame type (data, ACK, or Link Response) and data type.
 D_ID: The destination identifier indicates the destination of the frame. An
initiator must determine the D_ID of a target so it can originate a request.
There are several methods of determining the D_ID, as will be discussed
below.
 CS_CTL: The Class Specific Control field is used for QoS.
 S_ID: The source identifier indicates the originator of the frame. It can either
be assigned by the fabric controller or administratively set.
 TYPE: Indicates the upper-layer protocol being carried. In other words, it
indicates what is carried in the Data Payload.
 F_CTL: Frame Control indicates various options, such as sequence
information.
 SEQ_ID: The sequence identifier is assigned by the sequence initiator, and is
unique within a given exchange
 DF_CTL: Indicates presence and size of optional header information.
 SEQ_CNT: the Sequence Count is a 16-bit number that gets incremented on
each frame in a sequence. Storage data must be fragmented into pieces for
transmission and reassembled in the proper order upon arrival at the
destination. SEQ_ID and SEQ_CNT facilitate this process. A file being stored
may be broken up into several sequences, each with a unique SEQ_ID. That
sequence is further fragmented into 2112-byte pieces to fit into an FC frame.
With these numbers the destination can determine that it has received frame n
of sequence SEQ_ID.
 OX_ID: the Originator Exchange ID is filled in by the originator. This is used to
group related transmission sequences.


 RX_ID: This value is set to 0xFFFF by the originator. Along with the OX_ID,
these values constitute a kind of nickname for any given exchange.
 Parameter: The parameter field has multiple purposes. One of the most
common is to be used like the IP Header’s offset field to indicate a relative
offset location for data, or for link control information.


Fibre Channel Terminology

Figure 5-12: Fibre Channel Terminology

The figure above highlights the characteristics of a frame, a sequence, and an
exchange.
An exchange can be compared to a high-level SCSI read or write operation.
Servers need to read information from, or write information to a disk storage
system. The server sends information to the disk, and the disk responds back. It is
possible that the information sent to the disk subsystem is a very small request,
such as “read the 10MB file named MyDoc.pdf,” for example. Of course, the
response is that the entire 10MB file is then transmitted from the disk subsystem to
the server, which requires several frames.
The complete group of frames that belong to a single request is called an
exchange. So, an exchange is a bidirectional communication which can be
compared to a traditional SCSI read or write operation. It is the complete process
of a server sending a read request to a disk, and the resultant frames delivering
that file to the server. The server then confirms successful receipt of the requested
data.
This is also true for a write operation. The host sends a simple write request, the
disk confirms that it is ready to perform this operation, and then server transfers
the file.
An exchange consists of a number of sequences. A sequence is a communication of one
or more frames in a single direction. So, a simple read request would be a single
frame, and the response could be, say, 50 frames. The 50 frames sent from the
disk to the server constitute a sequence.
At the lowest level is a single frame. This is the description of what should be read
from the disk or the actual payload that has been read from the disk and is now
being transmitted to the server. The frame carries Upper-Layer Protocol (ULP)
data. For storage traffic, this is the SCSI protocol, which is encapsulated in a Fibre
Channel frame.

Rev. 14.41 5-15

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

SCSI (FCP) write operation

Figure 5-13: SCSI (FCP) write operation

The figure illustrates the relationship between SCSI operations and the Fibre
Channel Protocol (FCP) implementation (at the upper layer protocol level). Also
described is how those operations translate to Fibre Channel and how the FC
layer packages them for transmission.
For a write operation, a server initiator starts with a write command. This is a
sequence that contains only a single frame. A transfer ready response confirms
that the disk is ready to fulfill this operation.
Then the server transmits five frames in a row, all part of sequence number 3.
When all data has been sent, the target system confirms the reception of all data
by sending a response frame with the sequence field set to a value of 4. This
indicates that all of the sequence 3 data was received successfully, and so sequence
4 is expected next.
This scenario used five frames as an example. It is possible that fifty, one hundred,
or more frames could be transmitted in sequence before an acknowledgement
frame is sent. The important thing is that all frames are received in order for the
transmission to be successful. Otherwise, the data would be corrupted.
This behavior explains why FC has very strict expectations for a lossless network.
The protocol has no elegant mechanism to recover from frame loss. In this
scheme, if a packet is lost, there is no selective retransmission. The entire
sequence must be retransmitted. For example, if frame 5 of 50 were lost, the 45
frames that follow it would have to be retransmitted along with it.

5-16 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

FC World Wide Name (WWN)

Figure 5-14: FC World Wide Name (WWN)

The Fibre Channel WWN is a unique identifier for each device in the fabric. Each
HBA, CNA, and switch port must have a unique WWN. This is akin to the burned-in
address (BIA) of an Ethernet adapter. However, the WWN is not used for addressing
in FC, or for frame delivery. Recall that the FC_ID is used for this. The FC_ID is a
24-bit address used to define source/destination pairs inside an FC frame.
The WWN is used to unambiguously identify systems independent of the FC_ID.
The FC_ID is assigned dynamically by the fabric when the host connects, using a
kind of “first come, first serve” method. This means that when the host is rebooted,
or loses connectivity to the fabric for some time, another system could come on
line, and acquire the FC_ID that was originally assigned to the disconnected host.
For this reason, it is important to use some identifier that will remain constant, and
the WWN services this purpose.
This WWN proves useful at two levels. At the fabric level, the WWN can be used
for zoning. The figure shows two servers connected to a SAN fabric, along with
two storage systems. Zoning can key on the assigned WWNs to control which
servers can see which storage system. In this way, zoning acts as a type of
access filter, limiting storage access to only appropriate, trusted hosts. There
should typically be one initiator per zone, as a best practice.
The second use case for WWN is known as LUN masking. This is a feature that
enhances the security of the storage system itself, as opposed to the FC fabric.
The figure shows three logical disks (LUNS) defined inside the storage system.
The system decides which LUN is visible to which initiator. LUN A could be visible
to the server at the top of the figure, with the WWN ending in b6. Meanwhile, LUN
B may be visible only to the bottom server, with a WWN ending in b7.
To summarize, zoning controls which targets are visible, while on that target, LUN
masking controls which LUNs are visible to a specific WWN. Zoning can also
contain Registered State Change Notification (RSCN) messages, which are
generated when nodes or switches join or leave the fabric, or when a switch name
is changed.


FC WWN Structure

Figure 5-15: FC WWN Structure

The WWN is a 64-bit address defined by the IEEE. A portion of this address
contains vendor specific information, with another portion used as a type of serial
number, to distinguish between ports from the same vendor.
The WWN is used only inside the fabric to which the adapter is connected. This
means that the WWN does not need to be globally unique. It must only be unique on
the connected fabric.
The IEEE has defined two formats for the WWN:
 Original format: Addresses are assigned to manufacturers by the IEEE
standards committee, and are built into the device at build time, similar to
Ethernet MAC address. First 2 bytes are either hex 10:00 or 2x:xx (where the
x's are vendor-specified) followed by the 3-byte vendor identifier and 3 bytes
for a vendor-specified serial number
 New addressing schema: the most significant four bits are either 0x5 or 0x6,
followed by a 3-byte vendor identifier and four-and-a-half bytes for a vendor-
specified serial number


Fibre Channel ID Addressing (1 of 3)

Figure 5-16: Fibre Channel ID Addressing (1 of 3)

As previously stated, the WWN is not used for actual frame delivery, since the 24-
bit FC_ID serves this purpose. This FC_ID is assigned by the fabric when the node
connects, via a formal registration process.
Parallels can be drawn between FC_ID and IP addressing. Like IP addresses,
FC_ID is an end-to-end address, and does not change as the packet traverses
routed systems.
It is also hierarchical. Each switch in the fabric has a unique domain ID. A switch’s
domain ID serves as the first octet of the FC_ID assigned to any connected node.
The switch can also assign an area ID, and a vendor specific portion of the
address. Forwarding is based on this hierarchical address. As with IP, routing
tables can be summarized with masks, placing /8 or /16 prefix-based entries in the
routing table.
Unlike IP, FC_ID’s do not have an underlying Layer 2 address. With an Ethernet/IP
infrastructure, the MAC address is dynamically learned as needed, and mapped to
the IP address in an ARP cache. With FC, a node must go through a formal
registration process called fabric login or FLOGI. This process handles host
registration, and ensures that the switch fabric has correct addressing information.
The figure shows an example of the original Brocade address structure for the
FC_ID. The first octet contains the Domain ID. This is a number from 1 to 239,
which is the maximum number of domain IDs possible in a single FC fabric.
The next octet contains the area, which typically represents the switch's port
number. The final octet is vendor specific. It could be 0 for the first host on an
interface, and increment from there.
As an example of this schema, if a switch was assigned domain ID 1, and a host
connected to port 5 on that switch, then the FC_ID could be 0x010500.

Rev. 14.41 5-19

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Fibre Channel ID Addressing (2 of 3)

Figure 5-17: Fibre Channel ID Addressing (2 of 3)

The Domain ID is a term that refers to the actual switch and all of the N_Ports that it
groups together. The domain ID needn't be globally unique. It must only be unique
within a fabric. For Comware devices, there is a flat FC_ID assignment schema.
The area and ports are grouped together as Port IDs. Therefore, 16 bits are
available for ports, which improves scalability for a Comware-based solution.
particular switch port. The first host that comes online, connected to a switch
using domain ID 1, will be assigned an FC_ID of 0x010001. The second host to
come on line will be assigned an FC_ID of 0x010002, and so on, in a first come,
first serve fashion.
This grouping based on a switch’s domain ID can improve and simplify FC routing
functions. A switch with a domain ID of 01 can create an FC route for 01000/8,
grouping all FC_ID’s of domain 01 in one class. Routing concepts will be explored
later in this module.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-20 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Fibre Channel ID Addressing (3 of 3)

Figure 5-18: Fibre Channel ID Addressing (3 of 3) for FC Auto Mode

In this example, there are two switches, each with a unique domain ID. Switch 02
on the left has three servers connected. The switch on the right has been assigned
to domain 01, with two storage systems attached. All end ports (initiators or
targets) get a unique FC_ID, assigned by the switch. So the servers are all
0x02xxxx, and the storage systems are all 0x01xxxx.
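Once nodes have registered, the FC_IDs that the fabric has assigned can usually be
checked from the switch. On Comware 7 FCoE switches, a command along these lines is
available (the VSAN number is an example, and exact syntax may vary by release):

   display fc login vsan 1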

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-21

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Fabric Domain IDs

Figure 5-19: Fabric Domain IDs

Each switch can be statically configured with a domain ID, or it can be configured
to support a dynamically assigned domain ID. To avoid unpredictable results, you
should configure all switches in a fabric to use the same method.
When you use static domain IDs, you must configure a unique domain ID per
switch. Assigning static domain IDs is currently recommended as a best practice.
With dynamically assigned domain IDs, a switch is assigned an ID by an existing
switch as it joins the fabric. One switch in the fabric, called the principal switch,
carries this responsibility. The principal switch assigns the next unused ID out of
the 239 available numbers.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-22 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Principal Switch Election

Figure 5-20: Principal Switch Election

Principal switch election takes place at startup, within the first 10 seconds after
connections are enabled. The election criterion is simply a priority value
that you can configure. The switch with the highest priority wins the election and
becomes the principal switch. If there is a tie in priority values, the switch with the
lowest WWN wins the election.
The principal switch is responsible for assigning a local domain ID to all other
switches. During this process, a concept called a "desired ID" is supported. This
means that all FC switches can request a preferred ID. If available, the principal
switch assigns that value. If there is a conflict (perhaps because a static
configuration was applied to some switch), then the other FC switch will shut down
the link. This ensures that new switches will not disrupt an existing fabric.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


FC Interswitch Forwarding

Figure 5-21: FC Interswitch Forwarding

This section is focused on the forwarding of Fibre Channel traffic between
switches. Native FC flow control mechanisms are explored, along with bandwidth
aggregation capabilities, the FC routing table, and other available fabric services.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-24 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Learning Activity: FC Review


Refer to the figure. Write the letter pointing to the component in the figure next to
the appropriate component name listed below. Provide a brief description of each
component in the space provided.

 Initiator:

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________
 Responder:

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________
 N_Port:

_______________________________________________________________
 F_Port:

_______________________________________________________________
 E_Port:

_______________________________________________________________
 WWN:

Rev. 14.41 5-25

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

_______________________________________________________________

_______________________________________________________________
 FC_ID:

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________
 Domain ID:

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-26 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Learning Activity: Answers


 Initiator: (c) Typically a server that originates read/write requests, and can
identify logical disks or volumes with a LUN
 Responder: (h) A disk array target that receives initiator read/write requests.
The physical disk array can be logically divided into volumes or LUNs
 N_Port: (f) The interface on endpoint initiators and targets that connects to
the SAN fabric
 F_Port: (e) The interface on a switch that connects to endpoint initiators and
targets
 E_Port: (d) The interface on a switch that connects to another switch
 WWN: (a) A unique identifier for each device. Not used for addressing or frame
delivery; only used to identify systems independently from the FC_ID. Can be
used for zoning, to control which targets are visible to a server. On that target,
can be used for LUN masking, to control which LUNs are available to a
specific WWN
 FC_ID: (b) A 24-bit address used for frame source/destination. Assigned by the
fabric to nodes as they register via FLOGI
 Domain ID: (g) A unique number assigned to each switch; serves as the high-order
portion of the FC_ID for all devices connected to that switch.

Rev. 14.41 5-27

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

FC Flow Control Overview

Figure 5-22: FC Flow Control Overview

Fibre Channel provides a lossless network, as required by the SCSI protocol. This
is necessary due to the SCSI protocol’s lack of a robust recovery mechanism, as
previously discussed.
Native Fibre Channel uses a so-called buffer-to-buffer credit mechanism. This
means that during initial peer connection, the peer grants credits, which control
how many frames may be transmitted. Credits deplete as frames are transmitted,
and when all credits are depleted, transmission must cease.
The receiving peer normally sends continuous credit updates during a
communication session, as long as buffers are available. This is a very safe
mechanism, since transmitters may not send data unless the receiving peer
indicates that it is capable of processing inbound frames.
Compare this with our discussion of FCoE mechanisms in the previous chapter.
FCoE uses the DCBX protocol with PFC. PFC allows transmission until the peer
sends a PAUSE frame. FCoE assumes there is ample bandwidth, using PAUSE
frames to stop transmissions during the occasional overload. When the pause
frame expires, traffic may be sent again. So, PAUSE frames are normally not sent.
To summarize, native FC B2B assumes it should not transmit, unless it receives
credits from the receiver. The FCoE PFC mechanism assumes it is free to
transmit, unless it receives PAUSE frames.

5-28 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

FC Classes and Flow control

Figure 5-23: FC Classes and Flow control

Native FC defines three classes of service to differentiate traffic. Class 1,
established with a Start_of_Frame Connect Class 1 (SOFc1) frame, provides a
dedicated, guaranteed connection, which is best for sustained, high-throughput
sessions. Class 2 provides a connectionless service, appropriate when very short
messages are sent. Class 3 is similar to Class 2, but with a variation in flow
control. This is appropriate for real-time traffic broadcasts.
The figure summarizes the characteristics of BB and EE flow control. The different
classes leverage these mechanisms in different ways. Class 1 frames use EE flow
control (with one exception). Class 2 uses both BB and EE, while Class 3-based
sessions use BB flow control exclusively.
The BB flow control mechanism is used between an N_Port and an F_Port, and
between N_Ports that are directly connected in a point-to-point topology. Since BB
flow control lacks the overhead of sending Acknowledgement frames, it is well-
suited for time-sensitive transmissions. A special type of SOFc1 “initial connect”
frame also uses BB flow control.
The End-to-End (EE) flow control mechanism provides another option for ensuring
reliable communications between N_Ports. Class 1 and 2 FC traffic uses EE, in
which an ACK frame from the receiver assures the transmitter that the previous
frame has been successfully received. Therefore, the next frame can be sent.
If there are insufficient buffers to receive additional frames, a busy message is sent
to the transmitter. A corrupted or otherwise malformed frame will cause a Fabric
frame Reject (F_RJT) message to be sent to the transmitter.

NOTES

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-29

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

FC Class 2 Flow Control Scenario

Figure 5-24: FC Class 2 Flow Control Scenario

The figure depicts an FC transmission session for Class 2 traffic, which uses both
B2B flow control, based on buffer credits, and EE flow control, based on ACK
frames.
The server’s N_Port sends a data frame to the target, and decrements its
buffer-to-buffer credit count for the attached F_Port by one. The switch’s F_Port
receives this frame, and sends an R_RDY frame to the server’s N_Port, thus
replenishing its credit.
The switch then transmits this frame out its target-connected F_Port, decrementing
its credit count for that target by one. The disk subsystem’s N_Port receives this
data frame, and sends an R_RDY to the switch’s transmitting F_Port to increment
its buffer count back up by one. This disk responder also sends an EE ACK frame
through the fabric and on to the initiator, confirming successful receipt of the frame.
For B2B, the receiver continuously updates the transmitter by sending additional
credits, as long as it has buffers available. So the server may send only as much
as its credit allows. The storage system sends additional credits if it can handle the
load. This occurs at wire speed, so it is a very fast mechanism. The combination of
both credit counts between each F_Port-to-N_Port connection, and the EE ACK
mechanism provides a very robust, fail-safe transmission.

NOTES

_______________________________________________________________

_______________________________________________________________

5-30 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

ISL Bandwidth Aggregation

Figure 5-25: ISL Bandwidth Aggregation

To increase the available bandwidth between devices, physical inter-switch links
can be bundled into a single, logical medium. Different vendors use their own
terminology for this feature. Brocade refers to this as a trunked link, Cisco calls it a
port channel, and Comware calls this a SAN aggregation link.
Comware supports Layer 2 bridge aggregation and Layer 3 route aggregation. This
allows the bundling of multiple FC ports into a single high-bandwidth logical link.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-31

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

FC Forwarding

Figure 5-26: FC Forwarding

Each FC switch has an FC routing table. Each entry in this table contains the
destination FC_ID, a mask, and an outgoing port.
Each directly connected node has a host, or node, route entry in the table. Each
node has a unique 24-bit FC_ID, so a /24 table entry indicates a route to a
single device. This can be likened to a /32 route in an IP route table.
The figure shows a switch with domain ID 01, with two storage systems attached
and online. The first target host to connect has been assigned the FC_ID 0x010001,
and has a /24 full-match entry in the FC routing table. This is a directly
connected route, reachable on port FC1/0/1. The entry for the device with FC_ID
0x010002 is similarly recorded in the table.
During the initial registration of this device to the fabric, the FC_ID was assigned,
and the route was entered into the routing table. If the host goes offline, the
associated route table entry becomes unavailable.
This mechanism ensures that a switch knows how to reach local hosts. If a fabric
consists of a single switch, there is no need to update the routing table. All hosts
can find each other, being directly connected to the same switch, and host routes
are entered automatically, as targets and initiators connect.

5-32 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

FC Forwarding

Figure 5-27: FC Forwarding

When multiple switches are in use, their unique domain IDs describe connectivity
to remotely attached devices. The first octet of any node’s FC_ID is set to its
attached switch’s domain ID. All nodes attached to the switch with domain ID 0x02
have an FC_ID that begins with 0x02, and so on.
The switch with domain ID 0x01 needn’t have an entry for every host connected to
switch 0x02. It simply needs a single entry of 0x020000 /8, denoting the outgoing
interface attached to switch 0x02. In the figure, this is interface FC1/0/41.
Similar to IP, both static and dynamic routing can be utilized. Static routes are
manually configured on each switch by the administrator. Dynamic routing uses
the Fibre Channel Shortest Path First protocol (FSPF).
FSPF is a link-state routing protocol, like OSPF or IS-IS. This protocol is on by
default, so when two FC switches connect, they automatically exchange routes.
FSPF can support link costs to accommodate complex routing scenarios. Also, the
graceful restart feature is available to support In-Service Software Update (ISSU).
This allows Comware switch firmware to be upgraded without downtime.

NOTES

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-33

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Fabric Services

Figure 5-28: Fabric Services

This section will discuss Fabric login, Simple Name Service, state change
notification, and zoning.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-34 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Fabric Login (FLOGI)

Figure 5-29: Fabric Login (FLOGI)

When a node powers on, it activates its attached switch port link. Before
communicating, the node must login to the fabric. Unlike complex security login
mechanisms, which are based on usernames and passwords, this is a relatively
simple, yet formal registration process.
Both initiator and target nodes must register, enabling the fabric to learn about the
nodes. Thus, the fabric has no need to dynamically learn Layer 2 addresses, the
way TCP/IP systems must learn MAC addresses. This information is gleaned
as each device connects, through an explicit control-plane process called FLOGI.
During FLOGI, the Fibre Channel fabric assigns an FC_ID to the node. In the
example, the server is attached to a switch with domain ID 02, and it is the first
device to activate a port on this switch. Therefore, it gets an FC_ID of 0x020001.
The switch associates this address with the outgoing port that connects to the
server.
NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-35

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Simple Name Service Database

Figure 5-30: Simple Name Service Database

As previously discussed, FC_IDs are assigned on a first-come, first-served basis
and can change if devices lose connectivity. This makes the FC_ID less suitable
for use with Fibre Channel security services. This drives the need for a more
stable identifier, called the WWN. This is a number that remains constant,
regardless of reboots and outages. Since FC_IDs are still used for actual data
transmission, the fabric must maintain a map of each WWN and its associated
FC_ID.
This WWN-to-FC_ID mapping service is provided by the Simple Name Service
database. This database is exchanged between switches, so every switch in the
fabric has a copy. The database shown in the figure lists each WWN, along with its
associated FC_ID and node type. The node types are listed as either Target or
Initiator. When an initiator sends a query to this database, it asks to see all
possible target storage systems.
The Name Service can respond with the list of all FC_IDs. The server can then
send a logical request to each storage system to ask if any LUNs are available.
However, queries can also be filtered based on which device initiated a request.
This allows the fabric to show a different set of available targets to different initiator
hosts.
In this example, suppose the server at the top (WWN ending in 0xb6) is requesting
all possible targets. The fabric can filter its response for this initiator, perhaps only
revealing targets with a WWN that ends with 0x10 (only target with FC_ID
0x010002, in this example).
This is called soft zoning, since it does not actually enforce hardware security
rules; it merely filters the responses given to certain initiator queries. It may be
technically possible for a host to send a request to a known FC_ID, bypassing the
soft-zoning capability of the name service, and reach other storage systems.

5-36 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

VSANs – Virtual SAN/Fabrics

Figure 5-31: VSANs – Virtual SAN/Fabrics

VSANs, also known as virtual fabrics, provide the ability to implement multiple
fabrics on a single physical infrastructure. This is typically used when isolation is
required between fabrics. This isolation can be at different levels.
Data isolation ensures that no unintentional data transfer can occur between
VSANs. The physical links are shared among VSANs, but kept logically separate
through the use of VSAN tagging.
Control isolation provides independent instantiations of the fabric for each VSAN.
All Fibre Channel services are isolated for each VSAN. Each VSAN has its own
name service and zone service. It is therefore impossible to see information from
one fabric in another.
If an administrator relied solely on the soft zoning feature previously described, a
simple error in zone configuration could reveal classified targets to unauthorized
initiators. The creation of VSANs eliminates this issue, since all aspects of a VSAN
are isolated. An initiating host can only access targets in the same VSAN.
Fault isolation is also achieved in the data, control, and management planes, since
misconfigurations in one VSAN should not impact other VSANs.

NOTES

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-37

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

VSAN vs Physical SAN

Figure 5-32: VSAN vs Physical SAN

The figure compares separate physical SANs on the left to a VSAN solution on the
right. On the left, each of the three departments has its own storage infrastructure,
deployed as separate physical fabrics, denoted as red, blue, and green. Deploying
separate physical infrastructure for each department creates additional cost, and
increases rack space and power utilization due to the larger number of devices to
manage.
Instead, these systems could share a single physical infrastructure, as shown on
the right. This infrastructure can then be logically separated into VSANs. A
common storage pool can be shared among these VSANs. This reduces the
number of switches to be managed, thereby lowering costs for initial deployment,
rack space, power, and cooling.
Another benefit is that unused ports can be easily moved to an appropriate VSAN
without disrupting a production environment.

5-38 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

VSANs – Virtual SAN/Fabrics on Comware

Figure 5-33: VSANs – Virtual SAN/Fabrics on Comware

VSANs have historically been used for service isolation, but not necessarily for
redundancy. To accommodate redundancy, two physically separate fabrics (SAN A
and SAN B) are deployed. In the previous example, only SAN A is depicted, with
the three departments virtually separated via VSANs. To add redundancy, a
separate physical SAN B fabric could be deployed, with an identical VSAN
configuration. Each host can then be connected to both infrastructures for
redundancy.
Comware 7 switches can improve this scenario. With the Fibre Channel switch
functionality on Comware 7 devices, FC frames are moved through the internal
switch architecture using FCoE. This is because internally, Comware switches
always use Ethernet technology. This is true even for HP switches like the
Comware 5900 CP, which provides native FC ports. Since native FC traffic is
internally switched via FCoE, each Fibre Channel VSAN requires an associated
Ethernet transport VLAN.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-39

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

VSANs – Virtual SAN/Fabrics on Comware

Figure 5-34: VSANs – Virtual SAN/Fabrics on Comware

For a typical SAN A/SAN B concept, a Comware 5900CP-A and a Comware
5900CP-B are deployed, each with a single VSAN. The figure shows a 5900CP-Core1
and a 5900CP-Core2. Core1 hosts VSAN 11, while Core2 hosts VSAN 21. These
switches are not part of an IRF group, nor is there any other physical
interconnection between these two switches. Thus, the fabric separation is
maintained.
However, the 5900AF top-of-rack switches need to use IRF for redundancy. To
maintain a logical fabric separation, VSAN 11 is configured, and the network
administrator ensures that only physical ports on unit 1 are assigned to it.
Similarly, VSAN 21 is defined, and is only associated with physical interfaces from
unit 2 of the IRF. This ensures that neither VSAN 11 nor VSAN 21 traffic will cross
the IRF links, and the concept of physically separated fabrics is maintained.
Although they are managed and configured on the same IRF system, each VSAN
is separately processed by individual IRF members.
Along with the dedicated FC uplinks, the Top-of-rack switches also have a bridge
aggregation group configured, with physical uplinks to traditional data core
switches. These are the HP Comware model 12900-series switches indicated in
the figure.

5-40 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

VSANs - Tagging

Figure 5-35: VSANs - Tagging

VSAN tagging for an FC fabric is quite similar to VLAN tagging for Ethernet
switches. Some Ethernet switch ports can be configured as a member of a single,
untagged VLAN for endpoint connectivity, while others may be configured to use
802.1q tagging to support multiple VLANs for switch-to-switch links. Fibre Channel
also supports two types of tagging – Native FC tagging and FCoE tagging.
With native FC communications, Ethernet is not involved. In this case, FC frames
carry a dedicated VSAN tag inside the FC frame itself. A native FC port can be
an access port, connected to a node, and using native FC frames.
Connections to switches may need to support multiple VSANs. This is where a
trunk link is used; the VSAN ID is tagged inside the FC frame.
FCoE uses a transport VLAN, which has an 802.1q tag. Even when sending
normal FC frames, FCoE uses an 802.1q tag, so for FCoE there is no access port
type. The VLAN tag implicitly identifies the VSAN, which is why FCoE frames are
always tagged, even if only a single VSAN is permitted on the port.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Basic Configuration Steps

Figure 5-36: Basic Configuration Steps

The figure introduces the steps to configuring FC infrastructure. This starts with
configuring the switch working mode and FCoE operating mode. Then VSANs can
be defined, along with a transport VLAN for FCoE, bound to the VSAN.
Support for native Fibre Channel can be configured on the physical interface. The
FC port type is set, and the FC interface is assigned to a VSAN. Initially, a simple
default zone can be configured that permits everything.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-42 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 1: System-working-mode

Figure 5-37: Configuration Step 1: System-working-mode

The switch must be operating in advanced mode to have access to the FCoE
configuration syntax. This setting requires a system reboot to take effect. As
depicted in the figure, once the system working mode is set to advanced, this
configuration must be saved, and then a reboot command is issued.
After the reboot, the working mode is verified via the “display system-working-mode”
command.
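The following is a minimal sketch of this step, not a verified listing: the device name “5900CP-A” is illustrative, the exact keyword (advance versus advanced) may vary by software release, and lines beginning with # are annotations, not commands.

    <5900CP-A> system-view
    [5900CP-A] system-working-mode advance
    [5900CP-A] quit
    <5900CP-A> save
    <5900CP-A> reboot
    # After the reboot completes, verify the mode:
    <5900CP-A> display system-working-mode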

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-43

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 2: Define FCoE Operating


Mode

Figure 5-38: Configuration Step 2: Define FCoE Operating Mode

The FCoE operating mode depends on how the system is deployed. The operating
mode can be configured as a Fibre Channel Forwarder (FCF), an N_Port
Virtualization (NPV) switch, or a transit-mode switch.
Only one mode is supported per switch or IRF. Also, remember that this command
is only available after the switch has been configured to operate in advanced
mode, as previously described in Step 1.
For this discussion, an FCF is to be configured. NPV will be covered in a later
section. Once set, the FCoE operating mode can be verified with the “display
fcoe-mode” command, as shown in the figure above.
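As a hedged sketch of this step (the fcoe-mode keyword set is assumed from the mode names above, and the device name is illustrative):

    [5900CP-A] fcoe-mode fcf
    [5900CP-A] display fcoe-mode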

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-44 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 3: Define VSAN

Figure 5-39: Configuration Step 3: Define VSAN

The definition of a VSAN creates a virtual fabric. This virtual fabric can provide
complete isolation of services for different VSANs sharing the same physical
infrastructure, providing a logical fabric separation for IRF top-of-rack systems.
VSAN 1 is defined in the system by default, so any new Virtual FC or FC interfaces
are assigned to VSAN 1 by default.
In the figure, VSAN 10 is created from global configuration mode on the switch.
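As a brief sketch (device name illustrative; the VSAN view prompt shown is an assumption), creating the VSAN looks like this:

    [5900CP-A] vsan 10
    [5900CP-A-vsan10] quit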

Rev. 14.41 5-45

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 4: Transport VLAN and Bind


VSAN

Figure 5-40: Configuration Step 4: Transport VLAN and Bind VSAN

Now that a VSAN is defined, a VLAN must be created and dedicated for the
purpose of FCoE transport. No other hosts or Layer 2 functions are permitted for
this VLAN, which has a one-to-one relationship with the VSAN.
You cannot have one VLAN that services multiple VSANs, nor can you have one
VSAN that is serviced by multiple VLANs. If the intended design requires multiple
VSANs, then a VLAN must be defined for each one.
In the figure, VLAN 10 is defined from global configuration mode. This VLAN is
defined as being dedicated to servicing VSAN 10 with the “fcoe enable vsan 10”
command.
In this example, the VLAN and VSAN numbers match. Although matching numbers
can be a good idea to minimize confusion and ease documentation, it is not a
technical requirement. Any VLAN number can be configured to support any VSAN.
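A minimal sketch of this step, using the command named above (device name and prompts illustrative):

    [5900CP-A] vlan 10
    [5900CP-A-vlan10] fcoe enable vsan 10
    [5900CP-A-vlan10] quit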

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-46 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 5: Configure FC Interface

Figure 5-41: Configuration Step 5: Configure FC Interface

Fibre Channel interface functionality must be configured, since the HP 5900CP
switch supports converged ports. Each port can be a 1 Gbps or 10 Gbps Ethernet
port, depending on whether an SFP or SFP+ transceiver is installed. Alternatively,
the port could be a 4 Gbps or 8 Gbps native Fibre Channel port. Again, this
depends on the transceiver installed in a particular port.
In the configuration, the port can be configured as either an Ethernet or an FC
port. However, it is important to remember that the optic installed for that
port must match the intended configuration.
For example, if an SFP+ 10 Gbps Ethernet optic has been inserted, and that
interface is configured for FC, then the port will remain inoperative, in a down
state. Ethernet configurations require Ethernet optics, and FC configurations
require FC optics.

Note
16 Gbps FC optics can be installed in an HP 5900CP. However, this switch only
supports a maximum of 8 Gbps for FC, so the port will only operate at 8 Gbps.

HP has released converged optics that can support both 8 Gbps FC and 10 Gbps
Ethernet. If those are deployed, the administrative configuration determines the
operational status of the interface.
This example shows an interface initially operating as a 10Gbps interface. When
the “port-type fc” command is issued, this interface becomes an FC port.
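A short sketch of the conversion follows. The physical interface number is illustrative and the interface-view prompt is an assumption; after the “port-type fc” command the port is addressed as an FC interface (FC1/0/1 in the later examples).

    [5900CP-A] interface ten-gigabitethernet 1/0/1
    [5900CP-A-Ten-GigabitEthernet1/0/1] port-type fc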

Rev. 14.41 5-47

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 5: Validate Interface Status

Figure 5-42: Configuration Step 5: Validate Interface Status

The interface’s operational status can be verified with the “display interface brief”
command. In the example above, the interface is operating as a native FC port.
Interface FC1/0/1 is a member of VSAN 1 by default, and is currently in a
non-operational, or DOWN, state.

5-48 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 6: FC Interface Port Type


(1 of 2)

Figure 5-43: Configuration Step 6: FC Interface Port Type (1 of 2)

Now that the interface has been configured to operate as an FC port, its FC port
type can be configured. This includes being configured as one of the following:
 E_Port: Expansion port – connects to another switch’s E_Port
 F_Port: Fabric port – connects to a node’s N_Port
 NP_Port: NPV (virtual) enabled port

In this example, interface FC1/0/1 is configured as an F_Port, since this port is to


be connected to a server or storage system’s N_Port.
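As a hedged sketch only: the exact Comware syntax for setting the FC port mode is assumed here to be “fc mode”, and may differ by release. This reflects the F_Port choice described in the example above.

    [5900CP-A] interface fc 1/0/1
    [5900CP-A-Fc1/0/1] fc mode f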

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-49

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 6: FC Interface Port Type


(2 of 2)

Figure 5-44: Configuration Step 6: FC Interface Port Type (2 of 2)

The command “display interface brief” can be used to validate the configuration. In
the example, interface FC1/0/1 is still in VSAN 1, but the mode column shows that
it is configured to operate as an F_Port.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-50 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 7: Assign FC Interface to


VSAN

Figure 5-45: Configuration Step 7: Assign FC Interface to VSAN

In this scenario, the port should be a member of VSAN 10. To configure this port to
no longer be a member of the default VSAN 1, the “port access vsan 10”
command is used.
Only native FC interfaces can be configured as an access port. FCoE Virtual Fibre
Channel (VFC) interfaces use a transport VLAN, which serves as a VSAN trunk
protocol.
Again, the “display interface brief” command validates that interface FC1/0/1 has
been configured as a member of VSAN 10.
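A minimal sketch of this step, using the commands named above (device name and prompts illustrative):

    [5900CP-A] interface fc 1/0/1
    [5900CP-A-Fc1/0/1] port access vsan 10
    [5900CP-A-Fc1/0/1] quit
    [5900CP-A] display interface brief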

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-51

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 8: Set Default Zone Permit

Figure 5-46: Configuration Step 8: Set Default Zone Permit

The default zoning configuration on a Comware switch denies everything. To
change this, a simple “zone default-zone permit” command is used. This command
is analogous to the “permit any” statement in an access list. The topic of zones and
zoning will be covered in a later section of this module.
For verification, the “display zone status vsan 10” command reveals that the
default zone has been configured to permit all access.
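A minimal sketch of this step, assuming the default-zone command is entered in VSAN view (device name and prompts illustrative):

    [5900CP-A] vsan 10
    [5900CP-A-vsan10] zone default-zone permit
    [5900CP-A-vsan10] quit
    [5900CP-A] display zone status vsan 10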

5-52 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 9: Status Review

Figure 5-47: Configuration Step 9: Status Review

Upon completion of a basic configuration, several commands are available to
validate Fibre Channel operation and configuration:
 Display interface brief
 Display interface FC packet pause (or drops)
 Display transceiver info
 Display vsan port-member
 Display vsan login
 Display vsan name service

You will explore these commands during lab activities.

Rev. 14.41 5-53

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Optional Debugging

Figure 5-48: Optional Debugging

Optionally, you may use the debug commands shown above.


 Debug FC interface
 Debug FLOGI
 Debug FDISC

5-54 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Lab Activity 5.1: Lab Topology

Figure 5-49: Lab Activity 5.1: Lab Topology

In this lab, the classroom core HP 5900CPs will be configured with native FC
interfaces to the native FC storage system.
In the current topology, four isolated storage systems are available, which means
that two student PODs will work as a team on a single SAN system.
Since the SAN topology consists of a SAN A/SAN B design (two SAN fabrics to
provide isolation and redundancy), each classroom core will have a local SAN
configuration.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-55

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Lab Activity Preview: FC/FCoE - Native FC


Setup

Figure 5-50: Lab Activity Preview: FC/FCoE - Native FC Setup

In this lab, you will configure a native FC SAN and the native SAN interfaces.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-56 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Lab Activity 5.1 Debrief


Use the space below to record your key insights and challenges from Lab
Activity 5.1.

Debrief for Lab Activity 5.1


Challenges Key Insights

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-57

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

FCoE Overview

Figure 5-51: FCoE Overview

The figure introduces topics to be covered in this section. This includes the
following:
 Consolidation
 Terminology
 CNA
 FCoE Stack compared to OSI/FC Stack
 FCoE Frame Format
 FIP: Fibre Channel Initialization Protocol
 FPMA: Fabric Provided MAC-Address

5-58 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

FCoE I/O Consolidation

Figure 5-52: FCoE I/O Consolidation

The main goal of FCoE is to achieve network consolidation. A typical deployment
could fill a rack with a large number of devices, cables, and adapters, due to the
separate infrastructure for LAN and SAN.
The figure highlights the savings in equipment:
 50% fewer switches in each server rack—Only two CN Top-of-Rack switches,
compared with four (two LAN and two FC) switches per rack with separate
Ethernet and FC switches
 50% fewer adapters per server
 75% fewer cable connections

Rev. 14.41 5-59

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

FCoE Goals

Figure 5-53: FCoE Goals

FCoE is tasked with maintaining the latency, security, and traffic management
attributes of Fibre Channel, while integrating with and preserving investment in
existing FC environments. The protocol must not be disruptive to any standard
Fibre Channel functions and capabilities. FC must continue to function as always,
using the same FC_IDs, WWNs, zoning, FSPF routing, and so on.
Interoperability between native Fibre Channel and FCoE based systems should be
very easy. This is because there is no device required to “convert” between native
FC and some other protocol. The native FC functionality is simply encapsulated in
an Ethernet frame, and then decapsulated prior to transmission to a native FC
device. This ability to integrate Ethernet and native FC without need for a separate
protocol simplifies the deployment of storage environments.
As explained above, all the capabilities of native FC technology are extended over
Ethernet systems through the use FCoE. Toward this end, it is vital to ensure that
Ethernet provides the lossless transmission that Fibre Channel requires. This
capability was previously described in the module about DCBX and PFC.

5-60 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

FCoE Terminology

Figure 5-54: FCoE Terminology

The figure introduces the terminology surrounding FCoE concepts and
configuration.
 FCoE: Fibre Channel over Ethernet carries native FC frames inside a
standard Ethernet frame.
 CEE: Converged Enhanced Ethernet, also known as Data Center Bridging
(DCB), describes a suite of protocols and capabilities necessary to support FC
technology over Ethernet infrastructure.
 VFC: Virtual Fibre Channel Interfaces provide an FC abstraction layer over a
traditional Ethernet connection. This enables all of the traditional port types
supported on native Fibre Channel, including:
• VN_Port: This provides the virtual equivalent of an FC N_Port, for end
node connectivity.
• VF_Port: Provides the virtual equivalent of an FC F_Port, for switch fabric
connectivity.
• VE_Port: This is the virtual equivalent of an FC E_Port, for switch-to-
switch links.
 ENode: This is an FCoE device that supports FCoE VN_Ports. This includes
both server initiators and storage system targets.
 FCF: An FC Forwarder is a device that provides FC fabric services with
Ethernet connectivity.
 FIP: The Fibre Channel Initialization Protocol acts as a type of helper protocol
for initial link setup.

Rev. 14.41 5-61

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Converged Network Adapters (CNA)

Figure 5-55: Converged Network Adapters (CNA)

Each FCoE-capable server must be equipped with a Converged Network Adapter
(CNA). As the name implies, this adapter supports both Ethernet services for
standard LAN connectivity, and Fibre Channel services for SAN fabric
connections.
Traditional HBAs only support native FC, while NICs only support Ethernet. The
CNA converges these two functions into a single device. This adapter presents
itself to the server OS as two separate devices – an Ethernet NIC and an HBA.
Therefore, the OS is not aware that convergence is taking place, continuing to
perceive two separate fabrics for LAN and SAN. This aspect of CNAs makes it
easy to migrate from separate legacy systems to a converged solution.

5-62 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

HP CNA Products

Figure 5-56: HP CNA Products

The figure shows some of the CNA products that HP supports. New products and
capabilities are being added by HP on a regular basis. Please check current
documentation.

Rev. 14.41 5-63

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

FCoE Server Access

Figure 5-57: FCoE Server Access

A hardware-based CNA provides FCoE capability that is integrated with traditional
Ethernet services. Although both services are provided by a single device, the
server OS perceives a separate HBA and NIC. This not only makes network
convergence transparent to the server OS, but also to server administrators, who
can continue to configure these “separate” adapters as always.
The adapter will leverage FIP and the DCB suite (DCBX, PFC, and ETS) to
facilitate SAN fabric connectivity. These protocol suites run independently of each
other. If you configure FCoE to use VLAN 10, it is the network administrator’s
responsibility to ensure that VLAN 10 is assigned the correct 802.1p mapping, and
that PFC and ETS are properly deployed to provide lossless service for VLAN 10.
In the figure, the server has two CNAs installed. For Fibre Channel, CNA-Port1 is
connected and uses FLOGI to acquire an FC_ID in VSAN 11, while CNA-Port2
performs FLOGI and gets a unique FC_ID in VSAN 21.
Meanwhile, the Ethernet functionality of the two CNAs can be aggregated in a
traditional NIC teaming configuration to enhance bandwidth utilization and
redundancy for LAN communications. The network administrator may choose how
and whether to team these NICs, just as they did with separate HBAs and NICs.

5-64 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

FCoE Stack Overview

Figure 5-58: FCoE Stack Overview

The figure compares the classic OSI model’s protocol concepts with FCoE and
native Fibre Channel. Notice that Upper Layer Protocol (ULP) services are
identical between FCoE and native FC, as are FC layers 2 through 4.
It is only the physical and data link layer protocols of FC that have been replaced
by Ethernet. An FCoE mapping layer presents itself to FC-2 as a native FC
interface stack. It encapsulates the FC frames in Ethernet for transmissions, and
decapsulates it before passing received traffic up through the stack.

Rev. 14.41 5-65

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

FCoE Encapsulation

Figure 5-59: FCoE Encapsulation

The native FC frame is encapsulated in a typical Ethernet frame. In the figure, you
can see the standard Ethernet source and destination MAC addresses, the Ether
Type field, the IEEE 802.1Q tag, and the 4-bit version field. FCoE data frames
have an Ether Type of 0x8906.
The standard Ethernet FCS or Frame Check Sequence serves as a frame trailer,
and aids in detection of corrupted frames.
Contained inside this Ethernet frame is a native, unmodified FC frame.

5-66 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

FIP: FC Initialization Protocol

Figure 5-60: FIP: FC Initialization Protocol

With native FC connections there is a direct physical link between HBA and fabric,
so when the physical link is down, the FC link is of course down.
FCoE uses virtual links. While the Ethernet link may be up, a logical FC connection
must be established and maintained between the CNA and the FCF switch.
For example, the Ethernet link and all associated physical connections may be up,
but the Virtual FC interface could be manually shut down. FIP notifies the peer of
this condition, ensuring that it understands the lack of connectivity. FIP provides a
mechanism to accurately reflect the logical status of the FCoE connectivity.
Other functions provided by FIP include:
 FCoE VLAN Discovery: Ensures that the CNA learns from the FCF which
802.1q VLAN tag it should use.
 FCF Discovery: Enables the CNA to find its attached FCF
 FLOGI: Fabric Login must occur for any FC device to acquire an FC_ID and
communicate over the fabric. Since this is FCoE, a fabric MAC address will
also be allocated, called the FPMA.
 FPMA: The Fabric Provided MAC-Address enables the FCoE transmissions
 Link Keep-alive: With the above functionality complete, FCoE communications
are now possible. That status of the link is continuously validated with link
keep-alive messages.

Rev. 14.41 5-67

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

FIP: VLAN and FCF Discovery

Figure 5-61: FIP: VLAN and FCF Discovery

FCoE data frames are denoted by Ethertype 0x8906, while FIP frames use an
Ethertype value of 0x8914. Initial FIP frames are sent using the BIA MAC address
from the Ethernet portion of the CNA. This is the same as any typical Ethernet
frame would be sent.
The first step of the FIP protocol is to perform VLAN discovery. Since the VLAN
has yet to be discovered, the appropriate 802.1q tag is unknown. Therefore, these
discovery frames are sent as untagged, native Ethernet frames. The FCF
recognizes VLAN discovery messages, and responds with the FCoE VLAN ID, as
configured for that interface. VLAN discoveries are the only FIP frames that are
sent untagged. All other frames are tagged per the FCF VLAN discovery response.
The next step is to perform FCF discovery, in which the node sends a Discovery
Solicitation message. The FCF responds with a Discovery Advertisement, which
contains an FCF Priority value. If multiple forwarders exist on the same VLAN,
they would all respond to the solicitation message. The node selects the FCF with
the highest priority, which is the lowest numerical value.

5-68 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

FIP: FLOGI and FPMA

Figure 5-62: FIP: FLOGI and FPMA

Once the FCF is selected, the host must login to that system, using FLOGI. You
may recall from a previous section that FLOGI results in the assignment of an
FC_ID. For FCoE, an FPMA is also assigned. The node’s CNA will use the FPMA
as its source MAC address for all FCoE frames (Ethertype 0x8906). Prior to
FLOGI, the CNA's BIA is used.
The FPMA is constructed of two 24-bit pieces – the Fibre Channel MAC Address
Prefix (FC-MAP) and the FC_ID. The default FPMA prefix is 0x0EFC00, but can be
manually configured. The second portion of the FPMA is equal to the assigned
FC_ID. Since this is unique per VLAN, there is typically little motivation to modify
the FC-MAP. The FPMA need only be unique within the VLAN, and the unique
FC_ID ensures this is the case. This is because there is a one-to-one relationship
between VLAN and VSAN.
The example shows an FPMA of 0x0EFC00010004. This was constructed from the
default FC-MAP, and an assigned FC_ID of 0x010004.
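As a quick worked construction of that example:

    FC-MAP (default)      : 0x0EFC00
    FC_ID (from FLOGI)    : 0x010004
    FPMA = FC-MAP + FC_ID : 0x0EFC00010004 (written as a MAC address: 0E-FC-00-01-00-04)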

Rev. 14.41 5-69

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

FCoE Design considerations

Figure 5-63: FCoE Design considerations

FCoE was designed to move data inside a single data center environment, and
was not designed for long-distance WAN communications. This is primarily due to
the timers involved in the PFC and PAUSE functions of DCBX, and associated
buffer calculations.

5-70 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Steps for FCoE Host Access

Figure 5-64: Configuration Steps for FCoE Host Access

The prerequisites are similar to previous configurations. The server and storage
nodes must be configured to support appropriate DCBX functionality. Switches
must support FCF mode and have a VSAN defined, along with unique Domain ID
assignment and a default zone permit, as a minimum.

Rev. 14.41 5-71

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Steps for FCoE Host Access

Figure 5-65: Configuration Steps for FCoE Host Access

The figure introduces the steps to configure FCoE host access. These steps are
detailed in the following pages.

5-72 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 1: Create Virtual FC


Interface

Figure 5-66: Configuration Step 1: Create Virtual FC Interface

A new virtual FC interface is created in the top portion of the example. The second
example reveals how to verify this configuration. You can see that VFC 2 has been
created.
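A brief sketch of this step (the VFC number 2 matches the example above; the device name is illustrative, and the verification command is the one used later in this section):

    [5900CP-A] interface vfc 2
    [5900CP-A-Vfc2] quit
    [5900CP-A] display interface vfc brief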

Rev. 14.41 5-73

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 2: VFC FC Port Type

Figure 5-67: Configuration Step 2: VFC FC Port Type

The switch port in this scenario is intended to connect to some end host.
Therefore, it must be configured as an F_Port. The examples in the figure reveal
the syntax to configure and verify this requirement.
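As a hedged sketch only, again assuming the port mode is set with “fc mode” (exact syntax may differ by release):

    [5900CP-A] interface vfc 2
    [5900CP-A-Vfc2] fc mode f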

5-74 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 3: Bind VFC to Interface


(1 of 2)

Figure 5-68: Configuration Step 3: Bind VFC to Interface (1 of 2)

To function, the previously configured virtual interface must be associated with a
physical interface. This could be either a single physical interface, or a logical
link-aggregation interface.
When a single VFC is bound to an Ethernet link aggregation, the FCoE traffic will
be distributed over the link-aggregation member ports using the traditional hash
mechanisms. Since the FCoE frame does not contain an IP header, the hashing
algorithm will use the source/destination Ethernet MAC addresses for the calculation.
Since all communication between FCoE devices uses stable MAC addresses, the
communication between any two FCoE devices is guaranteed to use a single link
of the link aggregation. This ensures that link aggregation will not introduce
problems such as out-of-order delivery.
Out-of-order delivery is not an issue for traditional IP networks. Multiple packets of
a single flow can be sent over different physical links or paths with different
latency. Sequencing information contained in the headers allows packets to be
reassembled in the proper order, regardless of the order in which they arrived.
Binding a virtual Fibre Channel interface to a logical aggregation interface is not
applicable for a server-facing port group. This is because servers typically have
two physical CNA adapters. The reason for having two Fibre Channel connections
from a server is to connect each one to a separate fabric, and so they cannot be
aggregated.
The use of bridge aggregation is appropriate for inter-switch links, thus providing
ample bandwidth for the multiple host communications traversing these links.
In the figure, VFC 2 is bound to physical interface Ten-GigabitEthernet 1/0/2.
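A short sketch of the binding shown in the figure; the “bind interface” keyword is an assumption, and the verification command is the one described on the next page:

    [5900CP-A] interface vfc 2
    [5900CP-A-Vfc2] bind interface ten-gigabitethernet 1/0/2
    [5900CP-A-Vfc2] quit
    [5900CP-A] display interface vfc brief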

Rev. 14.41 5-75

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 3: Bind VFC to Interface


(2 of 2)

Figure 5-69: Configuration Step 3: Bind VFC to Interface (2 of 2)

The binding can be verified with the “display interface vfc brief” command. The
example reveals that VFC2 is bound to interface XGE1/0/2.

5-76 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 4: Assign VFC Interface to


VSAN

Figure 5-70: Configuration Step 4: Assign VFC Interface to VSAN

As with native FC interfaces, the virtual interface must be assigned to a VSAN.
You have learned that the VSAN traffic is transported over a VLAN, and that FCoE
uses 802.1q VLAN tagging. Since the tagging allows for multiple VLANs, multiple
VSANs are implicitly supported. To achieve this functionality, the FCoE VFC
interface must be configured as a VSAN trunk port, as in the example above.
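A minimal sketch of this step; the “port trunk vsan” keyword is assumed from the trunk-port behavior described above, and the device name is illustrative:

    [5900CP-A] interface vfc 2
    [5900CP-A-Vfc2] port trunk vsan 10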

Rev. 14.41 5-77

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 5: Physical Interface VLAN Assignment

Figure 5-71: Configuration Step 5: Physical Interface VLAN Assignment

The virtual interface has been configured to use VSAN 10, which is using VLAN 10
as a transport. The virtual interface has been bound to a physical interface. This
physical interface must therefore be configured to support VLAN 10.
The example shown completes this scenario by configuring the physical interface
to be a trunk port that allows VLAN 10.
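A minimal sketch of this physical-interface step, assuming VLAN 10 carries the FCoE traffic as described above, might be:

    system-view
    interface ten-gigabitethernet 1/0/2
     port link-type trunk
     port trunk permit vlan 10
     quit

This covers only the trunk configuration shown in the figure; the VLAN-to-VSAN mapping itself is assumed to already be in place.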

5-78 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Lab Activity 5.2: Lab Topology

Figure 5-72: Lab Activity 5.2: Lab Topology at the end of this lab

In this lab, you will configure a switch in FCF mode, configure an FC domain, and configure the Virtual FC interface for FCoE.
NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-79

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Lab Activity Preview: FC/FCoE - FCoE Server Access

Figure 5-73: Lab Activity Preview: FC/FCoE - FCoE Server Access

In this lab, you will configure a native FC SAN and the native SAN interfaces

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-80 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Lab Activity 5.2 Debrief


Use the space below to record your key insights and challenges from Lab
Activity 5.2.

Debrief for Lab Activity 5.2


Challenges Key Insights

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-81

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Fabric Expansion

Figure 5-74: Fabric Expansion

In previous sections you learned about FCoE connectivity between Top-of-Rack switches and server hosts. The example scenarios have revealed how to connect to isolated FC switches. In these scenarios, one switch is an FCF which connects to the server CNA, using FCoE, and another switch is a 5900CP that connects to storage systems via native Fibre Channel.
The focus now shifts to interconnecting the FCoE Top-of-Rack and native FC
switches using a fabric expansion. This involves understanding and configuring
E_Ports, the FSPF routing protocol, and validating the resultant Fibre Channel
routing table.

5-82 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Fabric Expansion: E_Port

Figure 5-75: Fabric Expansion: E_Port

For FC switch-to-switch connections, each side of the link is to be configured as an E_Port. Once configured, the switches discover each other, and fabric services are thus extended across multiple switches. The E_Port in an HP 5900 or other Comware device must be facing another Comware E_Port.
This includes Fabric link services, such as the FSPF routing protocol, which
populates the FC routing table by exchanging domain information.
Name service database exchange will also occur, providing a consistent, fabric-
wide name service. All targets and initiators can be aware of all devices,
regardless of physical switch connections.
For security purposes, zone database information is exchanged between switches. All switches have access to all zone information.
VSAN tagging support will also be configured consistently across all switches in
the fabric.
For FCoE, the E_Port is a virtual construct, and so is referred to as a VE_Port.
This has the same functionality as a native Fibre Channel E_Port. Unlike most
FCoE-based functions, DCBX protocol functionality is not required for VE_Ports.
Instead, the manual configuration of simple PFC commands is sufficient.
For Server CNAs, FIP fulfills all of the initial connection requirements. However,
FIP does not serve this purpose with switch-to-switch links. Instead, it is simply
assumed that the network administrator properly configures these connections.
The FIP keep-alive mechanism is used to determine ongoing link status.

Rev. 14.41 5-83

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Fabric Expansion: Routing Table Exchange

Figure 5-76: Fabric Expansion: Routing Table Exchange

Once the switches become aware that they are part of an expanded fabric, they
will exchange routing table information. These routing tables can be constructed
using static routes or via the Fibre Channel Shortest Path First protocol.
On Comware switches, FSPF is enabled by default, ensuring that routes to each switch's domain ID (the /8 prefix of the 24-bit FC_ID) are automatically exchanged.

5-84 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Steps for Fabric Expansion with FCoE

Figure 5-77: Configuration steps for Fabric Expansion with FCoE

Prior to configuring fabric expansion, you must manually configure the physical
interface to support PFC.
Then you can create a new VFC interface, set its port type to be an E_Port, and
verify the configuration.
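Because DCBX is not used on this inter-switch link, PFC must be configured by hand on the underlying Ethernet interface. The sketch below is one possible form, assuming the FCoE traffic is marked with 802.1p priority 3 (the conventional value); the exact PFC keywords may differ between Comware releases.

    system-view
    interface ten-gigabitethernet 1/0/4
     priority-flow-control enable
     priority-flow-control no-drop dot1p 3
     quit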

Rev. 14.41 5-85

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 1: Create New VFC Interface

Figure 5-78: Configuration Step 1: Create New VFC Interface

The first step is to prepare a new VFC interface. In this scenario, interface
Ten1/0/4 is to be connected to another switch, thereby serving an E_Port role. The
example shows this interface being configured as a trunk port, with VLAN 10
enabled to traverse the link.
Next, VFC 4 is created, bound to the interface, and made a member of VSAN 10.
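Taken together, a hedged sketch of this first step might look as follows; the interface, VLAN, VSAN, and VFC numbers come from the scenario in the figure, and the syntax should be confirmed against your release.

    system-view
    interface ten-gigabitethernet 1/0/4
     port link-type trunk
     port trunk permit vlan 10
     quit
    interface vfc 4
     bind interface ten-gigabitethernet 1/0/4
     port trunk vsan 10
     quit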

5-86 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 2: Set FC Port Type to E_Port

Figure 5-79: Configuration Step 2: Set FC Port Type to E_Port

The virtual port is configured as an E_Port, and then the configuration is validated.
The example shows that VFC 4 is created, defined as an E_Port, and bound to
interface XGE1/0/4.
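A minimal sketch for this step is shown below. The "fc mode e" keyword is an assumption that parallels the "fc mode f" command used for host-facing ports elsewhere in this module; verify it against the figure and your release documentation.

    system-view
    interface vfc 4
     fc mode e
     quit
    display interface vfc brief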

Rev. 14.41 5-87

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 3: Verify Status

Figure 5-80: Configuration Step 3: Verify Status

Several display commands are available to verify successful FIP peering, FSPF
route peering and routing table information, as well as the name service database.
You will explore these commands in the lab activities.

5-88 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Multi-path - Concepts

Figure 5-81: Multi-path - Concepts

Multi-path deployments connect a host system to both SAN A and SAN B for
redundancy. Storage systems that support path redundancy could have dual
adapters to connect to both SANs. The figure depicts a storage system with CTRL-
1 and CTRL-2, and each controller is connected to both SAN A and SAN B.
If these controllers were configured in an Active-Active mode, then the server
would see LUN-A four times. Using HBA-P1, connected to SAN A, the server
would see the target FC_IDs for CTRL-1 and 2, and each would show LUN-A. The
server would have a similar view via HBA-P2, via SAN B.
If the server is not aware that it is seeing the same disk four times, it could write different data to the same disk, leading to file system corruption. To prevent this issue, a Multi-Path I/O (MPIO) driver is required for host HBAs. The MPIO feature ensures that each of the four paths is identified and recognized as a separate connection to the same LUN. If one path fails, MPIO automatically switches to a different path, enabling continuous service in the face of hardware or connection failures.
MPIO also makes load-sharing options available. Various algorithms could be
used to split the load among different paths, some based on connections, some
based on perceived load. Load balancing may require special load-balancing
software installations on the server, and must be configured by the server
administrator. The fabric has no control over these load-sharing functions.

Rev. 14.41 5-89

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Multi-path – Automatic Failover

Figure 5-82: Multi-path – Automatic Failover

MPIO facilitates automatic failover functionality. In the figure, the active link
between HBA-P1 and the SAN A switch has failed. The MPIO driver will
immediately detect this failure, and use HBA-P2 for continued service.
This failover feature is transparent to the server, and no special fabric configuration
is required to support this feature. The network administrator must simply ensure
that both fabrics support the same services and connections. If a certain storage
target was only connected to SAN A, there is obviously no failover capability
available to this target via SAN B.
A similar requirement relates to fabric zone configuration, which must be configured identically on both fabric A and fabric B. If Fabric B’s zone configuration filters server visibility to a target, failover functionality is broken.

5-90 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Lab Activity 5.3: Lab Topology


[Topology diagram – recoverable labels: FC storage system; VSAN 3ZZ1 – VLAN 3ZZ1 and VSAN 3ZZ2 – VLAN 3ZZ2; ClassCore1 (5900CP-1) and ClassCore2 (5900CP-2) with ports FC1/0/YY; POD ODD (01/03/05/07/09/11) and POD EVEN (02/04/06/08/10/12); core-facing ports T1/0/23 and T1/0/24 with VFC 1YA/1YB and VFC 2YA/2YB; POD switches 5920-1 and 5920-2 with server-facing ports T1/0/13 and VFC 113/VFC 213; server ports CNA-P1 and CNA-P2.]

Figure 5-83: Lab Activity 5.3: Lab Topology at the end of this lab

In this lab, the local POD HP 5900 switches will be configured with FCoE
interfaces to the Classroom Core HP 5900CP switches.
In the first part of the lab, the Classroom Core1 and Classroom Core2 will be
configured.
In the second part of the lab, each student will configure the local POD 5900-1
uplink to the Classroom Core1 and the local POD 5900-2 uplink to Classroom
Core2.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-91

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Lab Activity Preview: FC/FCoE - FC Fabric Extension

Figure 5-84: Lab Activity Preview: FC/FCoE - FC Fabric Extension

In this lab, you will configure a native FC SAN and the native SAN interfaces

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-92 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Lab Activity 5.3 Debrief


Use the space below to record your key insights and challenges from Lab
Activity 5.3.

Debrief for Lab Activity 5.3


Challenges Key Insights

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-93

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Fabric Zoning

Figure 5-85: Fabric Zoning

Fabric zoning provides for access restrictions inside a VSAN, and so is configured separately for each VSAN. This zoning configuration controls which nodes may communicate with each other.
The objective is to ensure that host initiators can only discover intended targets.
This control is implemented as a set of permit and deny statements, similar to a
TCP/IP-based ACL.
The figure shows two storage systems. One is a Tier-1 production system for ESX
hosts, named “3Par”. The other is a Tier-2 system named “MSA”. The server
named ESX-1 is placed in a zone named ESX, along with the 3Par storage
system. The archive server is placed into the archive zone with the MSA storage
system.
Zones can be configured such that only devices in the same zone may discover
each other. Although all systems share the same fabric, their access scope is
limited by zoning. It is quite easy for the network administrator to modify this
behavior at will. Existing servers can be granted additional access, or have stricter
filtering controls applied, and new servers and zones can be added or modified.

5-94 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Fabric Zoning Concepts

Figure 5-86: Fabric Zoning Concepts

A zone member simply refers to a node, either by FC_ID or the port WWN
(pWWN). For reasons soon to be described, it is recommended that the actual
FC_ID or pWWN be abstracted in the zone configuration through the use of Zone
aliases.
Zones are defined in order to group members together. Traffic between all
members of the same zone is allowed. It is often best to avoid having a large
number of members in a single zone, as this will impact the number of access rules
to be created. Instead, consider creating small zones for point-to-point
connections. It is also recommended to have only one initiator per zone. For
example, a zone may be created to allow host ESX-1 to access the 3Par storage
target. A second zone could be created to allow host ESX-2 to access 3Par, and
so on. This is often preferable to creating a single zone with several server and
storage system members.
With two hosts and a target in the same zone, the switch needs a rule to permit ESX-1 to ESX-2 and to 3Par, rules in the other direction from 3Par to ESX-1, along with rules from ESX-2 to ESX-1 and 3Par, and back. Because each member must be allowed to see every other member, the number of rules grows roughly with the square of the membership, so a zone with 10 members requires on the order of 90 directional rules. Creating zones with only two members is recommended, since it preserves hardware resources and reduces configuration effort.
Defined zones can be grouped into a zone set. The zone database supports
multiple zone sets, but only one zone set can be active at any time. The zone
database is distributed to all FC switches, and can be configured to share all zone
sets, or only the active zone set.

Rev. 14.41 5-95

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Zone Members

Figure 5-87: Zone Members

Zone members can be identified based on FC_ID or WWN. Remember that an FC_ID is dynamically assigned, and can change over time. This lack of permanence makes the FC_ID a less reliable identifier for security purposes. However, switches also support static FC_ID assignment, thereby eliminating this concern.
FC_ID-based zoning is often referred to as “hard” zoning, since it is enforced at
the hardware level. This makes it a more secure method of zoning, especially in
conjunction with fabrics that contain untrustworthy nodes, or nodes not under your
direct administrative control.
WWN-based zoning is often called soft zoning, since it is enforced by the name
server. When servers query the name service, zoning can filter the response, so
that initiators will only learn about authorized targets.

5-96 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Zone Members

Figure 5-88: Zone Members

Since FC_ID can change, WWN-based zoning is considered a more stable method of access control. For example, if a VM is configured with a virtual HBA (vHBA), giving it direct Fibre Channel access, the VM will have its own FC_ID and WWN. If this VM is moved to a different host, its FC_ID would change, but the WWN is maintained, and the VM would therefore have consistent SAN access.
Zone Aliases are logical names that the administrator can assign, and to which the
host's FC_ID or WWN can be bound. If a CNA or HBA must be replaced, only the
zone alias configuration need be updated. The rest of your zone configuration
remains valid, since it only references the alias.
Zone Aliases also ease the process of copying zone configuration from SAN-A to SAN-B. The WWNs are unique between SAN A and B, but since your zone configuration only references aliases, it can simply be copied from SAN A to SAN B.

Rev. 14.41 5-97

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Zone Enforcement

Figure 5-89: Zone Enforcement

Hard zoning is the default zoning method for Comware devices. This means
permitted source and destination address information is programmed at the ASIC
level, creating a hardware-enforced ACL, permitting and denying traffic based on
information in the transmitted frames.
Since ASICs have a limited number of resources, an overly large zone set may
force the switch to use soft zoning. This is especially true for zones that have been
configured with many members. The switch from hard to soft zoning will occur
automatically, when hardware resource limits have been reached.
The switch to soft zoning means that filtering is no longer enforced at the packet
level. Instead, filtering occurs when the switch responds to name service requests.
For example, when the archive server queries the name service for targets, its
response only includes the MSA target. Since the 3Par storage target is not
included, the archive server is unaware of that target.
Access to that target is technically possible, but would require a relatively skilled
hacker to determine the FC_ID for 3Par, and reprogram the HBA to transmit
frames directly to this FC_ID, without use of standard discovery mechanisms.

5-98 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Zoning Configuration Prerequisites

Figure 5-90: Zoning Configuration Prerequisites

Prior to zone configuration, an operational VSAN must be configured, and the port
WWNs for hosts must be documented.

Rev. 14.41 5-99

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Steps for Zoning

Figure 5-91: Configuration Steps for Zoning

The figure introduces the steps to configure zoning, as detailed in the following
pages.

5-100 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 1: Prepare Zone Member Alias

Figure 5-92: Configuration Step 1: Prepare Zone Member Alias

Zones can directly reference FC_ID or pWWNs. However, zone member aliases
can provide ongoing administrative advantages. While multiple members can be
configured with the same alias, it is a best practice to configure a unique alias per
member to support more granular security controls in the future.
In the example shown above, zone aliases are analogous to objects used in an
ACL. Two arbitrary but administratively meaningful zone alias names are
configured, and the associated pWWNs are assigned to them.
All zone configuration is VSAN specific. You must deploy separate zone
configurations for each one. However, with consistent zone aliases, you can simply
copy and paste the rest of your zone configuration among VSANs.
In other words, the configuration indicated in the figure above is the only portion of the zoning deployment that will be unique between VSANs. All of the zoning configuration described in the following pages can simply be configured once, on VSAN 10, and then copied and pasted to your other VSAN.
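As an illustration only, the alias preparation for VSAN 10 might resemble the sketch below. The alias names and pWWN values are hypothetical placeholders (the line beginning with # is an annotation, not a command); substitute the documented pWWNs of your own host and storage ports, and verify the zone-alias syntax against your Comware release.

    system-view
    vsan 10
     # placeholder pWWNs - replace with your documented values
     zone-alias name esx1
      member pwwn 21:00:00:24:ff:0f:10:01
      quit
     zone-alias name 3par1
      member pwwn 21:00:00:24:ff:0f:20:01
      quit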

Rev. 14.41 5-101

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 2: Define Zones

Figure 5-93: Configuration Step 2: Define Zones

Zone definitions are analogous to individual lines (ACE’s) in an access list. Zone
members are allowed to communicate with each other.
In the example above, a zone named esx1-3par1 is created and members are
specified, based on the aliases created previously. Since aliases were used, this
and all remaining zone configuration syntax can simply be copied and pasted into
the other VSAN. Also, this example follows the previously mentioned best-practice
of creating small, point-to-point zones, to ensure ASIC resources are not unduly
taxed.
Another example is also shown, to reveal the syntax used to base zone
membership on FC_IDs and pWWNs. This is also shown to point out that mixing
WWN and FCID is not considered a best practice, and should be avoided.
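Continuing the illustration, the point-to-point zone described above could be defined as in the sketch below; it assumes the aliases from the previous step exist and that members are referenced with a zone-alias keyword, which should be verified for your release.

    system-view
    vsan 10
     zone name esx1-3par1
      member zone-alias esx1
      member zone-alias 3par1
      quit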

5-102 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 3: Define a Zone Set

Figure 5-94: Configuration Step 3: Define a Zone Set

Now that zones have been defined, they can be grouped together into a zone set.
Similar to how an ACL groups individual ACEs into an applicable entity, a zone set groups zones together into a single entity.
The example shows a set named Zoneset1 being created, with the previously
defined zones specified as members.
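A corresponding sketch for this step, reusing the names from the preceding examples, might be:

    system-view
    vsan 10
     zoneset name Zoneset1
      member esx1-3par1
      quit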

Rev. 14.41 5-103

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 4: Distribute and Activate Zone Set

Figure 5-95: Configuration Step 4: Distribute and Activate Zone Set

Continuing with the access list analogy, an ACL groups individual ACEs into an entity that can then be applied to an interface. The defined zone set similarly collects zones together into an entity, which can be distributed and activated in the fabric.
This entity will indeed be distributed to all switches in the fabric, including both
native Fibre Channel and FCoE-based switches. Recall that while multiple zone
sets can exist in the database, only one can be active in the fabric. As the network
administrator, you can configure the distribution to include all zone sets, or only the
active zone set.
The figure shows how to configure the distribution of all zone sets, by using the “full” option, and then how to configure which zone set is to be activated.
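One possible form of this step is sketched below; the distribute and activate keywords are assumptions based on the “full” option and the activation behavior described above, so confirm them against your release documentation.

    system-view
    vsan 10
     zoneset distribute full
     zoneset activate name Zoneset1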

5-104 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 5: Verify

Figure 5-96: Configuration Step 5: Verify

You will explore several validation commands during lab activities. This includes
the following:
 display zoneset vsan
 display zone name esx1-3par1
 display zone-alias
 display zone member fcid 010001
 display zoneset active vsan 10

Rev. 14.41 5-105

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

NPV – NPIV Overview

Figure 5-97: NPV – NPIV Overview

This section provides insight into N_Port Virtualization (NPV) and N_Port ID Virtualization (NPIV). Terms will be defined and related concepts will be explored. You will then learn how these features can improve multi-vendor interoperability, and how to configure the N_Port Virtualization role on a Fibre Channel device.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-106 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Server Virtualization with NPIV

Figure 5-98: Server Virtualization with NPIV

The goal of N_Port ID Virtualization is to enable hypervisors such as VMware ESX or Microsoft Hyper-V to extend physical HBA functionality to Virtual Machines (VMs).
The figure shows VMWare ESX server 1 with a physical HBA (pHBA). This
physical HBA will perform a traditional FLOGI to the fabric, and so the physical
ESX server gains access to LUNs.
For a VM deployment, a virtual HBA is added to each VM’s device list. On the
storage system, a LUN is defined for this virtual HBA to access. For example, the
VM might be a Microsoft Exchange Mail server, able to directly access the SAN
fabric through the virtual HBA, and use the defined LUN to store and retrieve data.
Multiple VMs run on one physical host, and each VM requires SAN access. This
means that each VM must perform an FLOGI to the fabric, and be assigned a
unique FC_ID and WWN. Since a single physical server is hosting multiple VMs,
the FC fabric perceives a single physical port performing multiple logins, with
multiple addresses assigned.
The FC switch fabric must have the ability to support this scenario, in the form of a
feature called N_Port ID Virtualization. Support for NPIV is enabled by default on
the Comware Fibre Channel switches. Both the ESX host and the physical HBA
must also support NPIV, since VMs are not aware of this concept.
The VM’s virtual HBA is simply performing a traditional N_Port role, using standard
FLOGI communications. Inside the ESX host, a virtual port is created towards
each VM. The example in the figure depicts two VMs deployed in a single ESX
host, so virtual ports 1 and 2 are created. Each virtual port operates as an F_Port. As in physical fabrics, the VM’s virtual N_Port connects to this virtual F_Port.
However, ESX hosts are not actually switches, and so are not capable of
processing the virtual HBA’s FLOGI request. This request is forwarded by the
physical host to its upstream switch connection. The VM perceives that it is
communicating with the virtual F_Port for FLOGI, while the physical server actually
forwards this on to the physical switch.
Rev. 14.41 5-107

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

With NPIV, the host server’s physical HBA performs FLOGI and receives the first
available FC_ID. The physical HBA will then proxy the VM’s virtual HBA login
toward the upstream FC switch. The VM’s virtual HBA receives the next available
FC_ID.
For data forwarding, it is important to understand that all storage traffic must leave the physical HBA. For example, if VM 1 is running FC target software, it can operate as a disk storage system and accept incoming connections. VM 2 is configured as a typical server, and so could use VM 1 as a target. However, this traffic cannot stay inside the ESX environment, because the ESX host is not a Fibre Channel switch. It has no knowledge of FC routing and zoning.
This is why the physical ESX host must always forward traffic upstream to an
actual switch, which can route the traffic to an appropriate destination. This could
be back over the same physical interface, in this example.
This is a very unlikely scenario because most VMs are used as initiating hosts. Still, the possibility does exist, and the scenario serves to highlight the relationship between virtual and physical components.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-108 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

FC Switch with NPV Mode

Figure 5-99: FC Switch with NPV Mode

A Fibre Channel switch can be configured to operate in NPV mode, to take advantage of the NPIV concept. Essentially, this moves the functionality of ESX internal virtual ports (described previously) out of the virtual realm toward the ESX physical server and its directly attached physical switch ports.
The NPV mode switch has an uplink to another physical switch, assigned a domain ID of 0x01 in this example. This uplink is configured as an N_Port Virtualization port (NP_Port), which indicates that it will proxy FLOGI requests to the
upstream switch.
The downlink ports connected to hosts are configured as F_Ports, since the
attached ESX hosts connect with traditional HBA N_Ports. Since no fabric services
are provided by this switch, all fabric service requests received from the ESX hosts
are proxied to the upstream fibre channel switch at Domain ID 0x01. All FLOGI
sessions are proxied to this upstream switch, which will assign FC_IDs.
In the figure, the NPV switch logs into switch 0x01, receiving the first available
FC_ID of 0x010001. Assuming the ESX servers are the next to login, they will be
assigned FC_IDs of 0x010002 and 0x010003.
When one of the physical servers performs a name service lookup, this is also
proxied through the NPV switch, on to FC switch 0x01.
Also, the same data forwarding rules apply as in the previous example. All traffic must exit the NPV switch. If host ESX 1 requires a storage connection to ESX 2, this request must be forwarded upstream to switch 0x01, which will then send that request out the same physical interface, back through the NPV switch, and on to target 0x010003. As before, this is an unlikely scenario, since hosts are typically organized behind the NPV switch, while storage systems would be connected to the Fibre Channel switch.

Rev. 14.41 5-109

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

FC Switch NPV Mode - Considerations

Figure 5-100: FC Switch NPV Mode - Considerations

The figure compares advantages and disadvantages of using a Fibre Channel switch configured to operate in N_Port Virtualization (NPV) mode.
NPV can simplify fabric services because there’s no need to distribute the zone
information to other switches. Switches operating in NPV mode do not take part in
zone enforcement, because all of this communication flows through the NPV mode
switch to be processed and enforced by the real fibre channel switch.
Similarly, there is a reduced number of name service and routing updates, as both
of these services are also maintained solely by the real fibre channel switches.
This means that name service databases are smaller, while routing tables and
topologies are simplified.
Meanwhile, redundancy capabilities remain, since NPV mode switches are
capable of link aggregation to the native FC switch fabric. Also, the concept of
redundant SAN A and SAN B is available for redundancy by simply configuring two
NPV mode devices for the two different fabrics.
An advantage for larger deployments is that NPV mode reduces the number of domain IDs in use; a fabric supports a maximum of 239 domain IDs. Since NPV mode switches do not consume a domain ID, greater scalability is available.
Another advantage relates to greater vendor interoperability. There is no truly standardized interoperability for the fabric services. Concepts are well described, but most vendors have different features and methods of implementing these concepts. Practically speaking, there is little to no actual interoperability between the vendors.
NPV switches do not take part in actual fabric services. They simply emulate a
traditional node (or multiple nodes), and this is a very standardized mechanism
which will work fine with other vendors.
For example, a server could be connected to a 5900 NPV switch, which in turn is
connected to a Brocade FC fabric. This effectively integrates a Comware 5900
switch into an existing Brocade fabric.

5-110 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Another example involves using a Virtual Connect FlexFabric module in NPV mode. Blade servers connect to the Virtual Connect FlexFabric device, and the Virtual Connect FlexFabric device, configured with an NP_Port, connects to a full Comware-based 5900 Fibre Channel fabric.
One perceived disadvantage is that the traffic must leave the NPV switch and
travel to an actual FC switch to get forwarded to a target. Practically speaking, this
is not an issue since most designs place initiators behind the NPV switch, and
targets behind the FC fabric switches. Traffic must traverse this path anyway.
Another possible disadvantage relates to link oversubscription. Since multiple servers may be connected through a reduced number of uplinks, you must ensure that sufficient bandwidth is available on the uplinks.
NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-111

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Prerequisites to Configure NPV Mode

Figure 5-101: Prerequisites to Configure NPV Mode

Before configuring a switch to operate in NPV mode, the system working mode
must be set to advanced, which requires a system reboot. Also, you should verify
that no existing FCoE mode configurations have been applied.
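A hedged sketch of these prerequisites is shown below. The exact spelling of the working-mode keyword can vary between releases (verify with on-device help), the reboot is mandatory before the mode change takes effect, and the final command simply checks for any leftover fcoe-mode configuration.

    system-view
    system-working-mode advanced
    quit
    reboot
    display current-configuration | include fcoe-mode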

5-112 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Steps for NPV Mode

Figure 5-102: Configuration Steps for NPV Mode

The steps to configure NPV mode involve globally enabling this mode on the
switch, and then configuring the VFC or FC Interfaces. Uplink interfaces must be
configured as NP_Ports, and downlink interfaces must be configured as F_Ports.
Finally, the configuration should be verified.
Notice that step 2 involves configuring the port as either a virtual or native FC port.
This means that the NPV mode can act as a convenient migration and
interoperability mechanism between native Fibre Channel and FCoE systems. This
is because a 5900 CP could use FCoE over Virtual FC interfaces to connect to the
servers, while using native FC interfaces to connect to a traditional Cisco, HP, or
Brocade fabric.
If there are legacy servers with native Fibre Channel, they can be connected to the
downstream native FC interfaces of the NPV switch. This switch can connect
upstream to a native Fibre Channel storage system, while simultaneously connecting
via FCoE to other storage systems.

Rev. 14.41 5-113

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 1: Configure Global NPV Mode

Figure 5-103: Configuration Step 1: Configure global NPV mode

The first step is to enable global NPV mode. This is done as shown above, with the “fcoe-mode npv” command.
The fcoe-mode command supports a single configuration option only. A single switch cannot function in both NPV mode and Fibre Channel forwarding mode at the same time.
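For reference, enabling NPV mode is a single global command from system view, as sketched below; any previously configured fcoe-mode setting must be removed first.

    system-view
    fcoe-mode npv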

5-114 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 2: Configure FC or VFC Interfaces

Figure 5-104: Configuration Step 2: Configure FC or VFC Interfaces

The second step is to configure FC or virtual FC interfaces, similar to the configurations seen previously. You can configure a native Fibre Channel interface, ensuring that the correct optics have been installed.
The top example shows an interface that started under the assumption that it
would operate as 10Gbps Ethernet. The command “port-type fc” was issued,
converting this port to a native fibre channel port.
The bottom example illustrates the configuration of a virtual fibre channel interface
for FCoE, assuming that DCB is already configured. Interface ten-gigabit 1/0/4 is
configured as a trunk link, and configured to allow VLAN 10 to traverse this link.
Next, interface VFC 4 is created, bound to the interface ten1/0/4, and made a
member of VSAN 10.
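The two variants described above might look like the sketches below; the VLAN, VSAN, and VFC numbers come from the examples in the figure, the native FC interface number (1/0/3 here) is illustrative only, and the syntax should be verified against your release. For a native Fibre Channel downlink (with the correct optics installed):

    system-view
    interface ten-gigabitethernet 1/0/3
     port-type fc

For an FCoE downlink over a virtual Fibre Channel interface (assuming DCB is already configured):

    system-view
    interface ten-gigabitethernet 1/0/4
     port link-type trunk
     port trunk permit vlan 10
     quit
    interface vfc 4
     bind interface ten-gigabitethernet 1/0/4
     port trunk vsan 10
     quit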

Rev. 14.41 5-115

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 3: Uplink Interface NP_Port

Figure 5-105: Configuration Step 3: Uplink Interface NP_Port

The uplink interface should be configured as the N_Port Virtualization port. This is the interface that will connect to an available port on a Fibre Channel switch. This would typically be a Fibre Channel Forwarder (i.e., a native Fibre Channel switch). However, it could be another NPV switch’s F_Port.
For example, a blade server could be connected to a Virtual Connect FlexFabric module. The Virtual Connect FlexFabric module, operating in NPV mode, connects to a 5900 CP, also operating in NPV mode, which is connected to a Brocade SAN switch.
In this scenario, all the fabric logins will be handled by the Brocade SAN switch
with a single domain ID. As long as this is a typical deployment, and storage
systems targets are not running on the blade server, the system will work fine.
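As a sketch, designating the uplink VFC as an NP_Port might look like the lines below. VFC 10 is a hypothetical uplink interface used only for illustration, and the "fc mode np" keyword is an assumption that parallels the "fc mode f" command named in the next step; verify it against your release.

    system-view
    interface vfc 10
     fc mode np
     quit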

5-116 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Configuration Step 4: Downlink Interfaces F_Port

Figure 5-106: Configuration Step 4: Downlink Interfaces F_Port

The downlink interfaces will connect to the N_Port of the server’s HBA or CNA. The hosts will see an F_Port and initiate FC link setup. This host FLOGI session will be proxied by the NPV switch to the upstream Fibre Channel switch.
This is accomplished with the “fc mode f” command.
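A minimal sketch of this step, reusing VFC 4 from the earlier example as a server-facing interface, is:

    system-view
    interface vfc 4
     fc mode f
     quit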

Rev. 14.41 5-117

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Configuration Step 5: Verify Status

Figure 5-107: Configuration Step 5: Verify Status

The NPV switch status should be verified. Using the “display npv login” command, you can see the actual logins. Even though the logins are not processed by the NPV mode switch, it still keeps track of them, enabling you to see which FC devices are logged in to which port.
This is important. When the NPV mode switch receives data for a specific FC_ID, it must know which downstream interface should receive this traffic.
Use the “display npv status” command to validate operational status.
With the “display npv traffic-map” command, you can see which downstream ports
are currently using which upstream ports. If multiple upstream ports are available,
the NPV switch can perform a kind of load distribution.
It does this by assigning some downstream ports to uplink 1 and other
downstream ports to uplink 2, for example. This can be seen by displaying the
traffic map.
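For convenience, the three verification commands named above can be run in sequence:

    display npv status
    display npv login
    display npv traffic-map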

5-118 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Lab Activity 5.4: Lab Topology

Figure 5-110: Lab Activity 5.4: Lab Topology at the end of this lab

In this lab, the local POD HP 5900 switches will be configured as N_Port Virtualization (NPV) switches.
This means they will no longer provide fabric services, but will simply proxy FLOGI requests to the upstream FC fabric switch (Classroom Core1/2).

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-119

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Lab Activity Preview: FC/FCoE - NPV

Figure 5-111: Lab Activity Preview: FC/FCoE - NPV

In this lab, you will configure NPV mode on a Comware switch and review the FC login and name service database changes in NPV mode.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-120 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Lab Activity 5.4 Debrief


Use the space below to record your key insights and challenges from Lab
Activity 5.4.

Debrief for Lab Activity 5.4


Challenges Key Insights

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 5-121

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Summary

Figure 5-112: Summary

In this module, you learned about how various infrastructure components create
an integrated SAN fabric. This included discussions about HBAs, CNAs, native
Fibre Channel switches, and FCoE switches.
You learned that FC fabrics deploy various numbering schemas, such as FC_ID
addresses for data transmissions, static and FSPF-based routing, and WWNs with
zoning to control the targets that specific server initiators are allowed to use.
Key services provided by the SAN fabric include the Fabric Login service, which
formalizes the connection of hosts to the fabric, and the Simple Name Service
used to map FC_IDs to WWNs. VSANs enable separate logical storage fabrics to
share a common physical infrastructure, which can lower costs and improve
security.
You also learned that MPIO is required to leverage the improved reliability and
performance provided by multi-path redundancy and load-sharing.
Finally, NPIV and NPV were discussed as methods of enabling hypervisors such as VMware’s ESX or Microsoft’s Hyper-V to make HBA or CNA functionality available to internally hosted Virtual Machines.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

5-122 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Learning Check
After each module, your facilitator will lead a class discussion to capture key
insights and challenges from the module and any accompanying lab activity. To
prepare for the discussion, answer each of the questions below.
1. Which statement below accurately describes an FCoE deployment
consideration?
a. FCoE protocols and systems are easily deployed as a multi-vendor
solution.
b. Nearly any HP switch and storage system combination can be used to
deploy an FCoE solution
c. There is no need to deploy a specific version of firmware to the switches
in an FCoE deployment
d. HP has devised a set of converged networking cookbooks to ensure you
are deploying a validated set of storage arrays, servers, CNAs, and
switches, using specific firmware versions.
2. Choose three correctly described components of a typical FCoE deployment
(Choose three).
a. A host is a server system that initiates disk read or write requests
b. A disk array is a target devices that responds to disk read or write
requests from a host.
c. N_Ports are used to connect hosts nodes to the fabric, while T_ports
connect target disk end systems to the fabric.
d. F_ports are those fabric ports that connect to either host initiators or
target disk arrays.
e. E_Ports are used to expand the fabric to multiple switches.
3. Which four statements below accurately describe FCoE naming and
forwarding conventions (Choose four)?
a. A WWN provides a unique identifier for each FCoE device to enable
frame delivery
b. The WWN is somewhat like a BIA for a Layer 2 network interface that can
identify systems independently from the FC_ID
c. The FC_ID is a dynamically assigned address that is used as the source
and destination address of a frame.
d. With Comware devices, the FC_ID uses 16 bits to identify ports,
increasing addressing scalability.
e. Each switch is assigned a unique domain ID. This ID must be manually
assigned
f. The domain ID can be statically assigned or manually assigned.
4. Which two statements below accurately describe FC classes and flow control
(Choose two)?
a. BE flow control uses an R_RDY message for flow control.

Rev. 14.41 5-123

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

b. BB flow control use a frame credit mechanism to help provide lossless


frame transmission.
c. All flow control mechanisms can be used by all FC classes.
d. EE flow control uses an ACK frame to let the transmitter know that the
previous frame was successfully received.
e. BB flow control is not well suited for time-sensitive applications.
5. What are the prerequisites to configuring NPV mode on a switch? (Choose two)
a. System-working-mode must be set to advanced, which immediately takes
effect
b. No existing fcoe-mode configurations should be in place
c. System-working-mode must always be left to its default value
d. System-working-mode must be set to advanced, which takes effect after a
reboot
e. The correct fcoe-mode must be configured before NPV mode is activated

5-124 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Fibre Channel over Ethernet

Learning Check Answers


1. d
2. a, b, d, e
3. b, c, d, f
4. b, d
5. b, d

Rev. 14.41 5-125

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

5-126 Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
To learn more about HP Networking, visit
www.hp.com/networking
© 2014 Hewlett-Packard Development Company, L.P. The information contained herein is
subject to change without notice. The only warranties for HP products and services are set
forth in the express warranty statements accompanying such products and services. Nothing
herein should be construed as constituting an additional warranty. HP shall not be liable for
technical or editorial errors or omissions contained herein.

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
