
#CLUS

ACI Multi-Site
Architecture and
Deployment
Max Ardica – Principal Engineer
FLPACI-2125

Session Objectives
At the end of the session, the participants should be able to:
• Articulate the different deployment options to interconnect
Cisco ACI networks (Multi-Pod vs. Multi-Site)
• Understand the functionalities and specific design
considerations associated with the ACI Multi-Site architecture
Initial assumption:
• The audience already has a good knowledge of ACI main
concepts (Tenant, BD, EPG, L2Out, L3Out, etc.)

#CLUS FLPACI-2125 © 2018 Cisco and/or its affiliates. All rights reserved. Cisco Public 3
Agenda • ACI Network and Policy Domain Evolution
• ACI Multi-Pod Quick Review
• ACI Multi-Site Deep Dive
• Overview and Use Cases
• Introducing the ACI Multi-Site Orchestrator (MSO)
• Inter-Site Connectivity Deployment Considerations
• Control and Data Plane
• Connecting to the External Layer 3 Domain
• Network Services Integration
• Virtual Machine Manager (VMM) Integration
• Multi-Pod and Multi-Site Integration
• Migration Scenarios

• Conclusions and Q&A

ACI Network and Policy
Domain Evolution
Introducing: Application Centric Infrastructure (ACI)
[Diagram: a Tenant VRF with Web, App and DB EPGs chained by QoS/filter contracts, plus an Outside connection and service insertion; the ACI Fabric, an integrated GBP VXLAN overlay, is managed by the APIC (Application Policy Infrastructure Controller)]
Cisco ACI: Industry Leader

4,800+ ACI Customers | 46+% ACI Attach Rate | 65+ Ecosystem Partners
Cisco ACI
Fabric and Policy Domain Evolution

• ACI 1.0 – Single Pod Fabric: Leaf/Spine single Pod, managed by an APIC Cluster
• ACI 2.0 – Multi-Pod Fabric: multiple networks (Pods) in a single Availability Zone (Fabric), interconnected through an IPN with MP-BGP EVPN
• ACI 3.0 – Multi-Site: multiple Availability Zones (Fabrics) in a single region, interconnected through an ISN with MP-BGP EVPN
• ACI 3.1/3.2 – Remote Leaf and vPod extend an Availability Zone (Fabric) to remote locations
• Future – ACI Multi-Cloud: ACI extensions to Multi-Cloud, Region 'and' Multi-Region Policy Management
Regions and Availability Zones
OpenStack and AWS Definitions
OpenStack
• Regions – Each Region has its own full OpenStack deployment, including its own API endpoints, networks and compute resources
• Availability Zones – Inside a Region, compute nodes can be logically grouped into Availability Zones; when launching a new VM instance, you can specify an AZ, or even a specific node in an AZ, to run the VM instance

Amazon Web Services
• Regions – Separate large geographical areas, each composed of multiple, isolated locations known as Availability Zones
• Availability Zones – Distinct locations within a region that are engineered to be isolated from failures in other Availability Zones and that provide inexpensive, low-latency network connectivity to other Availability Zones in the same region
Terminology
• Pod – A Leaf/Spine network sharing a common control plane (ISIS, BGP,
COOP, …)
 Pod == Network Fault Domain
• Fabric – Scope of an APIC Cluster; it can contain one or more Pods
 Fabric == Availability Zone (AZ) or Tenant Change Domain
• Multi-Pod – Single APIC Cluster with multiple leaf spine networks
 Multi-Pod == Multiple Networks within a Single Availability Zone (Fabric)
• Multi-Fabric – Multiple APIC Clusters + associated Pods (you can have
Multi-Pod with Multi-Fabric)*
 Multi-Fabric == Multi-Site == a DC infrastructure region with multiple AZs

* Available from ACI release 3.2


Typical Requirement
Creation of Two Independent Fabrics/AZs

Fabric ‘A’ (AZ 1)

Fabric ‘B’ (AZ 2)

Application
workloads deployed
across availability
zones
Typical Requirement
Creation of Two Independent Fabrics/AZs

Multi-Pod Fabric ‘A’ (AZ 1)

‘Classic’ Active/Active

Pod ‘1.A’ Pod ‘2.A’

ACI Multi-Site

Multi-Pod Fabric ‘B’ (AZ 2)

‘Classic’ Active/Active

Pod ‘1.B’ Pod ‘2.B’

Application workloads deployed across availability zones
ACI Multi-Pod Quick
Review
ACI Multi-Pod Overview
(For more information on ACI Multi-Pod: BRKACI-2003)
[Diagram: Pod ‘A’ and Pod ‘n’ connected through a VXLAN Inter-Pod Network with MP-BGP EVPN, up to 50 msec RTT apart; each Pod runs its own IS-IS, COOP and MP-BGP instances; a single APIC Cluster spans the Pods, which together form one Availability Zone]

• Multiple ACI Pods connected by an IP Inter-Pod L3 network; each Pod consists of leaf and spine nodes
• Up to 50 msec RTT supported between Pods
• Managed by a single APIC Cluster
• Single Management and Policy Domain
• Forwarding control plane (IS-IS, COOP) fault isolation
• Data plane VXLAN encapsulation between Pods
• End-to-end policy enforcement
Single AZ with Maintenance & Configuration Zones
Scoping ‘Network Device’ Changes
Maintenance Zones – groups of switches managed as an “upgrade” group

[Diagram: ACI Multi-Pod Fabric with Inter-Pod Network and APIC Cluster; Configuration Zone ‘A’ and Configuration Zone ‘B’]

• Configuration Zones can span any required set of switches; the simplest approach may be to map a configuration zone to an availability zone. This applies to infrastructure configuration and policy only.
Single AZ with Tenant Isolation
Isolation for ‘Virtual Network Zone and Application’ Changes

[Diagram: ACI Multi-Pod Fabric with Inter-Pod Network and APIC Cluster; Tenant ‘Prod’ and Tenant ‘Dev’ Configuration/Change Domains]

• The ACI ‘Tenant’ construct provides a domain of application and associated virtual network policy change
• Domain of operational change for an application (e.g. production vs. test)
ACI Multi-Pod
Supported Topologies
• Intra-DC: multiple Pods (POD 1 … POD n) connected through an IPN; 1G/10G/40G/100G links within each Pod, 10G*/40G/100G toward the IPN; single APIC Cluster
• Two DC sites directly connected: POD 1 and POD 2 over dark fiber/DWDM (up to 50** msec RTT)
• 3 (or more) DC sites directly connected: PODs over dark fiber/DWDM (up to 50 msec RTT)
• Multiple sites interconnected by a generic L3 network (up to 50 msec RTT)

* 10G only with QSA adapters on EX/FX and 9364C spines
** 50 msec support added in SW release 2.3(1)
ACI Multi-Site Deep
Dive
Overview and Use Cases
ACI Multi-Site Overview (ACI 3.0 Release)

[Diagram: Site 1 (Availability Zone ‘A’) and Site 2 (Availability Zone ‘B’) connected through a VXLAN Inter-Site Network running MP-BGP EVPN; the Multi-Site Orchestrator, with GUI and REST API, manages both sites]

• Separate ACI Fabrics with independent APIC clusters
• No latency limitation between Fabrics
• ACI Multi-Site Orchestrator pushes cross-fabric configuration to multiple APIC clusters, providing scoping of all configuration changes
• MP-BGP EVPN control plane between sites
• Data plane VXLAN encapsulation across sites
• End-to-end policy definition and enforcement
ACI Multi-Site
Main Use Cases

• Scale-up model to build a large Intra-DC network
• Data Center Interconnect (DCI)
ACI Multi-Site
Network and Identity Extended between Fabrics

• Network information carried across Fabrics (Availability Zones)
• Identity information carried across Fabrics (Availability Zones)

VXLAN header: VTEP IP | VNID | Class-ID | Tenant Packet

No multicast requirement in the backbone: Head-End Replication (HER) is used for any Layer 2 BUM traffic across the Inter-Site Network (MP-BGP EVPN).
ACI Multi-Site
Inter-Site Policies and ‘Mirrored’ EPGs

[Diagram: Site 1 (spines S1-S4, DP-ETEP A) and Site 2 (spines S5-S8, DP-ETEP B) connected through the ISN; the contract C between EP1-EPG and EP2-EPG defined on the MSO appears in each site, with ‘mirrored’ (shadow) EPGs representing the remote side]

• Inter-Site policies defined on the ACI Multi-Site Orchestrator are pushed to the respective APIC domains
 End-to-end policy consistency
 Creation of ‘mirrored’ EPGs (‘shadow objects’) to locally represent the policies
• Policies enforced at the ingress leaf node at steady state
ACI Multi-Site
Namespace Normalization
Translation of Class-ID and VNID (scoping of namespaces)

[Diagram: traffic leaves Site 1 with VNID 16678781 and Class-ID 49153; the receiving spine in Site 2 translates them to the locally significant VNID 16547722 and Class-ID 32770 before sending the traffic to the local leaf. Leaf to leaf VTEP, the Class-ID is local to each Fabric.]

Spine Translation Table (Site 2):
            Remote Site    Local Site
VNID        16678781       16547722
Class-ID    49153          32770

• Maintain separate namespaces, with ID translation performed on the spine nodes
• Requires specific HW on the spines to support this functionality
• The Multi-Site Orchestrator instructs the local APIC to program the translation tables on the spines
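The translation step can be sketched in a few lines of Python. This is illustrative only: the real remapping happens in spine hardware tables programmed by APIC, and all names below are invented.

```python
# Illustrative sketch of the per-site namespace translation performed by the
# receiving spines: VNID and Class-ID in the VXLAN header are remapped from
# the sending site's namespace to locally significant values.

def translate(packet, vnid_map, class_id_map):
    """Rewrite VNID and Class-ID to the local site's values.

    vnid_map / class_id_map model the spine translation tables that the
    Multi-Site Orchestrator instructs the local APIC to program.
    """
    return {
        **packet,
        "vnid": vnid_map[packet["vnid"]],
        "class_id": class_id_map[packet["class_id"]],
    }

# Values from the slide: Site 1 sends VNID 16678781 / Class-ID 49153,
# which Site 2 maps to VNID 16547722 / Class-ID 32770.
vnid_map = {16678781: 16547722}
class_id_map = {49153: 32770}

pkt_in = {"vnid": 16678781, "class_id": 49153, "payload": "tenant packet"}
pkt_out = translate(pkt_in, vnid_map, class_id_map)
```

Because each fabric keeps its own namespace, no coordination of VNIDs or Class-IDs is needed between APIC clusters; only the translation tables must be kept consistent.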
ACI Multi-Site
Software and Hardware Requirements

• ACI Multi-Site introduced from release 3.0
• Supports all ACI leaf switches (1st generation, -EX and -FX)
• Only -EX spines (or newer) can connect to the inter-site network
 Can have only a subset of spines connecting to the IP network
• New 9364C non-modular spine (64x 40G/100G ports) supported for Multi-Site from the ACI 3.1 release (shipping)
• 1st generation spines (including the 9336PQ) not supported
 Can still be leveraged for intra-site leaf-to-leaf communication
ACI Multi-Site Networking Options
Per Bridge Domain Behavior
1. Layer 3 only across sites
 Bridge Domains and subnets not extended across Sites
 Layer 3 intra-VRF or inter-VRF communication (shared services across VRFs/Tenants)

(The behavior is selected per Bridge Domain in the MSO GUI.)
ACI Multi-Site Networking Options
L3 Only across Sites

Why not use just normal routing across independent fabrics?

Independent ACI Fabrics (no ACI Multi-Site):
• Need to apply a contract between the internal EPG and the Ext-EPG associated with the L3Out in Fabric 1, and likewise between the internal EPG and the Ext-EPG associated with the L3Out in Fabric 2
• Mandates the use of a multi-VRF capable backbone network (VRF-Lite, MPLS-VPN, etc.) to extend multiple VRFs across fabrics
ACI Multi-Site Networking Options
Per Bridge Domain Behavior
1. Layer 3 only across sites
 Bridge Domains and subnets not extended across Sites
 Layer 3 intra-VRF or inter-VRF communication (shared services across VRFs/Tenants)

2. IP mobility without BUM flooding
 Same IP subnet defined in separate Sites
 Support for IP mobility (‘cold’ and ‘live’* VM migration) and intra-subnet communication across sites
 No Layer 2 BUM flooding across sites

3. Layer 2 adjacency across Sites
 Interconnecting separate sites for fault containment and scalability reasons
 Layer 2 domains stretched across Sites; support for application clustering
 Layer 2 BUM flooding across sites

(The behavior is selected per Bridge Domain in the MSO GUI.)

*‘Live’ migration officially supported from ACI release 3.2
ACI Multi-Site
Scalability Values Supported in 3.2 Release
Scale Parameter         Stretched Objects         Site-Local Objects
Sites                   10                        N/A
Leaf scale              1200 across all sites     200 in a single Pod, 400 in a Multi-Pod Fabric
Tenants                 300                       2500
VRFs                    800                       3000
IP Subnets              3000                      10000
BDs                     3000                      10000
EPGs                    2000                      10000
Endpoints               50000                     100000
Contracts               3000                      2000
L3Out External EPGs     500 (prefixes)            2400
IGMP Snooping           8000                      8000
ACI Multi-Site
Continuous Scale Improvements
                          ACI 3.0 (MSO 1.0)   ACI 3.1 (MSO 1.1)   ACI 3.2 (MSO 1.2)
Number of Sites           5                   8                   10
Max Leafs (across sites)  250                 800                 1200
Tenants                   100                 200                 300
VRFs                      400                 400                 800
BDs                       800                 2000                3000
EPGs                      800                 2000                3000
Contracts                 1000                2000                3000
L3Out (External EPGs)     500                 500                 500
Isolated EPGs             N/A                 400                 400
ACI Multi-Site ACI 3.2 Release
Spines in Separate Sites Connected Back-to-Back

Inter-Site E-W (Direct Cable or Dark Fiber)

• Back-to-back connections only supported between 2 sites from ACI 3.2 release
• Support for full mesh and ‘square’ topologies
• Support for more than 2 sites scoped for a future ACI release
 Current restriction is that a site cannot be ‘transit’ for communication between other sites

ACI Multi-Site
CloudSec Encryption for VXLAN Traffic

• CloudSec = “TEP-to-TEP MACsec”: encrypted fabric-to-fabric traffic
• Supported ciphers: GCM-AES-128 (32-bit PN), GCM-AES-256 (32-bit PN), GCM-AES-128-XPN (64-bit PN), GCM-AES-256-XPN (64-bit PN)
• Packet format: VTEP IP | MACsec | VXLAN | Tenant Packet; the VTEP information stays in clear text across the Inter-Site Network
• Support planned for the ACI 4.0 release on FX line cards and the 9364C platform
Introducing the ACI Multi-Site Orchestrator
ACI Multi-Site
Multi-Site Orchestrator (MSO)
• Micro-services architecture
 Three MSO nodes are clustered and run concurrently (active/active)
 vSphere VM-only form factor initially (physical appliance planned for a future ACI release)
• OOB management connectivity to the APIC clusters deployed in separate sites
 Not supported to establish MSO-APIC connectivity ‘in-band’ through the ACI fabric
 Recommended not to connect the MSO nodes to the ACI fabric
• Main functions offered by MSO (exposed via GUI and REST API):
 Monitoring the health state of the different ACI Sites
 Provisioning of day-0 configuration to establish the inter-site EVPN control plane
 Defining and provisioning policies across sites
 Day-2 operation functionalities
ACI Multi-Site
MSO Deployment Considerations
Intra-DC deployment:
• Hypervisors hosting the MSO VMs can be connected directly to the DC OOB network
• Each MSO node has a unique routable IP (nodes can be part of separate IP subnets)
• Asynchronous calls from MSO to APIC

Interconnecting DCs over the WAN (e.g. Site1 in Milan, Site2 in Rome, Site3 in New York):
• Up to 150 msec RTT latency supported between MSO nodes
• Higher latency (500 msec to 1 sec RTT) supported between the MSO nodes and the managed APIC clusters
• If possible, deploy the MSO nodes in separate sites for availability purposes (network partition scenarios)
ACI Multi-Site
MSO Dashboard

• Health/faults for all managed sites
• Easy way to identify stretched policies across sites
• Quickly search for any deployed inter-site policy
• Provides direct access to the APIC GUIs in the different sites
ACI Multi-Site
MP-BGP/EVPN Infra Configuration

• Configure day-0 infra policies
• Select the spines establishing MP-BGP EVPN peerings with remote sites
• Site/Pod Data Plane TEPs (DP-ETEPs)
• Spine Control Plane TEPs (CP-ETEPs)
ACI Multi-Site
MSO Schema and Templates

 Template = APIC policy definition (app & network)
• A Template is the scope/granularity of the policies that can be pushed to one (or more) sites
 Schema = container of Templates sharing a common use case
• As an example, a schema can be dedicated to a Tenant
 Scope of change: policies in different templates can be pushed to separate sites at different times

[Diagram: a Schema holds Templates (policy definition: contract C between EP1-EPG and EP2-EPG); each Template is pushed to its associated sites, where it combines with site-local policies to form the effective policy]
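The Schema/Template/scope relationship can be modeled with a small sketch. The names are hypothetical and MSO's actual object model is richer; this only shows how pushing a template scopes policies to its associated sites.

```python
# Minimal data model for the Schema/Template relationship: a Schema is a
# container of Templates, and each Template carries a scope (list of sites).

from dataclasses import dataclass, field

@dataclass
class Template:
    name: str
    policies: list   # e.g. EPGs, BDs, contracts defined in the template
    sites: list      # scope: the sites this template is associated with

@dataclass
class Schema:
    name: str        # e.g. a schema dedicated to one Tenant
    templates: list = field(default_factory=list)

    def push(self, template_name):
        """Return the per-site policies for one template: they land only
        on the sites in that template's scope."""
        t = next(t for t in self.templates if t.name == template_name)
        return {site: list(t.policies) for site in t.sites}

schema = Schema("Tenant-Prod", [
    Template("stretched", ["Web-EPG", "C1"], ["Site1", "Site2"]),
    Template("site1-only", ["Local-BD"], ["Site1"]),
])
```

Pushing each template independently gives the 'scope of change' behavior: `schema.push("stretched")` deploys to both sites, while `schema.push("site1-only")` touches only Site1.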
ACI Multi-Site
Scope of Changes

• Each Site is assigned a unique ID
• Configuration created on the Multi-Site Orchestrator (a Template) has an associated scope (e.g. Tenant-A scoped to Sites 1 and 2, Tenant-B scoped to Site 3)
• The scope identifies the list of sites where the configuration is pushed
• Configuration changes are staged on the ACI MSO before being pushed to the sites
ACI Multi-Site ACI 3.2 Release
Day-2 Operations: Full-Stack Consistency Checker

• Multi-Site infra: unicast, multicast and BGP TEPs, and tunnel state
• Multi-Site Tenant and EPG granularity:
 Inspect and validate full-stack programming: MSO, APICs and spine translations (MP-BGP EVPN and VXLAN between the spines)
 Validate the consistency of local and remote inter-site EPGs, BDs, VRFs, External EPGs, policies, etc.
 Root-cause configuration programming issues without calling TAC
• GUI and APIs will both be supported
APIC vs. Multi-Site Orchestrator Functions

APIC:
• Central point of management and configuration for the Fabric
• Responsible for all Fabric-local functions:
 Fabric discovery and bring-up
 Fabric access policies
 Domain creation (VMM, Physical, etc.)
 …
• Maintains runtime data (VTEP addresses, VNIDs, Class-IDs, GIPos, etc.)

Multi-Site Orchestrator:
• Complementary to APIC
• Provisioning and managing of “Inter-Site Tenant and Networking Policies”
• Scope of changes: granularly propagates policies to multiple APIC clusters
• Can import tenant configuration from APIC cluster domains
• End-to-end visibility and troubleshooting
• No runtime data; acts as a configuration repository
• No participation in the fabric control and data planes
Inter-Site Connectivity Deployment
Considerations
ACI Multi-Site
Inter-Site Network (ISN) Requirements

[Diagram: multiple sites attached to the Inter-Site Network, MP-BGP EVPN between them]

• Not managed by APIC; must be independently configured (day-0 configuration)
• IP topology can be arbitrary; not mandatory to connect to all spine nodes; can extend over long distances (across the globe)
• Main requirements:
 OSPF on the first-hop routers, to peer with the spine nodes and exchange E-TEP reachability
 Increased MTU support
ACI Multi-Site and MTU
Different MTU Meanings

1. Data plane MTU: MTU of the traffic generated by endpoints (servers, routers, service nodes, etc.) connected to the ACI leaf nodes
 Need to account for 50B of overhead (VXLAN encapsulation) for inter-site communication
2. Control plane MTU: for CPU-generated traffic, such as EVPN across sites
 The default value is 9000B; it can be tuned down to the maximum MTU value supported in the ISN
ACI Multi-Site and MTU
Tuning MTU for EVPN Traffic across Sites

• Control plane MTU can be set leveraging the “CP MTU Policy” on APIC (modifying the default 9000B value)
• The MTU required in the ISN then depends on this setting and on the data plane MTU configuration
• Always need to consider the VXLAN encapsulation overhead for data plane traffic (50/54 bytes)
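The MTU arithmetic from the two slides above can be captured in a quick sketch. The helper name is invented; the values (50/54B VXLAN overhead, 9000B default CP MTU) come from the slides.

```python
# Sketch of the ISN MTU requirement: the ISN must carry the data-plane MTU
# plus the VXLAN encapsulation overhead, and at least the control-plane MTU.

def required_isn_mtu(data_plane_mtu, cp_mtu=9000, dot1q=False):
    """Minimum MTU the Inter-Site Network must support.

    data_plane_mtu: MTU of endpoint-generated traffic on the leaf ports.
    cp_mtu: control-plane MTU (APIC "CP MTU Policy", 9000B by default).
    dot1q: account for 54B instead of 50B when the outer header is tagged.
    """
    overhead = 54 if dot1q else 50
    return max(data_plane_mtu + overhead, cp_mtu)

# Endpoints sending 1500B frames with the default CP MTU: the ISN still
# needs 9000B unless the CP MTU Policy is tuned down.
print(required_isn_mtu(1500))                  # 9000
# With the CP MTU tuned down to 1500B, 1550B (or 1554B with dot1q) suffices.
print(required_isn_mtu(1500, cp_mtu=1500))     # 1550
```

This is why tuning the CP MTU Policy matters when the ISN cannot support jumbo frames end to end.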
ACI Multi-Site and QoS
Intra-Site QoS Behavior

• The ACI Fabric supports six classes of service
• Traffic is classified only at the ingress leaf
 The CoS value in the iVXLAN packet is set based on the table below
• Three user-configurable classes of service for user data traffic
 Level3 is the default class (CoS value 0)
• Three reserved classes of service for control traffic and SPAN:
 APIC controller traffic
 Control traffic (destined to the supervisor) and traceroute
 SPAN traffic
• Each class is configured at the fabric level and mapped to a hardware queue

Class of Service/QoS-group   Traffic Type              Dot1p Marking in VXLAN Header
0                            Level3 user data          0
1                            Level2 user data          1
2                            Level1 user data          2
3                            APIC controller traffic   3
4                            SPAN traffic              4
5                            Control traffic           5
5                            Traceroute                6
ACI Multi-Site and QoS
Inter-Site QoS Behavior
• Traffic across sites should be consistently prioritized (as it happens intra-site)
• To achieve this end-to-end consistent behavior it is required to perform
DSCP-to-CoS mapping on the spines (Spines-to-ISN and ISN-to-Spines)
• Important: must ensure that no traffic marked with DSCP value CS6 is received
by the spines from the ISN (spines always drop this traffic)
• The traffic can then be properly treated inside the ISN (classification/queuing)

[Diagram: the sending spines set the outer DSCP field based on the configured mapping; the ISN classifies and queues the traffic (e.g. CS5); the receiving spines set the iVXLAN CoS field based on the configured mapping]
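The boundary mapping can be illustrated with a small sketch. The DSCP values below are invented for the example (the actual mapping is configurable on APIC); only the CS6-drop behavior is taken from the slide.

```python
# Sketch of the DSCP-to-CoS mapping at the site boundary: sending spines
# mark the outer DSCP from the iVXLAN CoS (QoS group), and receiving spines
# recover the CoS from the DSCP. Example mapping values are invented.

COS_TO_DSCP = {0: "AF11", 1: "AF21", 2: "AF31", 3: "CS3", 4: "CS4", 5: "CS5"}
DSCP_TO_COS = {dscp: cos for cos, dscp in COS_TO_DSCP.items()}

def to_isn(cos):
    """Sending spine: set the outer DSCP based on the configured mapping."""
    return COS_TO_DSCP[cos]

def from_isn(dscp):
    """Receiving spine: restore the iVXLAN CoS. CS6-marked traffic must
    never arrive from the ISN, since the spines always drop it."""
    if dscp == "CS6":
        raise ValueError("CS6-marked traffic is dropped by the spines")
    return DSCP_TO_COS[dscp]
```

As long as both sites use the same mapping, `from_isn(to_isn(cos)) == cos`, which is exactly the end-to-end consistent prioritization the slide describes.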
Control and Data Plane
ACI Multi-Site
BGP Inter-Site Peers
• Spines connected to the Inter-Site Network perform two main functions:
1. Establishment of MP-BGP EVPN peerings with spines in remote sites
 One dedicated Control Plane ETEP address (CP-ETEP) is assigned to each spine running MP-BGP EVPN
2. Forwarding of inter-site data plane traffic, using anycast VTEP addresses:
 Anycast Data Plane ETEP (DP-ETEP): assigned to all the spines connected to the ISN and used to receive L2/L3 unicast traffic
 Anycast HER* ETEP (HER-ETEP): assigned to all the spines connected to the ISN and used to receive L2 BUM traffic
• CP-ETEP, DP-ETEP and HER-ETEP addresses must be routable across the ISN

*HER: Head-End Replication
ACI Multi-Site
Exchanging TEP Information across Sites

• OSPF peering between the spines and the Inter-Site Network
• Exchange of external spine TEP addresses (CP-ETEPs, DP-ETEPs and HER-ETEPs) across sites, via IS-IS to OSPF mutual redistribution on the spines; the ISN routing table thus holds DP-ETEP-1/HER-ETEP-1 and the CP-ETEPs of S1-S4 (Site 1), plus DP-ETEP-2/HER-ETEP-2 and the CP-ETEPs of S5-S8 (Site 2)
• Internal TEP pool information is not needed to establish inter-site communication; the advertisement of the internal TEP pool into the ISN should be filtered out on the first-hop ISN router
• In any case, best practice is to avoid using overlapping TEP pools across sites

Leaf routing tables point remote DP-ETEPs at the local spines:
Site 1: DP-ETEP B → S1, S2, S3, S4
Site 2: DP-ETEP A → S5, S6, S7, S8
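The recommended filtering of the internal TEP pool can be sketched as a simple prefix filter. All prefixes below are invented for the example; in practice this is a route-map or prefix-list on the first-hop ISN router.

```python
# Sketch of the route filtering on the first-hop ISN router: advertise only
# the external TEP addresses (CP/DP/HER-ETEPs) and drop anything that falls
# inside a fabric-internal TEP pool. Prefixes are made up for the example.

import ipaddress

INTERNAL_TEP_POOLS = [ipaddress.ip_network("10.0.0.0/16")]  # e.g. Site 1 infra pool

def advertisable(prefixes):
    """Drop any prefix that falls inside an internal TEP pool."""
    keep = []
    for p in map(ipaddress.ip_network, prefixes):
        if not any(p.subnet_of(pool) for pool in INTERNAL_TEP_POOLS):
            keep.append(str(p))
    return keep

# DP/CP-ETEPs are routable addresses outside the internal pool, so they
# survive the filter while the internal pool's routes do not.
routes = advertisable(["10.0.0.0/16", "192.168.100.1/32", "192.168.100.2/32"])
```

Filtering the internal pool also makes overlapping TEP pools across sites less dangerous, though the slide's best practice of avoiding the overlap entirely still applies.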
ACI Multi-Site
Inter-Site MP-BGP EVPN Control Plane

• MP-BGP EVPN used to communicate endpoint (EP) information across Sites
 MP-iBGP or MP-eBGP peering options supported
 Remote host route entries (EVPN Type-2) are associated with the remote site’s anycast DP-ETEP address (e.g. Site 1 spines learn EP2 → DP-ETEP B, Site 2 spines learn EP1 → DP-ETEP A, while COOP in each site tracks the local leaf for local endpoints)
• Automatic filtering of endpoint information across Sites
 Host routes are exchanged across sites only if a cross-site contract (defined and pushed from the MSO, e.g. contract C between EP1-EPG and EP2-EPG) requires communication between the endpoints
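The contract-driven filtering can be sketched as follows. Names are hypothetical; the real exchange happens through MP-BGP EVPN route-target handling, not application code.

```python
# Sketch of the automatic endpoint filtering: an EVPN Type-2 host route is
# advertised to a remote site only when a cross-site contract exists between
# the endpoint's EPG and an EPG present in that site.

def routes_to_advertise(local_endpoints, contracts, remote_epgs):
    """local_endpoints: {ip: epg} learned in this site;
    contracts: set of (consumer_epg, provider_epg) pairs;
    remote_epgs: EPGs deployed in the remote site."""
    advertised = []
    for ip, epg in local_endpoints.items():
        for remote in remote_epgs:
            if (epg, remote) in contracts or (remote, epg) in contracts:
                advertised.append(ip)
                break
    return advertised

eps = {"10.1.1.10": "EP1-EPG", "10.1.2.20": "Isolated-EPG"}
contracts = {("EP1-EPG", "EP2-EPG")}
# Only EP1's host route crosses to the site where EP2-EPG lives.
advertised = routes_to_advertise(eps, contracts, ["EP2-EPG"])
```

This keeps the remote spines' tables small: endpoints with no cross-site policy never consume remote-site resources.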
ACI Multi-Site
Inter-Site Unicast Data Plane
Policy information (EP1’s Class-ID) is carried across sites in the VXLAN header (VTEP IP | VNID | Class-ID | Tenant Packet).

1. EP1 sends traffic destined to remote EP2
2. EP2 is unknown on the leaf, so the traffic is encapsulated to the local Proxy A spine VTEP (adding source Class-ID information)
3. S2 has remote info for EP2 (EP2 → DP-ETEP B) and encapsulates the traffic to the remote DP-ETEP B address; all inter-site VXLAN unicast traffic is sourced from DP-ETEP A and destined to DP-ETEP B
4. S6 translates the VNID and Class-ID to locally significant values and sends the traffic to the local leaf
5. The leaf learns the remote-site location info for EP1 (EP1 → DP-ETEP A)
6. If policy allows it, EP2 receives the packet
ACI Multi-Site
Inter-Site Unicast Data Plane (2)

7. EP2 sends traffic back to remote EP1
8. The leaf applies the policy and, if allowed, encapsulates the traffic to the remote DP-ETEP A address (learned at step 5)
9. S6 rewrites the source VTEP to be DP-ETEP B
10. S3 translates the VNID and source Class-ID to locally significant values and sends the traffic to the local leaf
11. The leaf learns the remote-site location info for EP2 (EP2 → DP-ETEP B)
12. EP1 receives the packet
ACI Multi-Site
Inter-Sites Unicast Data Plane (3)

From this point on, EP1-to-EP2 communication is encapsulated leaf-to-remote-spine-DP-ETEP in both directions.
ACI Multi-Site
Layer 2 BUM Traffic Data Plane

1. EP1 generates a BUM frame
2. The frame is associated with GIPo1 (the multicast group associated with EP1’s BD) and flooded intra-site via the corresponding FTAG tree
3. S3, elected as Multi-Site forwarder for GIPo1 BUM traffic, creates a unicast VXLAN packet with DP-ETEP A as source VTEP and HER-ETEP B* as destination VTEP; all inter-site VXLAN BUM traffic is sourced from DP-ETEP A and destined to HER-ETEP B
4. S7 translates the VNID and GIPo values to locally significant ones and associates the frame with an FTAG tree
5. The BUM frame is flooded along the tree associated with the GIPo; the receiving VTEP learns EP1’s remote location (EP1 → DP-ETEP A)
6. EP2 receives the BUM frame

*This is a different ETEP address than the one used for inter-site L3 unicast communication
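Head-End Replication itself can be sketched as follows (addresses invented for the example): the elected forwarder turns one multicast frame into one unicast copy per remote site, which is why the ISN needs no multicast support.

```python
# Sketch of Head-End Replication for inter-site BUM traffic: the designated
# forwarder spine replicates one GIPo-flooded frame into one unicast VXLAN
# copy per remote site, addressed to that site's HER-ETEP.

def her_replicate(bum_frame, local_dp_etep, remote_her_eteps):
    """Return one unicast VXLAN packet per remote site."""
    return [
        {"src_vtep": local_dp_etep, "dst_vtep": her, "inner": bum_frame}
        for her in remote_her_eteps
    ]

# Two remote sites with the BD stretched -> two unicast copies, all sourced
# from the local anycast DP-ETEP.
copies = her_replicate("ARP request from EP1",
                       local_dp_etep="DP-ETEP-A",
                       remote_her_eteps=["HER-ETEP-B", "HER-ETEP-C"])
```

The cost of HER grows linearly with the number of remote sites where the BD is stretched, one more reason to enable BUM flooding across sites only where the application actually needs it.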
Connecting to the External Layer 3 Domain
Connecting to the External Layer 3 Domain
‘Traditional’ L3Outs on the BL Nodes

[Diagram: a client in the WAN reaches the fabric through an L3Out on the Border Leaf nodes]

• Connecting to WAN edge devices at the Border Leaf nodes
• VRF-Lite hand-off for extending L3 multi-tenancy outside the ACI fabric
 Up to 400 L3Outs/VRFs currently supported on the same BL node pair
• Support for host-route advertisement out of the ACI Fabric planned for a future ACI release
Connecting to the External Layer 3 Domain
‘GOLF’ L3Outs
(For more info on ACI and GOLF integration: LABACI-2101)

[Diagram: a client in the WAN reaches the fabric through GOLF routers (ASR 9000, ASR 1000, Nexus 7000) peering with the spines; OTV/VPLS toward the WAN; VXLAN encap/decap on spines and GOLF routers; different WAN hand-off options: VRF-Lite, MPLS-VPN, LISP*]

• Connecting to WAN edge devices at the spine nodes (directly or indirectly)
 VXLAN data plane with MP-BGP EVPN control plane
• High-scale tenant L3Out support
• Simplified and automated tenant L3Out configuration with OpFlex
• Support for host-route advertisement out of the ACI Fabric
ACI Multi-Site and ‘GOLF’ L3Outs
Deployment Options
Distributed GOLF routers:
 Each ACI site utilizes a separate pair of GOLF routers for communication with the WAN
 Local EVPN peering between the spines and the GOLF routers
 GOLF routers connect to the ISN or directly to the spines

Shared GOLF routers (from ACI 3.1):
 A common pair of GOLF routers is shared by all sites for communication with the WAN
 GOLF routers can be connected to the ISN or directly to the spines
ACI Multi-Site and ‘GOLF’ L3Outs
Must Use Host-Route Advertisement for Stretched BDs with GOLF L3Outs

With host-route advertisement (10.1.0.10/32 → G1, G2; 10.1.0.20/32 → G3, G4):
 Host routes advertised into the WAN for stretched BDs ensure that ingress traffic is always delivered to the ‘right’ site

Without host-route advertisement (only 10.1.0.0/24 → G1-G4):
 Traffic destined to a stretched IP subnet (e.g. to 10.1.0.20) may enter the ‘wrong’ site
 A site can’t be used as ‘transit’ for traffic destined to an endpoint that is part of a stretched BD and remotely located (such traffic is dropped on the receiving spines)
Multi-Site and L3Outs
Endpoints Always Use Local L3Outs for Outbound Traffic

• Supported design: outbound traffic from each site egresses through that site’s own L3Out
• Not supported design: outbound traffic from a site crossing the Inter-Site Network to egress through another site’s L3Out

Note: the same consideration applies to both Border Leaf L3Outs and GOLF L3Outs
Multi-Site and L3Out
Endpoints Always Use Local L3Outs for Outbound Traffic (2)

[Diagram: Web-EPG consuming contract C1 from Ext-EPG; an independent Active/Standby FW pair at the L3Out of each site]

• With BL L3Outs, inbound traffic can enter through Site 1 and reach an endpoint in Site 2
• The return traffic egresses through Site 2’s local L3Out and is dropped because of the lack of state in Site 2’s FW
• Need a better solution than the use of L3Outs to integrate a perimeter FW!
Multi-Site and Network Services
Integration
Multi-Site and Network Services
Integration Models

Active and Standby pair deployed across Pods
• Currently supported only if the FW is in L2 mode, or in L3 mode but acting as default gateway for the endpoints

Active/Active FW cluster nodes stretched across Sites (single logical FW)
• Requires the ability to discover the same MAC/IP info in separate sites at the same time
• Not currently supported (scoped for a future ACI release)

Independent Active/Standby FW pairs in each Site
• Recommended deployment model for ACI Multi-Site
• Option 1: supported from 3.0 for N-S traffic if the FW is connected in L3 mode to the fabric; mandates the deployment of ingress traffic optimization
• Option 2: supported from the 3.2 release with the use of Service Graph with Policy Based Redirection (PBR)
Independent Active/Standby FW Pairs across Sites
Option 1: FWs Connected via L3Out (N-S Traffic Only)

[Diagram: Web-EPG and Ext-EPG with contract C1; an Active/Standby FW pair per site (10.10.10.10 in Site 1, 10.10.10.11 in Site 2) connected via the local L3Out; host routes injected into the WAN]

• Ingress optimization usually requires host-route advertisement on the L3Out
  - Currently supported only on GOLF L3Outs
  - Native support on ACI Border Leaf nodes planned for ACI release 4.0
• Alternative: run an overlay solution (LISP, GRE, etc.) in the WAN
Independent Active/Standby FW Pairs across Sites (ACI 3.2 Release)
Option 2: Use of Service Graph and Policy Based Redirection

• SW and HW dependencies:
  - Supported from ACI release 3.2
  - Mandates the use of EX/FX leaf nodes (both for compute and service leaf switches)
• The PBR policy applied on a leaf switch can only redirect traffic to a service node deployed in the local site
  - Requires the deployment of an independent service node function in each site
  - Various design options to increase resiliency for the service node function: per-site Active/Standby pair, per-site Active/Active cluster, per-site multiple independent Active nodes
  - No support in the 3.2 release for Active/Standby or Active/Active clustering across sites
  - Only a single-node PBR policy with FW is supported in the 3.2 release
Use of Service Graph and Policy Based Redirection
Resilient Service Node Deployment in Each Site

Three per-site options, all with the FW in L3 mode:
• Active/Standby cluster: the pair represents a single MAC/IP entry in the PBR policy
• Active/Active cluster: the cluster represents a single MAC/IP entry in the PBR policy; Spanned Ether-Channel Mode supported with Cisco ASA/FTD platforms
• Independent Active nodes: each Active node represents a unique MAC/IP entry in the PBR policy; symmetric PBR ensures each flow is handled by the same Active node in both directions

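The symmetric-PBR idea above can be sketched in a few lines (a conceptual model; the actual Nexus hashing implementation differs): sorting the two IPs before hashing makes the key identical for both directions of a flow, so both legs pick the same active node.

```python
import hashlib

def pbr_node(ip_a, ip_b, proto, nodes):
    """Pick a service node for a flow. Sorting the two IPs makes the
    hash key identical for A->B and B->A, which is what guarantees the
    same active node sees both legs of the flow."""
    key = ",".join(sorted([ip_a, ip_b]) + [str(proto)])
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["FW-1", "FW-2", "FW-3"]
# both directions of the same flow land on the same independent active node
assert pbr_node("10.1.0.10", "10.2.0.20", 6, nodes) == \
       pbr_node("10.2.0.20", "10.1.0.10", 6, nodes)
```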
Use of Service Graph and Policy Based Redirection
North-South and East-West Use Cases

[Diagrams: North-South - WAN Edge, Ext-BD, Service-BD and Web-BD in VRF1, FW in one-arm L3 mode between the WAN L3Out and Web-EPG; East-West - Web-BD, Service-BD and App-BD in VRF1, FW between Web-EPG and App-EPG]

• Best-practice recommendations for both North-South and East-West use cases:
  - Service node deployed in 'one-arm' mode ('two-arm' mode also supported but not preferred)
  - Service-BD must be stretched across sites (BUM flooding can/should be disabled)
  - Ext-EPG must also be a stretched object, mapped to the individual L3Outs defined in each site
  - Web-BD and App-BD can be stretched across sites or locally defined in each site
• North-South use case
  - Intra-VRF only support in the ACI 3.2 release
• East-West use case
  - Supported intra-VRF or inter-VRF/Tenant
  - Requires configuring the IP range for the endpoints under the Provider EPG
Use of Service Graph and Policy Based Redirection
North-South Communication - Inbound Traffic

[Diagram: Ext-EPG and Web-EPG with contract C1 across Site 1 and Site 2; independent Active/Standby FW pairs in L3 mode; L3Out-Site1 and L3Out-Site2; the compute leaf always applies the PBR policy]

• Inbound traffic can enter any site when destined to a stretched subnet (if ingress optimization is not deployed or not possible)
• The PBR policy must always be applied on the compute leaf node where the destination endpoint is connected
• Requires the VRF to have the default policies for enforcement preference and direction
• Supported only intra-VRF in the ACI 3.2 release
• Ext-EPG and Web-EPG can indifferently be provider or consumer of the contract
Use of Service Graph and Policy Based Redirection
North-South Communication - Outbound Traffic

[Diagram: same topology as the inbound case; the PBR policy for the return traffic is applied on the same compute leaf]

• The PBR policy is always applied on the same leaf where it was applied for inbound traffic
  - Ensures the same service node is selected for both legs of the flow
• Different L3Outs can be used for the inbound and outbound directions of the same flow
Use of Service Graph and Policy Based Redirection
East-West Communication (1)

[Diagram: Web-EPG (consumer) and App-EPG (provider) with contract C1 across Site 1 and Site 2; independent Active/Standby FW pairs in L3 mode; the consumer leaf always applies the PBR policy]

• EPGs can be locally defined or stretched across sites
• EPGs can be part of the same VRF or in different VRFs (and/or Tenants)
• The PBR policy is always applied on the leaf switch where the Consumer* endpoint is connected
  - The consumer leaf always redirects traffic to a local service node

*This will change to Provider-side enforcement from ACI release 4.0
Use of Service Graph and Policy Based Redirection
East-West Communication (2)

[Diagram: same topology; the consumer leaf applies the PBR policy, the provider leaf does not]

• The provider leaf must not apply the PBR policy, to ensure proper traffic stitching through the FW node that has built the connection state
  - Ensures both legs of the flow are handled by the same service node
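The east-west enforcement-point rule on the last two slides can be condensed into one predicate (an illustrative model; function and role names are not ACI API names):

```python
def leaf_applies_pbr(leaf_role, release="3.2"):
    """East-west redirect point: in ACI 3.2 only the consumer leaf
    applies the PBR policy (and always redirects to a local service
    node); the provider leaf forwards without redirecting, so the
    return leg is stitched back through the same consumer-site FW.
    Per the slide note, enforcement moves to the provider side in 4.0."""
    if release == "3.2":
        return leaf_role == "consumer"
    return leaf_role == "provider"  # planned 4.0 behavior, per the slide note

assert leaf_applies_pbr("consumer") is True
assert leaf_applies_pbr("provider") is False
```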
Multi-Site and Virtual Machine Manager
(VMM) Integration
ACI Multi-Site and VMM Integration
Option 1 - Separate VMM per Site

[Diagram: VMM 1 in DC1 managing vSwitch1 hosts in Site 1, VMM 2 in DC2 managing vSwitch2 hosts in Site 2; one VMM domain per site]

• Typical deployment model for an ACI Multi-Site design
• Creation of separate VMM domains in each site, which are then exposed to the Multi-Site Orchestrator
ACI Multi-Site and VMM Integration
Option 2 - Single VMM Managing Host Clusters in Separate Sites

[Diagram: a single VMM 1 in DC1 managing vSwitch1 hosts in Site 1 and vSwitch2 hosts in Site 2; still one VMM domain per site]

• Even the deployment of a single VMM leads to the creation of separate VMM domains across sites
ACI Multi-Site and VMM Integration
Live Migration across Sites

[Diagram: vCenter Server 1 (VDS1) in Site 1 and vCenter Server 2 (VDS2) in Site 2, with EPG1 stretched across both; live vMotion between sites]

• Live virtual machine migration across sites is supported only with vCenter deployments (both for the single- and multiple-vCenter options)
  - Requires vSphere 6.0 or newer
Multi-Pod and Multi-Site Integration
ACI Multi-Pod and Multi-Site (ACI 3.2 Release)
Main Use Cases

• Adding a Multi-Pod fabric as a 'Site' on the Multi-Site Orchestrator (MSO)
• Converting a single-Pod fabric (already added to MSO) to a Multi-Pod fabric
ACI Multi-Pod and Multi-Site
Connectivity between Pods and Sites

[Diagram: Site 1 is a Multi-Pod fabric (Pod 'A' and Pod 'B', each with 1st-gen and 2nd-gen spines) interconnected through the IPN; the 2nd-gen spines also connect through the IP WAN toward Site 2]

• Only 2nd-generation spines must be connected to the external network
  - Need to add 2nd-gen spines in each Pod (at least two per Pod) and migrate the connections to the IPN from 1st-gen spines to 2nd-gen spines
• A single 'infra' L3Out and set of uplinks carries both Multi-Pod and Multi-Site East-West traffic
Connectivity between Pods and Sites
Not Supported Topology

[Diagram: same Multi-Pod fabric, but with separate uplinks between the spines and the external networks - one set toward the IPN, a separate set toward the IP WAN]

• Only 2nd-generation spines must be connected to the external network
  - Need to add 2nd-gen spines in each Pod (at least two per Pod) and migrate the connections to the IPN from 1st-gen spines to 2nd-gen spines
• A single 'infra' L3Out and set of uplinks must carry both Multi-Pod and Multi-Site East-West traffic; separate uplinks between the spines and the external networks are not supported
ACI Multi-Pod and Multi-Site
BGP Spine Roles

[Diagram: a Multi-Pod fabric (Site 1) connected to the IPN/ISN, with BGP Speakers and BGP Forwarders highlighted across its Pods]

• Spines in each Multi-Pod fabric can have one of two roles:
  1. BGP Speakers: establish EVPN peerings with BGP speakers in remote sites and with BGP Forwarders in the local site (intra- and inter-Pod)
     - Recommended to deploy two speakers per Multi-Pod fabric (in separate Pods)
     - Explicitly configured on MSO
     - A Multi-Site speaker must be a Multi-Pod spine as well
  2. BGP Forwarders: establish BGP EVPN peerings with the BGP speakers in the local site
     - All the spines that are not speakers implicitly become forwarders
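The speaker/forwarder session layout can be sketched as a small graph computation (an illustrative model; names are hypothetical): speakers mesh with remote speakers, and each speaker also peers with its local forwarders.

```python
from itertools import combinations

def evpn_peerings(sites):
    """Compute the MP-BGP EVPN session set: speaker-to-speaker across
    sites, speaker-to-forwarder inside each site. Forwarders never
    peer directly with remote spines."""
    sessions = set()
    all_speakers = [(site, sp) for site, r in sites.items() for sp in r["speakers"]]
    # inter-site: full mesh between speakers of different sites
    for (s1, sp1), (s2, sp2) in combinations(all_speakers, 2):
        if s1 != s2:
            sessions.add(frozenset((sp1, sp2)))
    # intra-site: each speaker peers with every local forwarder
    for site, roles in sites.items():
        for sp in roles["speakers"]:
            for fw in roles["forwarders"]:
                sessions.add(frozenset((sp, fw)))
    return sessions

sites = {
    "site1": {"speakers": ["S1-sp1", "S1-sp2"], "forwarders": ["S1-fw1", "S1-fw2"]},
    "site2": {"speakers": ["S2-sp1", "S2-sp2"], "forwarders": ["S2-fw1"]},
}
p = evpn_peerings(sites)
assert frozenset(("S1-sp1", "S2-sp1")) in p      # inter-site, speaker-to-speaker
assert frozenset(("S1-sp1", "S1-fw1")) in p      # intra-site, speaker-to-forwarder
assert frozenset(("S1-fw1", "S2-fw1")) not in p  # forwarders never peer inter-site
```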
ACI Multi-Pod and Multi-Site
Inter-Site and Intra-Site EVPN Sessions

[Diagram: inter-site MP-BGP EVPN peerings run speaker-to-speaker across the IP WAN; intra-site MP-BGP EVPN peerings run speaker-to-forwarders within each fabric, across the IPN for a Multi-Pod site]
ACI Multi-Pod and Multi-Site
Inter-Site L2/L3 Unicast Traffic

[Diagram: spine endpoint tables across sites and Pods. In Site 2, local endpoint EP1 points to Leaf 1, while remote endpoints EP2, EP3 and EP4 point to the per-Pod data-plane ETEPs of Site 1 (DP-ETEP-S1P1, DP-ETEP-S1P3 and DP-ETEP-S1P2 respectively). In each Pod of Site 1, locally-attached endpoints point to local leaf nodes, EP1 points to DP-ETEP-S2, and endpoints in other Pods point to the corresponding per-Pod DP-ETEP]
ACI Multi-Pod and Multi-Site
Inter-Site L2 BUM Traffic (1)

[Diagram: a BUM frame originated by EP2 in a Pod of Site 1 is replicated to the other local Pods via PIM-Bidir; because the frame originated in the local Pod, it is also sent via ingress replication toward the remote site (HER-ETEP-S2). In each receiving Pod, only the designated forwarder (DF) forwards the BUM frame into the local Pod]
ACI Multi-Pod and Multi-Site
Inter-Site L2 BUM Traffic (2)

[Diagram: a BUM frame originated by EP1 in Site 2 is ingress-replicated to the HER-ETEP address of Site 1 (HER-ETEP-S1); the receiving spine then uses PIM-Bidir to replicate the frame to the Pods of Site 1]
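The replication behavior on the last two slides can be summarized in one function (a conceptual sketch, not switch code): PIM-Bidir floods within the local Multi-Pod fabric, while ingress replication toward remote sites happens only for frames that originated locally, so BUM is never transited between sites.

```python
def bum_copies(src_pod, local_pods, remote_site_her_eteps, originated_locally=True):
    """Replication of one BUM frame from a Multi-Pod fabric: PIM-Bidir
    carries it to the other Pods of the local fabric; ingress
    replication sends one unicast copy to each remote site's HER-ETEP,
    but ONLY when the frame originated in the local fabric (a frame
    received from a remote site is flooded locally, never re-sent)."""
    copies = [("pim-bidir", pod) for pod in local_pods if pod != src_pod]
    if originated_locally:
        copies += [("ingress-replication", etep) for etep in remote_site_her_eteps]
    return copies

# Frame originated in Pod1 of Site 1: flooded to Pods 2-3 plus one copy to Site 2
local = bum_copies("pod1", ["pod1", "pod2", "pod3"], ["HER-ETEP-S2"])
assert ("ingress-replication", "HER-ETEP-S2") in local
# Frame received FROM Site 2: flooded locally only, never replicated onward
remote = bum_copies("pod1", ["pod1", "pod2", "pod3"], ["HER-ETEP-S2"], originated_locally=False)
assert ("ingress-replication", "HER-ETEP-S2") not in remote
```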
ACI Multi-Pod and Multi-Site
TEP Pools Deployment and Advertisement

[Diagram: in Site 1, the per-Pod TEP pools (Pod1 10.1.0.0/16, Pod2 10.2.0.0/16, Pod3 10.3.0.0/16) are all advertised into the IPN, but their advertisement is filtered out toward the IP WAN backbone since TEP pools are not needed for inter-site communication. The same filtering applies to the Site 2 TEP pool (10.1.0.0/16)]
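The boundary filter described above amounts to a prefix check (an illustrative sketch of the policy intent, not NX-OS route-map syntax): drop anything inside a Pod TEP pool toward the backbone, while ETEP addresses used for inter-site traffic remain reachable.

```python
import ipaddress

def permit_toward_backbone(route, tep_pools):
    """Filter applied at the IPN -> backbone boundary: deny every
    prefix falling inside a Pod TEP pool (internal VTEP reachability is
    only needed within the fabric's own IPN); inter-site communication
    relies on ETEP addresses, which are permitted through. Because the
    pools never leak into the backbone, they may even overlap between
    sites (e.g. 10.1.0.0/16 in both Site 1 and Site 2)."""
    net = ipaddress.ip_network(route)
    return not any(net.subnet_of(ipaddress.ip_network(p)) for p in tep_pools)

pools = ["10.1.0.0/16", "10.2.0.0/16", "10.3.0.0/16"]
assert permit_toward_backbone("10.2.0.0/16", pools) is False        # TEP pool filtered
assert permit_toward_backbone("192.168.100.1/32", pools) is True    # ETEP still advertised
```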
Migration Scenarios
ACI Multi-Site
Migration Scenarios

Greenfield + Greenfield:
1. Model the new tenant and inter-site policies on the ACI Multi-Site Orchestrator and associate the template to the sites
2. Push the policies to the ACI sites

Brownfield + Greenfield:
1. Import the existing tenant policies from Site 1 into a new template on the ACI Multi-Site Orchestrator
2. Associate the template also to Site 2
3. Push the template policies to Site 2
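The brownfield flow above (import, associate, push) can be modeled with a toy orchestrator object. This is NOT the real MSO REST API; all class, method and field names here are illustrative only.

```python
class Orchestrator:
    """Toy model of the MSO template workflow (hypothetical names)."""
    def __init__(self):
        self.templates = {}     # template name -> policy dict
        self.associations = {}  # template name -> set of associated sites
        self.deployed = {}      # site -> policies pushed so far

    def import_from_site(self, site, policies, template):
        # Brownfield step 1: pull the existing tenant policies into a template
        self.templates[template] = dict(policies)
        self.associations[template] = {site}

    def associate(self, template, site):
        # Step 2: stretch the template to an additional site
        self.associations[template].add(site)

    def push(self, template):
        # Step 3: deploy the template's policies to every associated site
        for site in self.associations[template]:
            self.deployed.setdefault(site, {}).update(self.templates[template])

mso = Orchestrator()
mso.import_from_site("site1", {"Tenant-A": {"bd": "BD1"}}, "tmpl1")
mso.associate("tmpl1", "site2")
mso.push("tmpl1")
assert mso.deployed["site2"]["Tenant-A"] == {"bd": "BD1"}  # greenfield site now configured
```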
ACI Multi-Site
Migration Scenarios

• 'Brownfield' ACI fabric to Multi-Site: import the policies from Site 1 and push them to Site 2
• Multi-Pod to 'hierarchical Multi-Site' (Multi-Pod fabrics as sites): supported from the ACI 3.2 release
• Multi-Fabric design (dual fabrics with L2/L3 DCI and an inter-site stretched app) to Multi-Site: scoped for the future
Conclusions and Q&A
Multi-Pod & Multi-Site
A Reason for Both

[Diagram: two Multi-Pod fabrics - Fabric 'A' (Availability Zone 1, Pods '1.A' and '2.A') and Fabric 'B' (Availability Zone 2, Pods '1.B' and '2.B') - each running 'classic' active/active across its Pods, interconnected by ACI Multi-Site; application workloads are deployed across availability zones]
Where to Go for More Information

• ACI Multi-Pod White Paper
  http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-737855.html?cachemode=refresh
• ACI Multi-Pod Configuration Paper
  https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739714.html
• ACI Multi-Site White Paper
  https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html
• Deploying ACI Multi-Site from Scratch
  https://www.youtube.com/watch?v=HJJ8lznodN0
Cisco Webex Teams

Questions? Use Cisco Webex Teams (formerly Cisco Spark) to chat with the speaker after the session.

How:
1. Find this session in the Cisco Live Mobile App
2. Click "Join the Discussion"
3. Install Webex Teams or go directly to the team space
4. Enter messages/questions in the team space

Webex Teams will be moderated by the speaker until June 18, 2018.
cs.co/ciscolivebot#BRKACI-2125
Complete your online session evaluation

• Give us your feedback to be entered into a Daily Survey Drawing.
• Complete your session surveys through the Cisco Live mobile app or on www.CiscoLive.com/us.
• Don't forget: Cisco Live sessions will be available for viewing on demand after the event at www.CiscoLive.com/Online.
Continue your education

• Demos in the Cisco campus
• Walk-in self-paced labs
• Meet the engineer 1:1 meetings
• Related sessions
Thank you

#CLUS
