Network pooling
Traditional DCN:
• Service logical partitions are bound to physical locations.
• Chimney-type network with a small Layer 2 range, where compute resources cannot be migrated.
• Networks are separated from services, resulting in low collaboration efficiency.
• Rapid service rollout is restricted by the network.

Network pooling:
• Fragmented networks are scaled up into pools.
• The resource pool range extends from a single DC to multiple DCs.
SDN architecture
Fabric: the oversubscription ratio is low, and a non-blocking forwarding network can be provided.
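As a rough illustration of the oversubscription idea, the ratio can be computed from a leaf switch's aggregate downlink (server-facing) and uplink (fabric-facing) bandwidth; the port counts below are hypothetical, not a specific CloudEngine model:

```python
def oversubscription_ratio(downlink_gbps: float, uplink_gbps: float) -> float:
    """Ratio of total server-facing bandwidth to total fabric-facing
    bandwidth on a leaf switch. A ratio of 1:1 means the fabric is
    non-blocking for traffic from this leaf."""
    return downlink_gbps / uplink_gbps

# Example: a leaf with 48 x 25GE server ports and 6 x 100GE uplinks.
ratio = oversubscription_ratio(48 * 25, 6 * 100)
print(f"{ratio}:1")  # 2.0:1
```

A lower ratio (closer to 1:1) means less contention on the leaf-to-spine links.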
Server leaf:
• Connects to servers.
• Functions as a common Network Virtualization Edge (NVE) node on a VXLAN network.

Service leaf:
• Connects to firewalls and LBs.
• Functions as a common NVE node on a VXLAN network.
Overlay network
On a VXLAN network, there are three types of overlay networks depending on the role of overlay network edge
devices (VXLAN NVE nodes): network overlay, host overlay, and hybrid overlay. Huawei CloudFabric solution
recommends the VXLAN network of the network overlay type.
Hybrid overlay: Some physical switches and vSwitches function as NVE nodes.
• Network overlay: high forwarding performance. VXLAN tunnels are established based on physical switches, so VXLAN processing does not occupy server CPU resources and physical devices deliver high forwarding performance. Devices on the live network can be reused and have good compatibility, and the network overlay allows the SDN network and traditional network to communicate with each other.
• Host overlay: low forwarding performance. VXLAN tunnels are established based on vSwitches, and VXLAN processing occupies server CPU resources, so forwarding performance is greatly affected by the CPU. All devices on the live network are reused but have poor compatibility, and interconnection between the SDN network and traditional network is not supported.
• Hybrid overlay: low forwarding performance. VXLAN tunnels are established between physical switches and vSwitches. Hardware-based forwarding does not occupy server CPU resources, but software VXLAN processing does. Access devices on the live network are reused and have poor compatibility. The SDN network and traditional network can communicate with each other.
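Whichever device plays the NVE role, it encapsulates traffic with the same 8-byte VXLAN header. The sketch below builds that header as defined in RFC 7348; the helper names are illustrative, and the outer Ethernet/IP/UDP headers added by a real NVE are omitted:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348):
    flags byte 0x08 (valid-VNI bit) + 24 reserved bits,
    then the 24-bit VNI + 8 reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!I", 0x08 << 24) + struct.pack("!I", vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to an inner Ethernet frame."""
    return vxlan_header(vni) + inner_frame

print(vxlan_header(5000).hex())  # 0800000000138800
```

The 24-bit VNI is what allows VXLAN to carry roughly 16 million tenant segments, far beyond the 4094 VLANs of a traditional Layer 2 network.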
Network overlay

On the network overlay, both VTEPs of a VXLAN tunnel are physical switches (leaf and spine/border leaf nodes). The network overlay falls into centralized and distributed modes.
Host overlay

On the host overlay, all VTEPs are vSwitches deployed on servers. East-west traffic in DCs is forwarded through VXLAN tunnels between vSwitches.
Hybrid overlay

On the hybrid overlay, some VTEPs are physical switches and others are vSwitches.

Distributed gateway

[Figure: architecture of the distributed network overlay — spine nodes, leaf nodes acting as distributed gateways for end ports, and tenant A's VPCs (VPC1 and VPC2 for departments 1 and 2 of tenant A), each containing vRouter, vFW, and vLB instances and VMs]
When devices are deployed independently and distributed gateways are used, which of the following
roles does not need to support VXLAN?
A. Service leaf
B. Server leaf
C. Border leaf
D. Spine
[Figure: the resource management layer (Huawei Cloud Stack management platform and virtualization management platforms such as vCenter) manages VMs, containers, and PMs in Domain1 and Domain2 over an IP network spanning Site1 and Site2]
Active/Standby DR

Typical scenarios include an intra-city primary DC with an intra-city backup DC, and public and private cloud collaboration.

• Controllers are deployed in active/standby mode across DC1 and DC2, and an external arbitration device is deployed at a different place to improve reliability.
• Active and standby egresses to the external network are supported.
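The role of the arbitration device can be sketched as a simple tie-breaking rule (function and signal names here are hypothetical, not the controller's real API): the standby takes over only when the arbiter also cannot reach the active controller, which prevents a split-brain with two active controllers.

```python
def choose_active(active_alive: bool, standby_alive: bool,
                  arbiter_sees_active: bool) -> str:
    """Decide which controller should run as active.

    active_alive:       the standby still receives heartbeats from the active.
    arbiter_sees_active: the external arbitration device can reach the active.
    """
    if active_alive or arbiter_sees_active:
        return "active"    # normal operation, or a mere controller-to-
                           # controller partition: no failover
    if standby_alive:
        return "standby"   # failover confirmed by the arbiter
    return "none"          # total outage

print(choose_active(False, True, True))   # active (partition, not failure)
print(choose_active(False, True, False))  # standby (real failover)
```

Placing the arbiter at a third site is what makes the decision meaningful: it sees the active controller over a path independent of the inter-controller link.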
Inter-DC communication

• Inter-DC VPC communication: at the application layer, a large VPC spans DC1 and DC2 at the infrastructure layer.
• Inter-DC communication between VPCs located in DC1 and DC2.

[Figure: server leaf and border leaf nodes in DC1 and DC2 interconnecting the VMs of both DCs]
• Traditional solution: multiple independent products, including the EMS/NMS (eSight/U2000) for management via CLI/SNMP/Qx, the SDN controller for control via NETCONF/YANG and OpenFlow/OVSDB, and a network analyzer via Telemetry, each connecting separately to traditional and SDN devices.
• iMaster NCE: manager, controller, and analyzer convergence with closed-loop automation — iMaster NCE-Fabric (management and control via NETCONF/YANG, CLI/SNMP/Qx, and OpenFlow/OVSDB) and iMaster NCE-FabricInsight (analysis via Telemetry), together with an intent engine, Design Studio, and open APIs, managing both traditional and SDN devices.
Zero-waiting deployment through E2E automated network deployment

The cloud platform layer (Big Data, FusionStage, FusionSphere) interworks with the management and control layer.

• Ultra-fast network provisioning: simple service logic and drag-and-drop operations on the GUI ensure high deployment efficiency.
• Fast container rollout: 10K/min.
Case 1: [Route configuration fault] When the network administrator modifies the route configuration on a device, a routing loop is incorrectly introduced. It takes a long time to manually detect faults, and 5 person-days to check network-wide route configurations for loops and blackholes after automatic route configuration and route addition/deletion.

Case 2: [Underlay network verification] Check network connectivity after DCNs are created and expanded. After devices are powered on, cables are connected based on the network plan, and the underlay network is automatically configured. It takes 5 person-days to manually check the network connectivity and verify the network connection correctness.
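The loop checking described in Case 1 amounts to cycle detection over a next-hop graph. A minimal sketch, using a hypothetical per-prefix next-hop table (router names are illustrative):

```python
def find_routing_loop(next_hop: dict, start: str):
    """Follow next hops from `start` for one prefix; return the loop
    path if a router repeats, or None if the walk leaves the table
    (the prefix reaches its egress, or hits a blackhole)."""
    path, seen = [], set()
    node = start
    while node in next_hop:
        if node in seen:
            return path[path.index(node):]  # the cycle
        seen.add(node)
        path.append(node)
        node = next_hop[node]
    return None

# Hypothetical table for one prefix: R1 -> R2 -> R3 -> R1 is a loop.
hops = {"R1": "R2", "R2": "R3", "R3": "R1"}
print(find_routing_loop(hops, "R1"))  # ['R1', 'R2', 'R3']
```

An automated verifier repeats this walk for every (prefix, ingress) pair, which is exactly the kind of exhaustive check that is impractical to do by hand at network scale.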
Supported cloud and virtualization platforms include VMware vCenter, System Center, and OpenShift, which manage VMs and containers on hypervisors.

Automatic DR mode (recommended)

*RPO: Recovery Point Objective
Common indicators: proactive monitoring in multiple modes
• Real-time monitoring and proactive subscription to all-scenario data.
• Data collection using multiple modes, such as gRPC or syslog.

Health check report: multi-dimensional health details
• Comprehensive network health check based on the five-layer model.
• Real-time or periodic push of professional health check reports.

Abnormal root causes: quick diagnosis and rectification
• Root cause diagnosis for a detected typical fault in 3 minutes.
• Troubleshooting together with iMaster NCE-Fabric.
Intuitive status: intelligent exception detection based on dynamic baselines, intuitively displaying historical trends and facilitating network optimization.
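Dynamic-baseline detection can be approximated by flagging samples that deviate from recent history by several standard deviations. This toy sketch illustrates the idea only; it is not FabricInsight's actual algorithm:

```python
from statistics import mean, stdev

def is_anomalous(history, value, k=3.0):
    """Flag `value` if it deviates from the baseline built from
    `history` by more than k standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > k * max(sigma, 1e-9)

cpu_history = [31, 29, 30, 32, 28, 30, 31, 29]  # CPU usage %, one sample per minute
print(is_anomalous(cpu_history, 30))  # False
print(is_anomalous(cpu_history, 95))  # True
```

A baseline learned from each object's own history adapts to daily and weekly load patterns, which a fixed threshold cannot do.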
| Object | Metric | Default Interval |
| Device | CPU usage and memory usage | 1 min |
| Board | CPU usage and memory usage | 1 min |
| Chip | FIB/MAC entry usage, TCAM usage | 1 min |
| Interface | Numbers of received/sent packets and bytes, lost packets, error packets, broadcast packets, multicast packets, and unicast packets | 1 min |
| Queue | Buffer size | 100 ms |
| Optical module | Rx/Tx power, current, voltage, and temperature | 30 min |
| Packet loss behavior | Packet loss and congestion detection | 10 s |

Set up a benchmark, compare against baseline metric trends, and identify abnormal metrics.
| Object | Monitoring items |
| Network | Interconnection port status; traffic and error packets on ports; optical link status; congestion and packet loss; queue depth detection based on the network link load |
| Device | Hardware status (board, fan, power supply, etc.); capacity (ARP, FIB, MAC entries, etc.); CPU and memory usage — whether physical components are normal and whether resource overflow occurs |
Analyze 20+ types of monitoring objects and 70+ metrics to intuitively display network-wide experience quality
Step 1: Health overview
Display the overall network health metrics and trend based on the five-level model. Intuitively display the resource overview, load overview, and quality overview across the entire network.

Step 2: Multi-dimensional detailed analysis
Analyze the network health from the following dimensions to determine the network health trend: device, network, protocol, service, and overlay. Identify network quality issues based on the five dimensions of the network health evaluation system.

Step 3: Professional report interpretation
Summarize issues from each dimension and periodically push reports of detection details, facilitating identification of exceptions. Display reports in multiple dimensions, identify abnormal monitoring objects, and provide troubleshooting and network optimization suggestions.
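One plausible way to roll the five per-dimension scores into the overall figure shown in Step 1 is a weighted average. The weights and scores below are made up for illustration; the real evaluation model is internal to FabricInsight:

```python
# Hypothetical equal weights over the five health dimensions.
WEIGHTS = {"device": 0.2, "network": 0.2, "protocol": 0.2,
           "service": 0.2, "overlay": 0.2}

def overall_health(scores):
    """Weighted average of per-dimension health scores (each 0-100)."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

scores = {"device": 100, "network": 90, "protocol": 100,
          "service": 80, "overlay": 100}
print(overall_health(scores))  # 94.0
```

Keeping the per-dimension scores visible alongside the aggregate is what lets Step 2 point at which dimension is dragging the overall trend down.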
Knowledge-based fault analysis

• Exception detection for typical faults such as BGP flapping, OSPF flapping, and interface flapping.
• Continuous learning and training based on real site faults builds up the knowledge base (Knowledge 1, 2, 3, 4, ...).
• Root cause analysis, risk prediction, and intent-based troubleshooting in a closed-loop manner.
• Model application: multi-dimensional DC data (service flow data, telemetry data, etc.) goes through data cleansing, AI-based exception identification, and network object modeling.
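Flapping detection of the kind listed above can be sketched as counting state transitions inside a sliding time window; the window length and threshold below are illustrative defaults, not FabricInsight's tuned values:

```python
def is_flapping(events, window_s=60, threshold=5):
    """Return True if more than `threshold` state changes fall inside
    any `window_s`-second sliding window over the event timestamps."""
    events = sorted(events)
    for start in events:
        count = sum(1 for t in events if start <= t < start + window_s)
        if count > threshold:
            return True
    return False

# Timestamps (seconds) of interface up/down transitions.
print(is_flapping([0, 5, 8, 12, 20, 30, 41]))  # True  (7 changes within 60 s)
print(is_flapping([0, 300, 900, 1800]))        # False
```

The same windowed count applies equally to BGP or OSPF session state changes; only the event source differs.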
CloudEngine 6820-48S6CQ
CloudEngine 6881-48S6CQ
CloudEngine 6881-48T6CQ
CloudEngine 6860-SAN
CloudEngine 8850-SAN
CloudEngine 9860-4C-EI
CloudEngine 16800
2. (Single-answer question) Which statement about management and control interfaces between iMaster
NCE-Fabric and CloudEngine series physical switches is false?
A. NETCONF: is used by iMaster NCE-Fabric to deliver configurations to physical switches.
B. OVSDB: is used by iMaster NCE-Fabric to exchange dynamic configuration with physical
switches.
C. SNMP: is used by iMaster NCE-Fabric to discover and obtain device information and manage NEs.
D. OpenFlow: is used by iMaster NCE-Fabric to implement path detection and connectivity
detection for physical switches.
Page 48 Copyright © Huawei Technologies Co., Ltd. All rights reserved.
Summary
Huawei CloudFabric solution redefines O&M, deployment, and interconnection of
DCNs, helping customers build an intelligent, ultra-simplified, ultra-broadband,
open, and secure cloud DCN.
This document provides an overview of the CloudFabric solution, describes four
automation scenarios of CloudFabric SDN, and introduces components including
CloudEngine series DC switches, iMaster NCE-Fabric, and iMaster NCE-FabricInsight.
https://e.huawei.com/en/material/materiallist?&id=%7B93C489B0-8074-4D34-BE88-
46A41F54458D%7D&permissions=PARTNER-MEDIUM