The NE40E-X8A consists of the data plane, control and management plane, and
monitoring plane, as shown in the figure above. This architecture improves system
reliability and facilitates upgrades on each plane.
The data plane is responsible for high-speed processing and non-blocking switching of
data packets. It encapsulates and decapsulates packets, forwards IPv4/IPv6/MPLS packets,
performs QoS processing and scheduling, handles internal high-speed switching, and
collects statistics.
The control and management plane completes all control and management functions for
the system and is the core of the entire system. Control and management units process
protocols and signals, configure and manage the system, and display the system status.
The monitoring plane monitors the ambient environment to ensure the secure and stable
operation of the system. It detects voltage levels, controls system power-on and power-off,
monitors the temperature, and controls fan modules. If a unit fails, the monitoring plane
isolates the faulty unit promptly so the other units remain unaffected.
Dimensions (H x W x D)
930 mm x 442 mm x 650 mm (36.64 in. x 17.40 in. x 25.59 in.), chassis main body
930 mm x 442 mm x 750 mm (36.64 in. x 17.40 in. x 29.53 in.), chassis including the
front and rear assembly and cable racks
The system draws air from the front and discharges air from the back. The air intake vent
resides above the board area on the front chassis; the air exhaust vent resides above the
board area on the rear chassis.
The air filters on the air intake vents are installed vertically and feature a curved
face, a large surface area, and low windage resistance, which improves heat dissipation
efficiency.
The black sponge air filter on the air intake vent prevents dust from entering the
system.
To ensure good heat dissipation and ventilation for the system and to prevent dust from
accumulating on an air filter, clean the air filters regularly. It is recommended that
an air filter be cleaned at least once every three months and replaced once a year. An
air filter installed in a dusty environment needs to be cleaned more frequently.
On core switches, the cabling between line cards and switch fabric units is an important
factor in per-slot bandwidth: the longer the backplane cable and the higher the signal
rate, the greater the loss.
The CE12800 uses an orthogonal architecture, which does not require backplane cables.
This architecture greatly increases system bandwidth and improves the evolution capability.
With an orthogonal design (three-level Clos architecture), service line cards and switch
fabric units of the CE12800 constitute a multi-level multi-plane switch fabric. This switch
fabric allows for unlimited capacity expansion, helping implement large-scale non-blocking
switching in data centers.
A Clos architecture has multiple stages, and each switch unit at one stage connects to
all switch units at the adjacent stage. Clos architectures are non-blocking,
rearrangeable, and scalable.
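As an illustrative aside (not from the original material), the classic strict-sense non-blocking condition for a three-stage Clos fabric can be sketched in a few lines:

```python
def clos_strictly_nonblocking(n: int, m: int) -> bool:
    """Clos (1953): a three-stage fabric with n inputs per ingress
    switch and m middle-stage switches is strictly non-blocking
    when m >= 2*n - 1."""
    return m >= 2 * n - 1

# With 8 inputs per ingress switch, 15 middle-stage switches suffice:
print(clos_strictly_nonblocking(8, 15))  # True
print(clos_strictly_nonblocking(8, 14))  # False
```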
Huawei SDN Products Overview 28
Huawei CE6800/5800 series switches are next-generation data center switches designed
for high-performance data centers. The CE6800 series switches support 10GE access, while
the CE5800 series switches support GE access.
When CE6800/5800 switches function as access switches, servers connect to them through
GE/10GE links, and the CE6800/5800 switches connect to CE12800 core switches through
10GE/40GE uplinks.
CE6800/5800 switches provide high-performance 40GE ports, which can connect to high-
density 40GE line processing units (LPUs) on CE12800 switches to construct full-40G data
center networks.
CE6850-48T4Q-EI: Provides forty-eight 10G BASE-T Ethernet ports and four 40G QSFP+
Ethernet optical ports
It offers 1.28 Tbps of switching capacity and 960 Mpps of forwarding performance with
L2/L3 wire-speed forwarding, among the highest in the industry. With up to 64 10GE
interfaces, it provides industry-leading port density for high-density server access.
Its four high-performance 40GE QSFP+ interfaces can each be split into four 10GE
interfaces for flexible networking, and its 40GE uplinks work with CE12800 series
switches to build a non-blocking network platform.
CE6850-48S4Q-EI: Provides forty-eight 10G SFP+ Ethernet optical ports and four 40G
QSFP+ Ethernet optical ports.
Versatile Routing Platform (VRP) is the main network operating system running on Huawei
network products such as routers, switches, and firewalls. VRP has since been extended to
SDN networks, where it runs on Huawei's SDN controller, the SNC. Unlike traditional
network products, VRP can now be installed as network controller software separate from
dedicated hardware.
The SNC is a network controller based on VRP. It is a centralized network control system
that provides northbound RESTful/NETCONF APIs to third-party applications, such as
OpenStack and policy control applications, and southbound OpenFlow/NETCONF/PCEP/BGP
interfaces to control and manage forwarders. The VRP software runs on Linux and comprises
four logical layers, as shown in the figure above: the NE driver layer, NE abstraction
layer, network resource layer, and application layer.
1. NE driver layer: consists of four functional modules for the southbound OpenFlow,
NETCONF, PCEP, and BGP interfaces. The VRP uses these four protocols to communicate with
forwarders of different vendors; different southbound interfaces suit different
scenarios.
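For a concrete sense of one of these southbound protocols, the sketch below builds a standard NETCONF &lt;get-config&gt; RPC as defined in RFC 6241; the payload is generic and not taken from SNC documentation.

```python
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_get_config(message_id: str, datastore: str = "running") -> str:
    """Build a standard NETCONF <get-config> RPC (RFC 6241), the kind
    of request a controller's NETCONF driver sends to a forwarder."""
    rpc = ET.Element(f"{{{NC_NS}}}rpc", {"message-id": message_id})
    get_config = ET.SubElement(rpc, f"{{{NC_NS}}}get-config")
    source = ET.SubElement(get_config, f"{{{NC_NS}}}source")
    ET.SubElement(source, f"{{{NC_NS}}}{datastore}")
    return ET.tostring(rpc, encoding="unicode")

print(build_get_config("101"))
```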
2. NE abstraction layer: consists of logical router, logical switch, logical optical device, and
logical VAS functional modules. Each functional module corresponds to a type of
forwarder.
3. Network resource layer: consists of Fabric, path computation algorithm, and topology
resource management functional modules, responsible for collecting network topology
information, managing topology resources, and computing service paths based on
network topologies and virtualising topologies.
4. Application layer: consists of various service applications. Currently, only the VXLAN
application and BGP RR+ are supported.
The SNC supports various southbound interface protocols, such as NETCONF, OpenFlow, BGP,
and PCEP, depending on the SDN solution. It controls networks through these southbound
protocols, using them mainly for link discovery, topology management, policy making, and
entry delivery.
Besides communicating with forwarders through southbound interfaces, the SNC also
supports various northbound interface (NBI) protocols, such as SNMP and RESTful, to
integrate its functions with OSSs.
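As an illustration of what a northbound RESTful call might look like, the snippet below only constructs an HTTP request; the endpoint path and payload are hypothetical, since the SNC's actual RESTful API is not detailed in this material.

```python
import json
import urllib.request

# Hypothetical endpoint and payload, for illustration only.
url = "https://snc.example.com/controller/v2/vxlan-services"
payload = {"tenant": "tenant-1", "vni": 5001}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# The request is only built here, not sent anywhere.
print(request.method, request.full_url)
```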
It is recommended that you install the VRP software on a dedicated RH1288 rack server.
The VRP software provides the best performance when being installed on such a server.
If you want to install the VRP software on a general-purpose server, ensure that the
server meets the hardware configuration requirements.
The two most commonly used servers for SNC installation are RH rack servers and E9000
converged servers. Multiple RH server models can be used, for example the RH1288 V2,
RH2288 V2, and RH2285 V2.
The following slide describes the basic features of the most commonly used RH server,
the RH1288.
This chapter describes the basic features and appearance of the RH1288, the DC rack
server of the SNC.
The RH1288 is a general-purpose 1U dual-socket rack server launched by Huawei to meet
customer requirements for Internet, Internet data center (IDC), cloud computing,
enterprise market, and telecom service applications.
The RH1288 supports high performance computing (HPC), databases, virtualization, basic
enterprise applications, and telecommunication service applications thanks to its
outstanding computing performance, large storage capacity, low energy consumption,
high reliability, and ease of deployment and management.
Generally, the RH1288 V2 has two models:
1. RH1288 V2-8S: Supports eight 2.5-inch SAS HDDs, SATA HDDs, or solid-state
drives (SSDs).
2. RH1288 V2-4L: Supports four 3.5-inch SAS or SATA HDDs.
The power supply system for the E9000 chassis consists of six PSUs supporting N+N or
N+1 redundancy. The PSU hibernation feature sets spare PSUs to hibernation mode to save
energy, based on the PSU configuration and the current chassis power. The 2000 W,
2500 W, and 3000 W PSUs all support this feature under an appropriate voltage.
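The redundancy arithmetic can be sketched as follows; the sizing rule here is a generic illustration, not Huawei's official PSU configuration guidance.

```python
import math

def psus_required(chassis_power_w: float, psu_rating_w: float,
                  redundancy: str = "N+N") -> int:
    """Generic sizing sketch: N PSUs carry the load; N+N doubles
    them, while N+1 adds a single spare."""
    n = math.ceil(chassis_power_w / psu_rating_w)
    return 2 * n if redundancy == "N+N" else n + 1

# A 7500 W chassis load on 2500 W PSUs:
print(psus_required(7500, 2500, "N+N"))  # 6
print(psus_required(7500, 2500, "N+1"))  # 4
```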
The E9000 provides 14 fan modules separated into three zones, with each fan module
containing two fan units. Each zone is configured with N+1 redundancy, which ensures
that heat dissipation is not affected even if a fan or a fan module fails. The E9000
adopts forced air cooling with front-to-rear ventilation channels: air enters the
chassis from the front and is exhausted from the back. The fan modules adjust their
speed automatically based on the operating temperature of the compute nodes in the
chassis, or under the control of the management module.
Determine the slots for installing the switch or pass-through modules in the E9000
chassis. A maximum of four switch or pass-through modules can be installed in a chassis;
from left to right, the slots are 1E, 2X, 3X, and 4E.
The MM910 manages all the hardware devices in the E9000 chassis. Each chassis is
configured with two MM910s in active/standby mode. They support active/standby
switchover, hot swap, and the protocols of Intelligent Platform Management
Interface 2.0 (IPMI 2.0), Simple Network Management Protocol (SNMP) v3, SNMP
Trap v1, SNMP Trap v2c, SNMP Trap v3, Secure Sockets Layer (SSL), Secure Shell
(SSH), Secure File Transfer Protocol (SFTP), Hypertext Transfer Protocol Secure
(HTTPS), Network Time Protocol (NTP), Domain Name Service (DNS), Lightweight
Directory Access Protocol (LDAP), and Simple Mail Transfer Protocol (SMTP).
The MM910 performs the following functions:
Stateless computing
Management functions
Supports keyboard, video, and mouse (KVM) over IP
Supports virtual media
Supports local KVM
Supports serial over LAN (SOL)
Supports unified loading
Configuration restoration
Offline configuration
The CH121 compute node (CH121 for short) adopts the Intel new-generation Romley processor
platform and is half the width of a standard compute node. The CH121 provides large memory
capacity, powerful computing capabilities, and flexible scalability. It is installed in an
E9000 chassis and managed by the MM910 management module in a centralized manner.
The CH121 combines dense computing capabilities with an ultra-large memory capacity and is
optimized for computing-intensive enterprise service applications, virtualization, cloud
computing, and high-performance computing.
The main board contains two next-generation Intel Sandy Bridge-EP or Ivy Bridge-EP CPUs and 24
dual in-line memory modules (DIMMs). The CPUs are connected over the QuickPath Interconnect
(QPI) bus, whose maximum transmission rate reaches 8.0 GT/s. Each CPU connects to the bridge
over the DMI2 bus, whose maximum transmission rate reaches 5 GT/s, and to a mezzanine card
over Peripheral Component Interconnect Express (PCIe); the mezzanine card provides a service
interface.
The Patsburg-A, a next-generation Intel southbridge chip used on server platforms, serves as
the Platform Controller Hub (PCH) and supports external input/output (I/O) bus interfaces and
bus expansion. The PCH connects to an Ethernet controller chip over PCIe, and the Ethernet
controller chip provides a management service interface.
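The quoted transfer rates can be converted to usable bandwidth with some back-of-the-envelope arithmetic. This sketch assumes QPI's standard 16 data bits per transfer and a DMI2 x4 link with 8b/10b encoding, which are typical for this platform but not stated in the original text.

```python
def qpi_gb_per_s(gt_per_s: float) -> float:
    """QPI carries 16 data bits (2 bytes) per transfer per direction,
    so per-direction bandwidth in GB/s is GT/s * 2."""
    return gt_per_s * 2

def dmi2_gb_per_s(gt_per_s: float, lanes: int = 4) -> float:
    """DMI2 uses PCIe 2.0-style 8b/10b encoding: each lane delivers
    gt_per_s * 0.8 Gbit/s of payload; divide by 8 bits per byte."""
    return gt_per_s * lanes * 0.8 / 8

print(qpi_gb_per_s(8.0))   # 16.0 (GB/s per direction)
print(dmi2_gb_per_s(5.0))  # 2.0 (GB/s per direction)
```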
The hard disk interface module consists of a redundant array of independent disks (RAID)
card and a midplane where two hard disk drives (HDDs) are located; it connects to a CPU
over PCIe.
The iMana 200 provides device management functions, such as power-on control for compute
nodes, slot ID acquisition, power supply monitoring, and KVM over IP.
The CH121 V3 compute node (CH121 V3 for short) adopts the Intel® new-generation Grantley
processor platform and is half the width of a standard compute node. The CH121 V3
provides large memory capacity, powerful computing capabilities, and flexible
scalability. It is installed in an E9000 chassis and managed by the MM910 management
module in a centralized manner.
The CH121 V3 combines dense computing capabilities with an ultra-large memory capacity
and is optimized for computing-intensive enterprise service applications, virtualization,
cloud computing, and high-performance computing.
The CH220 IO expansion 2S compute node (CH220 for short) expands Peripheral
Component Interconnect Express (PCIe) resources by supporting graphics processing units
(GPU) and solid-state drive (SSD) cards.
The CH221 IO expansion 2S compute node (CH221 for short) expands Peripheral
Component Interconnect Express (PCIe) resources by supporting standard graphics
processing units (GPU) cards.
The CH220 and CH221 are installed in an E9000 chassis and managed by the MM910
management module in a centralized manner.
They meet requirements for telecommunications, enterprise, and Internet applications and
are particularly suitable for Virtual Desktop Infrastructure (VDI), virtualization,
database, and application performance acceleration scenarios.
In the local HA system, the software architecture of each NetMatrix server is the same as
the software architecture of the single-server system.
The OpenStack plug-in NBI can interconnect with common network orchestration layers to
complete virtual-to-physical conversion at the network layer, expose network
capabilities, and provide the OpenStack Neutron API.
It provides a Layer 2 driver for the OpenStack Neutron ML2 plug-in.
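To suggest the shape of such a Layer 2 driver, the sketch below mirrors the method names of Neutron's ML2 MechanismDriver interface; the base class is stubbed so the snippet runs standalone, and the body is illustrative rather than actual NetMatrix code.

```python
# In a real deployment the base class comes from neutron-lib
# (neutron_lib.plugins.ml2.api.MechanismDriver); stubbed here so the
# sketch is self-contained.
class MechanismDriver:
    def initialize(self): ...
    def create_network_postcommit(self, context): ...

class ControllerL2Driver(MechanismDriver):
    """Illustrative driver that would forward ML2 events to an SDN
    controller's northbound API (here just recorded in a dict)."""

    def initialize(self):
        self.networks = {}  # stand-in for a controller session

    def create_network_postcommit(self, context):
        # In the real ML2 API, context.current is the network dict.
        network = context.current
        self.networks[network["id"]] = network  # stand-in for a REST call

class FakeContext:  # minimal stand-in for an ML2 NetworkContext
    def __init__(self, current):
        self.current = current

driver = ControllerL2Driver()
driver.initialize()
driver.create_network_postcommit(FakeContext({"id": "net-1", "name": "demo"}))
print(sorted(driver.networks))  # ['net-1']
```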
The NetMatrix interworks with peripheral systems through RESTful and Secure File Transfer
Protocol (SFTP) interfaces to provide a wide range of functions, such as visualized
traffic display and service provisioning.
This topic uses the RR+ traffic adjustment solution as an example, where the NetMatrix
and uTraffic interwork through RESTful and SFTP interfaces: the NetMatrix supplies
network resource data to the uTraffic, and the uTraffic supplies network traffic data to
the NetMatrix.
For devices at the NE layer, the NetMatrix provides southbound interfaces (SBIs).
With security management, the NetMatrix can operate and communicate with peripheral
systems securely.
A series of security policies, such as user management, operation authorization
(rights- and domain-based) management, and user login management, ensure the secure
operation of the NetMatrix and the effectiveness of service functions.
The logs generated when users log in to and perform operations on the NetMatrix
and when the NetMatrix is running are managed so that all activities can be traced.
The high availability (HA) solution and database backup function are supported to
supplement the security solution.