2 Solution Design
Figure 2-1 Inter-subnet traffic forwarding between different edge nodes in the
two gateway solutions
When designing the virtualization solution, first determine the gateway solution to
be used. After the gateway solution is determined, you can perform end-to-end
design on the entire campus network based on the selected gateway solution.
Table 2-1 compares the two gateway solutions, and Table 2-2 provides their
recommended networking and application scenarios.
NOTE
The centralized gateway solution supports only one border node, whereas the distributed
gateway solution supports multiple border nodes.
O&M deployment:
● Centralized gateway solution: The border node functions as the gateway of all users, and the native WAC function is typically enabled on the border node to support wireless services.
● Distributed gateway solution: User gateways are deployed on edge nodes, and the native WAC function is typically enabled on the border node to support wireless services.
Table 2-2 Recommended networking planning for the two gateway solutions
Single-stack and dual-stack scenarios:
● Up to 10,000 terminals (up to 10,000 dual-stack terminals): The preferred networking solution is centralized VXLAN gateway + native WAC (border node). The characteristics are the same as those of the preceding centralized VXLAN gateway + native WAC solution.
● More than 10,000 terminals (more than 10,000 dual-stack terminals): The preferred networking solution is distributed VXLAN gateway + standalone WAC (connected to the border node in off-path mode). The characteristics are the same as those of the preceding distributed VXLAN gateway + standalone WAC solution.
Table 2-5 Resource pools on a fabric and resource invoking methods during VN creation
● User access point resource pool: planned during access management configuration for a fabric. This resource pool includes the authentication modes that can be bound to access points. When configuring user access in a VN, you can select planned access point resources.
● Egress pool: contains the external resources that can be used by VNs. Two types of external resources are created during fabric configuration:
– External networks: used for VNs to communicate externally
– Network service resources: used for VNs to communicate with the authentication server and DHCP server
When creating a VN, you can select external networks and network service resources.
functional zones. Modules in each functional zone are clearly defined, and the
internal adjustment of each module is limited to a small scope, facilitating fault
location.
Figure 2-5 Physical network in the virtualization solution for large- and medium-
sized campus networks
In office, education, and hospitality scenarios, central switches and remote units
(RUs) are deployed together to build simplified all-optical campus networks. The
three-layer networking (core, aggregation, and access) for ELV rooms is changed
to the two-layer (core and aggregation) networking. At the access layer, RUs are
deployed to enable access to the desktop. The three-layer networking is also
supported, where access switches function as central switches and RUs are
connected to the access switches.
Figure 2-6 Simplified all-optical networking topology for large and midsize
campus networks
● Terminal layer: The terminal layer involves various terminals that access the campus network, such as PCs, printers, IP phones, mobile phones, and cameras.
● Access layer: The access layer provides various access modes for users and is the first network layer to which terminals connect. The access layer is usually composed of access switches. There are a large number of access switches that are sparsely distributed in different places on the network. In most cases, an access switch is a simple Layer 2 switch. If wireless terminals are present at the terminal layer, wireless access points (APs) need to be deployed at the access layer and access the network through access switches.
On a three-layer simplified all-optical campus network, access switches are deployed as central switches to manage the connected RUs.
● Aggregation layer: The aggregation layer sits between the core and access layers. It forwards horizontal traffic (east-west traffic) between users and forwards vertical traffic (north-south traffic) to the core layer. The aggregation layer can also function as the switching core for a department or zone and connect the department or zone to a dedicated server zone. In addition, the aggregation layer can further extend the quantity of access terminals.
On a two-layer simplified all-optical campus network, aggregation switches are deployed as central switches to manage the connected RUs.
● Core layer: The core layer is the core of data exchange on a campus network. It connects to various components of the campus network, such as the DC/network management zone, aggregation layer, and campus egress. The core layer is responsible for high-speed interconnection of the entire campus network. High-performance core switches need to be deployed to meet network requirements for high bandwidth and fast convergence upon network faults. It is recommended that the core layer be deployed for any campus with more than three departments.
● Egress network: The campus egress is the boundary that connects a campus network to an external network. Internal users of the campus network can access the external network through the campus egress zone, and external users can access the internal network through the campus egress zone. Firewalls need to be deployed in the campus egress zone to provide perimeter security protection.
● DC zone: In the DC zone, service servers such as the file server and email server are managed, and services are provided for internal and external users.
● Network management zone: The network management zone is the server zone where the O&M and management systems are deployed. In the virtualization solution for large- and medium-sized campus networks, the following systems are deployed:
– iMaster NCE-Campus: campus network automation engine. It is used to provision service configurations for network devices; provides open APIs for integration with third-party platforms; and can function as an authentication policy server to deliver authentication, authorization, accounting (AAA) and free mobility services.
– iMaster NCE-CampusInsight: intelligent campus network analytics engine, which provides intelligent O&M services by utilizing Telemetry, big data, and intelligent algorithms.
– DHCP server: dynamically assigns IP addresses to user clients.
During network design, you can use the bottom-up method to determine the
layered architecture depending on the network scale, as illustrated in Figure 2-8.
NOTE
A campus network involving one building usually uses the two-layer
architecture, that is, only the access layer and core layer are required. A large-scale
campus network (such as a university campus network) that involves multiple
buildings usually uses the three-layer architecture consisting of the access,
aggregation, and core layers.
During network design, you can use the bottom-up method to determine the type
of architecture required based on the network scale, as shown in Figure 2-11.
BD Resource Planning
In a VN, a Layer 2 broadcast domain is constructed based on bridge domains
(BDs). In a BD, user terminals in different geographical locations can communicate
with each other. In the virtualization solution for large- and medium-sized campus
networks, BD resource planning guidelines are as follows:
● 1:1 mapping between BDs and user service VLANs is recommended, as shown
in Figure 2-13.
● In a VN, each time a VXLAN user gateway is created, a BD is automatically
invoked from the global BD resource pool of the fabric in sequence. You do
not need to consider how to divide a BD. Instead, you only need to consider
how to assign user service VLANs.
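In practice, the controller provisions this mapping on the fabric switches automatically. As a rough illustration only, the resulting device-side configuration of a 1:1 VLAN-to-BD mapping looks similar to the following sketch (the BD ID, VLAN ID, and VNI value are hypothetical):

```
bridge-domain 10
 l2 binding vlan 10        # 1:1 mapping: user service VLAN 10 is bound to BD 10
 vxlan vni 10010           # VNI assigned from the fabric's global resource pool
```

Because iMaster NCE-Campus invokes BDs from the global pool in sequence, you only plan the service VLANs; the BD and VNI values shown above are chosen by the controller.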
NOTE
In 2.2.7 Access Control Design, if policy association is required between the authentication
control point and authentication enforcement point, you need to plan a management VLAN
for policy association to establish a Control and Provisioning of Wireless Access Points
(CAPWAP) tunnel between the authentication control point and authentication
enforcement point.
● You are advised to configure DHCP snooping in the BD to which a user gateway
belongs, so that user terminals obtain IP addresses from a valid DHCP server
and attacks are prevented. In addition, if DHCP options are used for terminal
identification, DHCP snooping also needs to be configured.
● In dynamic IP address allocation provided by DHCP, the lease period of IP
addresses needs to be planned based on the online duration of user terminals.
In large- and medium-sized campus networks, the online duration of user
terminals in the office area is long, so a long lease period needs to be
planned for the IP addresses of these user terminals.
If a fixed IP address needs to be allocated to a specific user terminal, this IP
address must be excluded from the DHCP address pool during DHCP address
pool planning, so that it is not dynamically allocated to other terminals.
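The preceding DHCP snooping and address pool guidelines can be sketched in device configuration roughly as follows (the pool name, interface numbers, addresses, and lease value are hypothetical examples):

```
dhcp enable
dhcp snooping enable                      # enable DHCP snooping globally
vlan 10
 dhcp snooping enable                     # snooping in the user service VLAN bound to the BD
interface gigabitethernet 0/0/24
 dhcp snooping trusted                    # uplink toward the valid DHCP server
#
ip pool office
 network 192.168.10.0 mask 255.255.255.0
 gateway-list 192.168.10.1
 excluded-ip-address 192.168.10.100       # reserved for a terminal that needs a fixed IP address
 lease day 3                              # long lease for office terminals with long online duration
```

In the virtualization solution these settings are typically delivered through iMaster NCE-Campus rather than configured line by line on each device.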
Figure 2-15 Routing protocol planning in the virtualization solution for large- and
medium-sized campus networks
installed on servers in the equipment room. During the installation, make sure
that the egress gateway can communicate with the campus intranet. This section
describes the basic server networking design for communication between these
software systems and the campus intranet.
Active-backup mode
In this mode, one NIC interface in the bonded interface is in the active state, and
the other is in the backup state. All data is transmitted through the active NIC
interface. In the event of a failure on the link corresponding to the active NIC
interface, data is transmitted through the backup NIC interface. In this case, the
Layer 3 switch functioning as the server gateway connects to the two NIC
interfaces on a server through two physical ports. The physical ports do not need
to be aggregated, and are recommended to be added to the VLAN of the
corresponding network plane in access mode. As shown in Figure 2-17, add
physical ports (GE1/0/1 and GE2/0/1) on the switch to VLAN 100 using the
following commands.
<Switch> system-view
[Switch] vlan batch 100
[Switch] interface gigabitethernet 1/0/1
[Switch-GigabitEthernet1/0/1] port link-type access
[Switch-GigabitEthernet1/0/1] port default vlan 100
[Switch-GigabitEthernet1/0/1] quit
[Switch] interface gigabitethernet 2/0/1
[Switch-GigabitEthernet2/0/1] port link-type access
[Switch-GigabitEthernet2/0/1] port default vlan 100
[Switch-GigabitEthernet2/0/1] quit
If the server NICs are bonded in a link aggregation (load balancing) mode instead,
the two physical ports on the switch need to be aggregated into an Eth-Trunk,
which is then added to VLAN 100 in access mode:
<Switch> system-view
[Switch] vlan batch 100
[Switch] interface eth-trunk 1
[Switch-Eth-Trunk1] trunkport gigabitethernet 1/0/1
[Switch-Eth-Trunk1] trunkport gigabitethernet 2/0/1
[Switch-Eth-Trunk1] port link-type access
[Switch-Eth-Trunk1] port default vlan 100
[Switch-Eth-Trunk1] quit
Communication with the user subnet: iMaster NCE-Campus functions as the NAC
server for user access authentication. The user subnet must be able to
communicate with iMaster NCE-Campus.
When the network management zone adopts the basic networking design, the
topology between the gateway in the network management zone and the core
switch cluster is stable, and only a few network segments are required for
communication, you are advised to configure static routes between the gateway
in the network management zone and the core switch cluster. As illustrated in
Figure 2-19, the planning of static routes is as follows:
● Two VLANIF interfaces are separately planned on the gateway in the network
management zone as well as on the core switch. One (VLANIF 500 in the
figure) is used for communication between the network management zone
and the device management subnet on the underlay network, and the other
(VLANIF 600 in the figure) for communication between the network
management zone and the user subnet on the overlay network.
● For communication between the network management zone and the device
management subnet on the underlay network:
– On the core switch: Configure a static route destined for the network
management zone. The destination network segment is the network
segment where the software systems (for example, iMaster NCE-Campus
and iMaster NCE-CampusInsight in the figure) that need to communicate
with the device management subnet reside. The next hop of the static
route is the IP address of VLANIF 500 on the gateway in the network
management zone.
– On the gateway in the network management zone: Configure a route
destined for the device management subnet on the underlay network.
The destination network segment is the device management network
segment, and the next hop is the IP address of VLANIF 500 on the core
switch.
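Assuming, for illustration, that VLANIF 500 uses 10.10.50.1 on the core switch and 10.10.50.2 on the management-zone gateway, that the software systems reside in 10.10.20.0/24, and that the device management subnet is 10.10.30.0/24 (all addresses hypothetical), the static routes described above would look roughly like this:

```
# On the core switch: reach the NMS server segment via the management-zone gateway
ip route-static 10.10.20.0 255.255.255.0 10.10.50.2
# On the gateway in the network management zone: reach the device management subnet
ip route-static 10.10.30.0 255.255.255.0 10.10.50.1
```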
● For communication between the network management zone and the user
subnet on the overlay network:
– On the core switch: When creating network service resources for a fabric,
configure the IP addresses of the connected network service resources as
well as the VLANs and IP addresses for interconnecting with the gateway
in the network management zone on the core switch that functions as
the border node. After the configuration is complete, the core switch
imports routes between the virtual routing and forwarding (VRF) instance
that represents the network service resource and the VRF instance that
represents a VN. In addition, the core switch creates a private static route
destined for the network management zone in the VRF instance that
represents the network service resource. The destination network
segment of this static route is the network segment where the software
system that needs to communicate with the user subnet resides, such as
iMaster NCE-Campus or the DHCP server in the figure.
Figure 2-19 Planning for communication between the network management zone
and the campus intranet
Network management zone: the switch (gateway in the network management
zone) is configured through the local CLI or web system. Generally, you need to
configure the switch before installing software systems in the network
management zone.
Figure 2-20 Using default VLAN 1 for plug-and-play deployment of devices below
the core layer
If VLAN 1 is used as the management VLAN, broadcast storms may occur. To avoid
this, you can enable management VLAN auto-negotiation to configure another
VLAN as the management VLAN. In addition, wired and wireless devices can share
an auto-negotiated management VLAN or use separate auto-negotiated
management VLANs.
a. The core switch goes online on iMaster NCE-Campus through the CLI. If a
standalone WAC is used, it also goes online on iMaster NCE-Campus
through the CLI.
b. On iMaster NCE-Campus, configure VLANIF 100 on the core switch as the
gateway interface of the management subnet, configure a DHCP address
pool, and configure DHCP Option 148 to carry the southbound IP address
of iMaster NCE-Campus and DHCP Option 43 to carry the WAC address.
c. Configure the core switch as the root device and use the management
VLAN auto-negotiation function to enable management VLAN
communication for devices below the core layer. The process is as follows:
i. On iMaster NCE-Campus, enable the management VLAN auto-
negotiation function on the core switch and configure VLAN 100 as
the auto-negotiated management VLAN.
ii. After the core switch is configured, aggregation switches
automatically add their interfaces to VLAN 100 through protocol
packet auto-negotiation.
iii. After the management channels between the core and aggregation
switches are established, access switches automatically add their
interfaces to VLAN 100 through protocol packet auto-negotiation. In
addition, access switches' interfaces connected to APs change the
PVID to VLAN 100 through auto-negotiation.
d. The aggregation and access switches obtain the southbound address of
iMaster NCE-Campus through VLAN 100 and go online on iMaster NCE-
Campus.
e. APs obtain the WAC address through DHCP Option 43. After APs are
associated with the WAC and the CAPWAP source interface is configured,
APs successfully join the WAC.
Figure 2-21 Wired and wireless devices using the same auto-negotiated
management VLAN for plug-and-play deployment
Management VLAN Switching Design After Devices Below the Core Layer Go
Online
Sometimes, there are a large number of network devices on a campus network.
After these devices go online in plug-and-play mode for the first time, broadcast
storms may still occur even if an auto-negotiated management VLAN is planned
separately for wired and wireless devices. In this case, you are advised to plan
multiple management VLANs. After devices go online in plug-and-play mode for
the first time, switch the management VLAN to isolate the broadcast domains of
these devices.
You are advised to plan device groups based on network layers, with each device
group assigned one management VLAN. For example, each aggregation switch
and its connected downstream devices are grouped into one device group and use
the same management VLAN, as illustrated in Figure 2-23.
Note: Before switching the management VLAN, add the interconnection interfaces
on the core switch and devices below the core layer to the new management
VLAN. In this way, devices below the core layer will not fail to go online due to
communication failures with the core switch on the new management VLAN.
Figure 2-24 iMaster NCE-Campus-based deployment process for switches and APs
When there are fewer than 100 switches in a network area where routes need to
be deployed on the underlay network, single-area orchestration is recommended.
● All switches between the border and edge nodes on the fabric support
automatic orchestration of OSPF routes. These devices refer to all aggregation
and core switches if VXLAN is deployed across the core and aggregation
layers, and refer to all core, aggregation, and access switches if VXLAN is
deployed across the core and access layers.
● All switches between the border and edge nodes on the fabric are planned in
area 0.
● Different VLANIF interfaces are planned on all switches for interconnection
through OSPF. The interconnected Layer 2 interfaces allow packets from the
corresponding VLANs to pass through.
● When configuring a fabric, you need to create loopback interfaces on the
switches that function as border and edge nodes for establishing BGP EVPN
peer relationships. Routes on the network segments where the loopback
interface IP addresses reside are also advertised to area 0.
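The single-area plan above corresponds roughly to the following per-switch configuration; the controller orchestrates this automatically, and the VLANIF numbers, addresses, and router ID below are hypothetical:

```
interface Vlanif 20
 ip address 10.1.2.1 255.255.255.252     # interconnection VLANIF toward a neighbor switch
interface LoopBack 0
 ip address 10.255.0.1 255.255.255.255   # used for BGP EVPN peer relationships
ospf 1 router-id 10.255.0.1
 area 0.0.0.0
  network 10.1.2.0 0.0.0.3               # interconnection segment in area 0
  network 10.255.0.1 0.0.0.0             # loopback segment advertised to area 0
```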
When there are more than 100 switches in a network area where routes need to
be deployed on the underlay network, multi-area orchestration is recommended.
● All switches between the border and edge nodes on the fabric support
automatic orchestration of OSPF routes. These devices refer to all aggregation
and core switches if VXLAN is deployed across the core and aggregation
layers, and refer to all core, aggregation, and access switches if VXLAN is
deployed across the core and access layers.
● The core switch is planned in area 0. Each downlink VLANIF interface on the
core switch, as well as the aggregation and access switches connected to
these VLANIF interfaces are planned in the same area.
● Different VLANIF interfaces are planned on all switches for interconnection
through OSPF. The interconnected Layer 2 interfaces are added to the
corresponding VLANs in trunk mode.
● On the core switch that functions as a border node, routes on the network
segment where its loopback interface IP address resides are advertised to area
0. On an edge node, routes on the network segment where its loopback
interface IP address resides are advertised to the area to which the edge node
belongs.
● If a Layer 2 switch is required for interconnection between the border and
edge nodes and performs transparent transmission between them, this Layer
2 switch cannot be the aggregation switch. (When adding a switch to a site
on iMaster NCE-Campus, you can set the switch role.) You can select Core or
Regional aggregation as the switch role. After the automatic OSPF route
orchestration function is enabled, interfaces connecting this Layer 2 switch to
the border and edge nodes allow packets from the corresponding VLAN to
pass through.
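For the multi-area plan, the difference is only in area assignment: the border node keeps its loopback in area 0 while each downlink and its attached aggregation/access switches share a non-backbone area. A hedged sketch with hypothetical addresses and area numbers:

```
# Border node (core switch)
ospf 1
 area 0.0.0.0
  network 10.255.0.1 0.0.0.0             # border loopback advertised to area 0
 area 0.0.0.1
  network 10.1.2.0 0.0.0.3               # downlink toward one aggregation branch
# Edge node in area 1
ospf 1
 area 0.0.0.1
  network 10.1.2.0 0.0.0.3               # interconnection segment
  network 10.255.0.11 0.0.0.0            # edge loopback advertised to its own area
```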
Design Guidelines
In the virtualization solution for large- and medium-sized campus networks, to
reduce the impact of topology changes on the entire network, you are advised to:
● Select a device with higher reliability as the root bridge.
● Divide the entire underlay network into multiple loop detection domains.
Figure 2-26 shows the underlay network in a virtualization scenario on a large or
midsize campus. The loop prevention design can be implemented as follows:
● When a loop exists between core and aggregation switches, do not disable the loop
prevention function between them. You can only configure the core switch as a root
bridge to improve root bridge robustness.
● Currently, the controller allows you to increase the priority of core or aggregation
switches so that they can be preferentially selected as root bridges.
● To perform VLAN-based loop prevention design for inter-VLAN load balancing, see the
MSTP or VBST design in the switch product documentation.
Table 2-12 VLAN/BD resource plan for the fabric global resource pool
Service VLAN:
● User terminals access the campus network through service VLANs, which are bound to BDs.
● You are advised to assign service VLANs based on logical areas, organizational structures, and service types of campus networks.
Table 2-13 IP address plan for the fabric global resource pool
Figure 2-29 Traffic models for L3 shared egress and L3 exclusive egress on a fabric
● Routes from the campus intranet to external networks on the border node:
Generally, default routes are used to prevent a huge number of external
network routes from affecting intranets.
● Routes from external networks to the campus intranet on the firewall:
Generally, specific routes are used.
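As a rough illustration of this principle, with a hypothetical VN VRF named VN1, a border-firewall interconnection subnet 10.2.0.0/30, and a user subnet 192.168.10.0/24, the static-route variant would look similar to:

```
# Border node (in the per-VN VRF): default route toward the firewall
ip route-static vpn-instance VN1 0.0.0.0 0.0.0.0 10.2.0.2
# Firewall: specific return route toward the user subnet in the VN
ip route-static 192.168.10.0 255.255.255.0 10.2.0.1
```

On the border node these routes are delivered by iMaster NCE-Campus when external network resources are created; on the firewall they are configured through its web system or CLI.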
Figure 2-30 Route planning between the border node and firewall
When creating external network resources on the border node, you can use any of
the following routing protocols to interconnect the border node with the firewall.
According to the route design principles described above, Table 2-14 lists the
recommended configurations for the three routing protocols.
Table 2-14 Configurations of different routing protocols between the border node
and firewall
(Table columns: Routing Protocol; Default Routes from VNs to External Networks on the Border Node; Return Routes from External Networks to VNs on the Firewall; Interconnection Between the Border Node and Firewall.)
When selecting a routing protocol between the firewall and border node, you
need to consider how to switch the service traffic path in active/standby
switchover scenarios when firewalls are deployed in HSB mode. For details, see the
egress route design in 2.2.6 Egress Network Design.
NOTE
You can configure routes on the border node when creating external network resources on
iMaster NCE-Campus, and configure routes on the firewall by logging in to the web system
or CLI.
Figure 2-31 Design model of network service resources on a fabric (border node
directly connected to servers)
As shown in Figure 2-31, for each network service resource created on the border
node, a VRF instance is allocated. After a network service resource is selected
during VN creation, the VRF instances of the created VN and network service
resource import routes from each other. In this way, service subnets in the VN can
communicate with the network service resource. Static routes are configured on
the border node based on the addresses for accessing these network service
resources.
In this scenario, the border node is directly connected to network service resources,
and the physical interfaces that connect the border node to the resources are
added to VLANs in access mode.
Figure 2-32 Design model of network service resources on a fabric (border node
directly connected to a switch)
As shown in Figure 2-32, for each network service resource created on the border
node, a VRF instance is allocated. After a network service resource is selected
during VN creation, the VRF instances of the created VN and network service
resource import routes from each other. In this way, service subnets in the VN can
communicate with the network service resource. Static routes are configured on
the border node based on the addresses for accessing these network service
resources.
This design model is typically used if a DHCP server, iMaster NCE-Campus, and
other network service resources are deployed in the network management zone.
The border node is directly connected to the gateway switch in the network
management zone and then communicates with network service resources
through the switch. If only a small number of network service resources are
deployed, it is recommended that these resources be planned in the same network
service resource model. This saves interconnection VLAN and IP address resources
and simplifies route configuration for the network management zone, as shown in
Figure 2-33.
Figure 2-33 Traffic model for communication between the VN and network
service resource
NOTE
Routes on the border node are automatically delivered when network service resources are
created on iMaster NCE-Campus. To configure routes on the gateway in the network
management zone, log in to the web system or CLI of the device.
This design model is selected only when a DHCP server is deployed on an external
network, as shown in Figure 2-34. In this scenario, the VN and DHCP server
communicate with each other based on an external network design model of the
fabric. This network service resource model is mainly used for obtaining the DHCP
server address. When this model is used, the gateway of the VN subnet can
function as the DHCP relay agent and automatically configure the DHCP server
address after the gateway is created.
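The DHCP relay behavior described above corresponds roughly to the following configuration on the VN subnet gateway (the VBDIF number, gateway address, and DHCP server address are hypothetical; in this solution the configuration is generated automatically after the gateway is created):

```
interface Vbdif 10                        # gateway interface of the VN subnet
 ip address 192.168.10.1 255.255.255.0
 dhcp select relay                        # gateway acts as the DHCP relay agent
 dhcp relay server-ip 172.16.1.10         # external DHCP server address
```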
NOTE
The connection types "fabric extended AP" and "fabric extended switch" are
mainly used for configuring a management VLAN for policy association and
forwarding data between the authentication control point and authentication
enforcement point. In this scenario, the fabric extended switch functions as the
authentication enforcement point and can be connected to fabric extended APs
and terminals.
In policy association, the authentication control point is moved up to the
aggregation or core layer. Devices at the aggregation or core layer and those at
the access layer can complete policy association through Control and Provisioning
of Wireless Access Points (CAPWAP) tunnels. In this way, the number of
authentication control points is reduced, and access control of terminals can be
implemented at the access layer.
Policy association is designed based on the traditional "WAC + Fit AP" architecture
for access control. In this architecture, WACs function as authentication control
points and APs as authentication enforcement points. User authentication
information is synchronized between WACs and APs through CAPWAP tunnels.
Therefore, policy association applies to scenarios where aggregation or core
devices function as unified authentication control points for wired and wireless
users.
In the centralized gateway solution, wired and wireless authentication control
points are deployed separately. Therefore, pay attention to the following points
when configuring fabric access management:
● If VXLAN is deployed across core and access layers for the fabric network,
policy association is not deployed.
● If VXLAN is deployed across core and aggregation layers for the fabric
network, policy association can be deployed between edge nodes and access
switches for wired access authentication, and the authentication enforcement
point for wired access can be moved down to the access switches.
NOTE
In the centralized gateway scenario, it is recommended that the border node with the
native WAC function be deployed, or standalone WACs be connected to the border
node in off-path mode. In this scenario, do not select Extended AP for interfaces
connecting access switches to APs. If this connection type is selected, the APs cannot
communicate with the border node through management VLAN auto-negotiation. You
don't need to configure the connection type for the interfaces connecting access
switches to APs.
RU Access Design
In the simplified all-optical campus solution, central switches and RUs are
launched as combinations. The following figure shows the networking and
connection types of fabric access interfaces.
RUs do not support VLAN configuration or policy association and are used only as
the remote ports of a central switch. In addition, RUs (without management IP
addresses) are managed by the central switch in a unified manner and are not
displayed as independent NEs on iMaster NCE-Campus. Therefore, during fabric
access network design, you only need to configure port isolation for RUs through
the central switch on iMaster NCE-Campus when deploying policy control (see
Policy Control Solution Design).
An RU provides multiple extension interfaces that can connect to terminals or APs.
During access authentication configuration, it is recommended that authentication
be configured on the interfaces connecting to terminals and non-authentication
be configured on the interfaces connecting to APs. In this case, the authentication
policies for the two access types are different. Therefore, it is not recommended
that an RU be connected to both APs and terminals at the same time. If an RU is
connected to both terminals and APs, terminal authentication needs to be enabled
on the corresponding interface on the central switch, and APs need to be
authenticated on the central switch to access the network.
NOTE
1. Port isolation for RUs cannot be configured based on unified fabric orchestration and
needs to be configured site by site.
2. RUs must be deployed together with and directly connected to a central switch.
2.2.4.4 VN Design
Table 2-15 Comparison between the static VLAN mode and dynamically authorized VLAN mode
Static VLAN mode:
● Implementation:
– Wired access: Configure a static VLAN on the switch interface connected to wired user terminals.
– Wireless access: Configure a static service VLAN for an SSID.
● Application scenario: The static VLAN mode applies when terminals access the VLAN at fixed locations and do not need to be authenticated. This access mode is more secure but lacks flexibility. When the locations of terminals change, you need to perform the configuration again.
● Manual configuration: Manually configure the user access VLAN and the IP
address of the gateway interface. This mode applies to scenarios where a few
subnets are deployed and automatic gateway configuration is not required.
Figure 2-38 Traffic model for communication between users on the same subnet
in a VN
● Users on the same subnet connected to the same edge node can directly
communicate with each other through the edge node.
a. Host 1 and Host 2 are on the same subnet. When Host 1 accesses Host 2,
the destination MAC address of the packet sent by Host 1 to Host 2 is the
MAC address of Host 2.
b. After the packet arrives at Edge 1, Edge 1 searches for the MAC address
entry of Host 2. The entry belongs to VLAN 10 and is learned from
GE0/0/2. Edge 1 then forwards the packet.
c. Host 2 receives the packet from Host 1 through GE0/0/2.
● Users on the same subnet connected to different edge nodes
communicate with each other through the VXLAN tunnel between the edge
nodes.
a. Host 1 and Host 2 are on the same subnet. When Host 1 accesses Host 2,
the destination MAC address of the packet sent by Host 1 to Host 2 is the
MAC address of Host 2.
b. After the packet arrives at Edge 1, Edge 1 searches for the MAC address
entry of Host 2. The entry belongs to BD 10 and is learned from the
tunnel source interface (displayed as the IP address) of Edge 2. Edge 1
then encapsulates the packet into a VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
Edge 2, respectively. Then the packet is forwarded based on the underlay
route.
d. After the packet arrives at Edge 2, Edge 2 performs VXLAN decapsulation,
searches for the MAC address entry of Host 2, determines the outbound
interface GE0/0/1, and forwards the packet.
e. Host 2 receives the packet from Host 1 through GE0/0/1.
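The encapsulation step in the walkthrough above can be sketched as follows. The 8-byte header layout follows the VXLAN specification (RFC 7348); the VNI value and the mapping of BD 10 to VNI 10 are illustrative assumptions, not values from this design.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): the 0x08 flag byte
    marks the VNI field as valid; the VNI occupies 24 bits, followed
    by a reserved byte."""
    return struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to the original Ethernet frame. The
    outer UDP/IP headers (source = Edge 1's tunnel interface,
    destination = Edge 2's tunnel interface) would be added by the
    sending VTEP before the packet is routed over the underlay."""
    return vxlan_header(vni) + inner_frame

hdr = vxlan_header(10)                         # BD 10 -> VNI 10 (assumption)
assert hdr[0] == 0x08                          # "I" (VNI valid) flag set
assert int.from_bytes(hdr[4:7], "big") == 10   # 24-bit VNI field
```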
Figure 2-39 Traffic model for communication between users on different subnets
in a VN
● Users on different subnets connected to different edge nodes communicate
with each other through the VXLAN tunnels between the edge nodes and border
node. Mutual access traffic is sent to the border node first, then forwarded at
Layer 3 based on direct routes in the VN.
a. Host 1 and Host 2 are on different subnets. When Host 1 accesses Host 2,
the packet is sent to the gateway first. The destination MAC address of
the packet is the MAC address of VBDIF 10 on the gateway.
b. After the packet arrives at Edge 1, Edge 1 searches for the MAC address
entry of VBDIF 10. The entry belongs to BD 10 and is learned from the
tunnel source interface (displayed as the IP address) of the border node.
Edge 1 then encapsulates the packet into a VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
the border node, respectively. Then the packet is forwarded based on the
underlay route.
d. After the packet arrives at the border node, the border node performs
VXLAN decapsulation and searches for the direct route to Host 2 in the
VN 1 routing table. The next hop is the IP address of the tunnel source
interface of Edge 2. The border node then encapsulates the packet into a
VXLAN packet. The inner destination MAC address of the packet is the
MAC address of Host 2.
e. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of the border
node and Edge 2, respectively. Then the packet is forwarded based on the
underlay route.
f. After the packet arrives at Edge 2, Edge 2 performs VXLAN decapsulation
and searches for the MAC address entry of Host 2. The entry belongs to
VLAN 20 and is learned from GE0/0/2. Edge 2 then forwards the packet.
g. Host 2 receives the packet from Host 1 through GE0/0/2.
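The border-node lookup in steps b through d amounts to a longest-prefix match in the per-VN routing table, where each subnet route points at the tunnel source interface of the edge node behind which the host sits. The prefixes, addresses, and edge-node mappings below are illustrative assumptions only.

```python
import ipaddress

# Hypothetical per-VN (VRF) routing table on the border node; the
# next hop of each subnet route is the tunnel source IP of an edge node.
vn1_routes = {
    ipaddress.ip_network("10.1.10.0/24"): "192.168.0.1",  # via Edge 1
    ipaddress.ip_network("10.1.20.0/24"): "192.168.0.2",  # via Edge 2
}

def lookup(vrf: dict, dst: str) -> str:
    """Longest-prefix match in one VN's routing table."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in vrf if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return vrf[best]

# Host 1 (10.1.10.5) sends to Host 2 (10.1.20.8): the border node
# resolves the next hop to Edge 2's tunnel source interface and
# re-encapsulates the packet into a new VXLAN packet toward it.
assert lookup(vn1_routes, "10.1.20.8") == "192.168.0.2"
```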
Figure 2-40 Traffic model for subnet communication between VNs through a
border node
Figure 2-41 Traffic model for subnet communication between VNs through a
firewall
of the packet is the MAC address of VLANIF 12, and the packet is not
encapsulated into a VXLAN packet.
f. After the packet arrives at the border node, the border node searches for
the direct route to Host 2 in the VN 2 routing table. The next hop of the
packet is the IP address of the tunnel source interface of Edge 1. The
border node then encapsulates the packet into a VXLAN packet. The
inner destination MAC address of the packet is the MAC address of Host
2.
g. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of the border
node and Edge 1, respectively. Then the packet is forwarded based on the
underlay route.
h. After the packet arrives at Edge 1, Edge 1 performs VXLAN decapsulation
and searches for the MAC address entry of Host 2. The entry belongs to
VLAN 20 and is learned from GE0/0/2. Edge 1 then forwards the packet.
i. Host 2 receives the packet from Host 1 through GE0/0/2.
● Users on subnets of different VNs connected to different edge nodes
communicate with each other through the VXLAN tunnels between the edge
nodes and border node. Mutual access traffic is sent to the border node first,
then forwarded to the firewall based on the imported routes of external
networks. The firewall then forwards the traffic between VNs based on
mutual access control policies between security zones.
a. Host 1 and Host 2 are on different subnets. When Host 1 accesses Host 2,
the packet is sent to the gateway first. The destination MAC address of
the packet is the MAC address of VBDIF 10 on the gateway.
b. After the packet arrives at Edge 1, Edge 1 searches for the MAC address
entry of VBDIF 10. The entry belongs to BD 10 and is learned from the
tunnel source interface (displayed as the IP address) of the border node.
Edge 1 then encapsulates the packet into a VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
the border node, respectively. Then the packet is forwarded based on the
underlay route.
d. After the packet arrives at the border node, the border node performs
VXLAN decapsulation and searches for the route to the network segment
of Host 2 in the VN 1 routing table. Because the VPN routing tables of
VN 1 and the external network resource model VN1-Outer import routes
from each other, the route to the network segment of Host 2 can be
found in the VN 1 routing table. The next hop of the packet is the IP
address of GE1/0/1.1 on the firewall. The destination MAC address of the
packet is the MAC address of GE1/0/1.1, and the packet is not
encapsulated into a VXLAN packet.
e. After the packet arrives at the firewall, the firewall allows VN 1 to access
VN 2 based on the mutual access policies and searches for the route to
the network segment of Host 2. The next hop of the packet is the IP
address of VLANIF 12 on the border node. The destination MAC address
of the packet is the MAC address of VLANIF 12, and the packet is not
encapsulated into a VXLAN packet.
f. After the packet arrives at the border node, the border node searches for
the direct route to the network segment of Host 2 in the VN 2 routing
table. The next hop is the IP address of the tunnel source interface of
Edge 2. The border node then encapsulates the packet into a VXLAN
packet. The inner destination MAC address of the packet is the MAC
address of Host 2.
g. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of the border
node and Edge 2, respectively. Then the packet is forwarded based on the
underlay route.
h. After the packet arrives at Edge 2, Edge 2 performs VXLAN decapsulation
and searches for the MAC address entry of Host 2. The entry belongs to
VLAN 20 and is learned from GE0/0/1. Edge 2 then forwards the packet.
i. Host 2 receives the packet from Host 1 through GE0/0/1.
Figure 2-42 Traffic model for communication between VNs and external networks
● Users in a VN access the Internet through the VXLAN tunnel between the
edge node and border node. Traffic is sent to the border node first, then
forwarded to the firewall based on the imported routes of external networks.
The firewall then forwards the packet to the Internet.
a. Host 1 and the Internet are on different subnets. When Host 1 accesses
the Internet, the packet is sent to the gateway first. The destination MAC
address of the packet is the MAC address of VBDIF 10 on the gateway.
b. After the packet arrives at Edge 1, Edge 1 searches for the MAC address
entry of VBDIF 10. The entry belongs to BD 10 and is learned from the
tunnel source interface (displayed as the IP address) of the border node.
Edge 1 then encapsulates the packet into a VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
the border node, respectively. Then the packet is forwarded based on the
underlay route.
d. After the packet arrives at the border node, the border node performs
VXLAN decapsulation and searches for the route to the Internet in the VN
1 routing table. Because the VPN routing tables of VN 1 and the external
network resource model VN1-Outer import routes from each other, the
route to the Internet can be found in the VN 1 routing table. The next
hop of the packet is the IP address of GE1/0/1.1 on the firewall. The
destination MAC address of the packet is the MAC address of GE1/0/1.1,
and the packet is not encapsulated into a VXLAN packet.
e. After the packet arrives at the firewall, the firewall allows VN 1 to access
the Internet based on the mutual access policies and searches for the
route to the Internet. The firewall then forwards the packet.
● Users in a VN access network service resources through the VXLAN tunnel
between the edge node and border node. Traffic is sent to the border node
first, then forwarded to the gateway in the network management zone based
on the imported routes of the network management zone. The gateway in
the network management zone then forwards the packet to the network
management zone.
a. Host 1 and the network service resource are on different subnets. When
Host 1 accesses the network service resource, the packet is sent to the
gateway first. The destination MAC address of the packet is the MAC
address of VBDIF 10 on the gateway.
b. After the packet arrives at Edge 1, Edge 1 searches for the MAC address
entry of VBDIF 10. The entry belongs to BD 10 and is learned from the
tunnel source interface (displayed as the IP address) of the border node.
Edge 1 then encapsulates the packet into a VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
the border node, respectively. Then the packet is forwarded based on the
underlay route.
d. After the packet arrives at the border node, the border node performs
VXLAN decapsulation and searches for the route to the network service
resource in the VN 1 routing table. Because the VPN routing tables of VN
1 and the network service resource model VN1-Server import routes from
each other, the route to the network service resource can be found in the
VN 1 routing table. The next hop of the packet is the IP address of
Table 2-17 describes the design of Layer 2 multicast services in the BDs
corresponding to user subnets based on the locations of multicast sources.
NOTE
For switches running V600R022C00 or later versions, Layer 2 multicast cannot be deployed
using dynamically authorized VLANs.
Table 2-17 Layer 2 multicast service design in BDs corresponding to user subnets
Multicast Source Location: Outside the fabric

Layer 2 Multicast Service Design:
● On the border node, add the interface connected to the multicast source to
the subnet VLAN where the multicast receivers reside, and associate the VLAN
with the corresponding BD. You can specify the interface for the border node to
connect to devices outside the fabric when configuring IGMP snooping in the BD
on iMaster NCE-Campus.
● In this scenario, IGMP is typically enabled on the Layer 3 multicast device
connected to the border node. Therefore, you do not need to specify an IGMP
snooping querier in the BD.
● To prevent unknown multicast traffic from being broadcast in the BD
corresponding to the user subnet, which wastes bandwidth, it is recommended
that the function of dropping unknown multicast packets be enabled in the IGMP
snooping profile.
● Wireless multicast traffic optimization: When an AP sends multicast packets
to a multicast receiver, multicast-to-unicast conversion can be used to improve
the transmission efficiency of multicast data flows. In addition, adaptive
multicast-to-unicast conversion can be enabled to automatically adjust air
interface performance and improve wireless multicast experience.

Description: When the border node is connected to a Layer 3 multicast device
that is not enabled with IGMP, you need to configure the interface connecting
the border node to the Layer 3 multicast device as a static router port to
ensure that IGMP Report/Leave messages can be forwarded to the upstream Layer 3
multicast device. In addition, you need to specify the border node as an IGMP
snooping querier.
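The snooping behavior described above, including the static router port and the recommended dropping of unknown multicast, can be sketched as a simple forwarding decision per BD. Port names and group addresses are illustrative, and this is a conceptual model, not a device implementation.

```python
# Minimal IGMP snooping forwarding sketch for one BD.
class SnoopingBD:
    def __init__(self, router_ports):
        self.router_ports = set(router_ports)  # static router ports (uplink)
        self.members = {}                      # group -> set of member ports
        self.drop_unknown = True               # recommended setting above

    def report(self, group, port):
        """An IGMP Report learned on a port adds it to the group."""
        self.members.setdefault(group, set()).add(port)

    def forward_ports(self, group):
        """Multicast goes to member ports plus router ports; traffic to
        unknown groups is dropped instead of broadcast in the BD."""
        if group not in self.members:
            return set() if self.drop_unknown else {"<all BD ports>"}
        return self.members[group] | self.router_ports

bd = SnoopingBD(router_ports={"GE1/0/1"})     # uplink to the L3 device
bd.report("239.1.1.1", "GE0/0/2")             # receiver joins a group
assert bd.forward_ports("239.1.1.1") == {"GE0/0/2", "GE1/0/1"}
assert bd.forward_ports("239.9.9.9") == set()  # unknown group: dropped
```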
the number of STAs connected to APs. If WLAN planning design is not performed
in the early stage, rework may be required after APs are installed. This is because
network optimization after APs are installed may require AP reinstallation and re-
cabling.
NOTE
Wi-Fi 6 APs need to be powered by PoE++ switches. Therefore, select appropriate access
switches for power supply based on AP models.
Control packets between the WAC and APs are forwarded through a CAPWAP
tunnel. APs forward service packets of wireless users to the wired side in tunnel
forwarding (centralized forwarding) or direct forwarding (local forwarding) mode.
Tunnel Forwarding
In tunnel forwarding mode, an AP encapsulates the service packets of wireless
users over a CAPWAP tunnel and sends them to the WAC. The WAC then forwards
these packets to other networks. Figure 2-46 shows the traffic forwarding model
adopted when the tunnel forwarding mode is used in this solution.
In tunnel forwarding mode, switches on the links between the WAC and APs do
not need to allow service VLANs, and interfaces on the switches do not need to be
added to such VLANs. This facilitates centralized control and management.
However, the disadvantage is that the service traffic of all wireless users is
centrally forwarded by the WAC, which imposes a heavy workload on the WAC.
Figure 2-46 Service traffic model of wireless users (tunnel forwarding mode)
Direct Forwarding
In direct forwarding mode, an AP directly forwards users' service packets to other
networks without encapsulating them over a CAPWAP tunnel. Figure 2-47 shows
the traffic forwarding model adopted when the direct forwarding mode is used in
this solution.
In direct forwarding mode, the east-west service traffic of local wireless users can
be directly forwarded by the local access switch without passing through the WAC.
However, switches on the links between the WAC and APs need to allow service
VLANs, and interfaces on the switches need to be added to such VLANs, making it
difficult to perform centralized control and management.
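The VLAN-planning difference between the two modes can be stated as a quick check: in direct forwarding mode, every trunk on the AP-to-WAC path must carry all wireless service VLANs, whereas in tunnel forwarding mode only the VLAN carrying CAPWAP is needed. The VLAN IDs and the assumption that CAPWAP rides the management VLAN are illustrative.

```python
def vlans_required(mode: str, mgmt_vlan: int, service_vlans: set) -> set:
    """Return the VLANs that switches between the APs and the WAC must
    allow, for a given forwarding mode."""
    if mode == "tunnel":
        return {mgmt_vlan}                  # only CAPWAP traffic transits
    return {mgmt_vlan} | service_vlans      # direct: trunk every service VLAN

# Tunnel forwarding keeps intermediate trunk configuration minimal:
assert vlans_required("tunnel", 100, {10, 20, 30}) == {100}
# Direct forwarding needs every service VLAN allowed along the path:
assert vlans_required("direct", 100, {10, 20, 30}) == {100, 10, 20, 30}
```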
Figure 2-47 Service traffic model for wireless users (direct forwarding mode)
Table 2-18 compares the tunnel forwarding mode with the direct forwarding
mode. In the virtualization solution for a large or midsize campus network, the
tunnel forwarding mode that can provide centralized traffic management and
control is recommended, irrespective of which gateway solution is selected. The
subsequent WLAN planning following this section is also designed based on the
tunnel forwarding mode.
Forwarding Mode: Tunnel forwarding
Description: Wireless user service traffic is processed and forwarded by the
WAC in a centralized manner.
Advantage: The WAC forwards service traffic in a centralized manner, ensuring
high security and facilitating centralized traffic management and control.
Disadvantage: Service traffic must be forwarded by the WAC, reducing packet
forwarding efficiency and burdening the WAC.
Configuration page on iMaster NCE-Campus. In the WAC list, select the row
where the WAC resides, and click Add in the lower right corner to add APs for
management by the WAC.
SSID Planning
In most cases, service set identifiers (SSIDs) are planned based on user roles or
service types. For example, three SSIDs can be planned for three types of wireless
services in a large-scale business scenario, as shown in Figure 2-48. Employee is
used for wireless office access of employees. Guest is used for Internet access of
guests. Dumb is used for wireless access of dumb terminals such as printers. For
an SSID that is not intended for end users, for example, the SSID used for access
of printers, you can configure SSID hiding to prevent the SSID from being detected
by end users.
On a large or midsize campus network, a large number of STAs exist and require
area-specific policies. Typically, the SSID:VLAN = 1:N mapping policy is used.
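A 1:N mapping means one SSID is reused everywhere while each area (for example, an AP group per building) gets its own service VLAN, keeping broadcast domains small. The SSID names, AP group names, and VLAN IDs below are illustrative assumptions.

```python
# One SSID, many service VLANs: the VLAN a STA lands in depends on the
# AP group (area) it associates through.
ssid_vlan_map = {
    "Employee": {"Building-A": 101, "Building-B": 102, "Building-C": 103},
    "Guest":    {"Building-A": 201, "Building-B": 202, "Building-C": 203},
}

def service_vlan(ssid: str, ap_group: str) -> int:
    """Resolve the service VLAN for a STA on a given SSID and area."""
    return ssid_vlan_map[ssid][ap_group]

# A STA joining "Employee" in Building B lands in VLAN 102.
assert service_vlan("Employee", "Building-B") == 102
```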
On a WLAN using the "WAC + Fit AP" architecture, the WAC serves as the wireless
authentication control point. In this solution, the deployment process of the
wireless authentication control point varies according to the WAC type.
NOTE
Security Policy | Characteristics
WPA/WPA2: WPA and WPA2 provide almost the same security. WPA/WPA2 has two
editions: enterprise edition and personal edition.
● WPA/WPA2-Enterprise: uses a RADIUS server and the Extensible Authentication
Protocol (EAP) to provide IEEE 802.1X network access control. Users provide
authentication information, including the user name and password, and are
authenticated by an authentication server (generally a RADIUS server). This
edition applies to scenarios that have high requirements on network security.
● WPA/WPA2-Personal: adopts a simpler mechanism, that is, WPA/WPA2 pre-shared
key (WPA/WPA2-PSK) mode. This edition does not require an authentication server
and applies to scenarios that have low requirements on network security.
NOTE
1. Intelligent radio calibration and traditional radio calibration cannot be deployed for the
APs in the same calibration region at the same time.
2. Intelligent radio calibration needs to be used together with iMaster NCE-CampusInsight.
Ensure that APs can communicate with iMaster NCE-CampusInsight.
3. AP load prediction is applicable to scenarios where service traffic is relatively stable and
historical data is regular, such as the office automation (OA) scenario. When there is a
sudden increase or decrease in service traffic, for example, when network expansion or
large-scale personnel relocation occurs (such as in stadiums), AP load prediction cannot be
implemented.
4. Channel bandwidth can be increased for high-load APs only on the 5 GHz frequency
band. In addition, if the number of available channels is less than six (for example,
in some countries only a few 5 GHz channels are available; if dual-5G is enabled,
channel bandwidths of 40 MHz or higher cannot be completely staggered), the channel
bandwidth cannot be increased on high-load APs.
Table 2-22 Comparison between traditional radio calibration and intelligent radio
calibration
Calibration Mode | Application Scenario | Advantage | Disadvantage
uses the load balancing algorithm to measure the dual-band capability of the
STA, AP load, and AP signal quality, and steers the STA to a better AP.
● Dynamic load balancing: After a STA connects to an AP, the WAC checks
whether the number of STAs on this AP reaches the load balancing threshold.
Then, the WAC determines whether to steer the STA to a neighboring AP that
meets the load balancing conditions based on the load balancing algorithm.
Static load balancing limits the maximum number of AP radios to 16 and allows
only radios on the same frequency band to join a load balancing group.
Additionally, a load balancing group needs to be manually specified. In practice,
dynamic load balancing is recommended. In this mode, APs collect neighbor
information and steer STAs to proper APs based on the load balancing status,
dynamically implementing better STA access.
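The dynamic load balancing decision described above can be sketched as follows: once the STA count on the current AP crosses the load balancing threshold, steer the STA to a neighboring AP that has usable signal and a clearly lighter load. The threshold, RSSI floor, and load-gap values are illustrative assumptions, not the WAC's actual algorithm parameters.

```python
def steer_target(current_ap, neighbors, threshold=32,
                 min_rssi=-65, min_gap=6):
    """Return the name of a better neighbor AP, or None to stay put."""
    if current_ap["stas"] < threshold:
        return None                                    # below threshold
    candidates = [
        n for n in neighbors
        if n["rssi"] >= min_rssi                       # usable signal
        and current_ap["stas"] - n["stas"] >= min_gap  # clearly lighter
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda n: n["stas"])["name"]

ap = {"name": "AP1", "stas": 40}
nbrs = [{"name": "AP2", "stas": 12, "rssi": -60},
        {"name": "AP3", "stas": 38, "rssi": -55}]
# AP2 has good signal and far fewer STAs, so the STA is steered there;
# AP3 is nearly as loaded as AP1 and is not considered.
assert steer_target(ap, nbrs) == "AP2"
```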
NOTE
1. The two frequency bands of an AP enabled with the band steering function must use the
same SSID and security policy. The band steering function cannot be deployed on a single-
radio AP.
2. To allow STAs to preferentially associate with the 5 GHz radio and achieve better
access performance, configure higher transmit power for the 5 GHz radio than for the
2.4 GHz radio.
In Layer 2 roaming, the service VLAN and gateway remain unchanged after STA
roaming, and traffic can be directly forwarded on the new AP. During WLAN
deployment, Layer 2 roaming is recommended. In the case of a single WAC, the
user gateway can be deployed on the WAC or a core switch at an upper layer. In
the case of multiple WACs (deployed in off-path or in-path mode), it is
recommended that the gateway be deployed on a core switch at an upper layer,
and inter-WAC roaming (still Layer 2 roaming) be used. During Layer 2 roaming,
the gateway remains unchanged, and either tunnel forwarding or direct
forwarding can be adopted. Select a forwarding mode based on service
requirements.
In Layer 3 roaming, the VLAN and gateway of a STA both change after roaming,
and the STA moves between Layer 3 networks. If different VLANs and gateways
are deployed in different buildings or areas, Layer 3 roaming is used. After
roaming, the STA IP address remains unchanged. On the new network, this IP
address cannot directly communicate with the corresponding gateway, and thus
traffic cannot be forwarded. Therefore, a tunnel must be established between
WACs to forward traffic of the roaming STA to the original gateway. In this case,
an inter-WAC mobility group must be configured, with a tunnel established
between WACs to forward STA traffic. If Layer 3 roaming is required on the
network, the tunnel forwarding mode is recommended because this mode does
not require the setup of a large number of tunnels between APs and allows
traffic to be forwarded only through the roaming tunnel between WACs.
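The distinction drawn above reduces to a simple test on the STA's service VLAN and gateway before and after the move; a Layer 3 roam is exactly the case where either changes and an inter-WAC tunnel back to the home gateway becomes necessary. The VLANs and gateway addresses are illustrative.

```python
def roam_type(old, new):
    """Classify a roam from the STA's pre- and post-roam attachment."""
    if old["vlan"] == new["vlan"] and old["gateway"] == new["gateway"]:
        return "L2"   # traffic is forwarded directly on the new AP
    return "L3"       # traffic is tunneled between WACs to the home gateway

before = {"vlan": 10, "gateway": "10.1.10.254"}
after_same = {"vlan": 10, "gateway": "10.1.10.254"}
after_moved = {"vlan": 20, "gateway": "10.1.20.254"}
assert roam_type(before, after_same) == "L2"
assert roam_type(before, after_moved) == "L3"
```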
During inter-WAC roaming, especially inter-WAC Layer 3 roaming, service traffic
generated by a STA needs to be redirected to the home WAC through the tunnel
between WACs for forwarding. This complicates STA roaming and consumes more
WAC resources and inter-WAC link resources. Therefore, in actual deployments,
you are advised to properly plan the WLAN to avoid possible inter-WAC roaming.
For example, configure APs in the same building or at the same site to be
managed by the same WAC. If inter-WAC roaming is inevitable, properly plan the
number of members in the mobility group to reduce resource consumption caused
by user information synchronization between mobility group members.
Key Points in Designing the In-Roaming Packet Loss Rate and Handover
Delay
Apart from the basic roaming functions, the packet loss rate and handover delay
during STA roaming are also important indicators to consider. For example, in
industrial manufacturing scenarios, the automated guided vehicles (AGVs) used in
warehouses and factories require the network system to deliver a packet loss rate
less than 1% and a roaming delay less than 100 ms.
To this end, when designing wireless roaming, you are advised to:
● Ensure signal coverage continuity. That is, ensure no coverage hole exists in
areas where roaming is required. Keep a 10% to 15% signal overlap between
the coverage areas of neighboring APs to ensure smooth STA roaming
between the APs.
● Enable the fast roaming function to reduce the handover delay and
minimize the packet loss probability.
Huawei WLAN supports pairwise master key (PMK) fast roaming and 802.11r fast
roaming. Table 2-23 lists the handover delay of STAs in different roaming modes.
Fast roaming can be enabled as required.
Smart Roaming
Dumb terminals and some outdated STAs have low roaming aggressiveness. As a
result, they stick to the initially connected APs regardless of the long distance from
the APs, weak signals, or low rates. The STAs do not roam to neighboring APs with
better signals. Such STAs are generally called sticky STAs. The negative impact of
sticky STAs is described as follows:
● The service experience of a sticky STA is poor, and such a STA is always
associated with an AP with poor signal strength. As a result, the channel rate
decreases significantly.
● The overall performance of wireless channels is affected. A sticky STA may
encounter frequent packet loss or retransmission caused by poor signal
quality and low rates, and therefore occupies the channel for a long time. As
a result, other STAs cannot obtain sufficient channel resources.
To reduce the impact of sticky STAs on a WLAN, you are advised to enable smart
roaming. The smart roaming function intelligently identifies sticky STAs on the
network and proactively directs them to APs with better signals in a timely
manner. This function improves user experience in terms of the following aspects:
● Better performance: Smart roaming can direct poor-signal STAs to APs with
better signals, improving user service experience and overall channel
performance.
● Load balancing: Smart roaming ensures that each STA is associated with the
nearest AP, achieving inter-AP load balancing.
AI Roaming
In smart roaming, APs scan STAs on their operating channels, which may lead to
the following problems that affect the roaming effect:
● The operating radio of an AP is used to scan STAs. If no STA is scanned, the
generated roaming neighbor information may be incomplete, affecting
roaming steering.
● A unified roaming steering mechanism is used during smart roaming, without
distinguishing STAs. As the roaming sensitivity varies with different STAs, the
mechanism may fail in some cases.
● During smart roaming, the Received Signal Strength Indicator (RSSI) of a STA
is detected by the AP, but not in the opposite way. Therefore, the roaming
neighbor to which the STA is steered may not be the optimal one.
The AI roaming feature can be deployed to resolve the preceding problems. As
illustrated in Figure 2-55, AI roaming utilizes intelligent analysis algorithms to
profile the roaming capabilities of STAs, identify such capabilities of different STA
types and operating system versions, and provide targeted roaming steering for
the STAs, improving the roaming steering success rate. Combined with the
independent scanning radio (a third radio) feature, AI roaming uses a dedicated
radio for real-time STA scanning to obtain, from STAs' RSSI measurement packets,
the AP signal strength that STAs detect. In this way, more complete and
effective information about roaming neighbors is available, so that the optimal
AP to which a STA should roam can be identified, enhancing user experience
during roaming.
When deploying AI roaming, ensure that roaming profiles of STAs are available,
which are used to obtain STAs' roaming characteristics for differentiated steering.
The system has built-in STA profile files and can dynamically generate roaming
profiles of STAs on the live network through online real-time learning. In addition,
AI roaming depends on terminal identification. That is, the roaming profile of a
STA can be matched only after the STA is identified. The WAC has a built-in
terminal fingerprint database, which can help identify STAs, without the need to
work with iMaster NCE-Campus.
AI roaming depends on hardware and feature deployment. Pay attention to the
following when deploying this feature:
● AI roaming depends on the terminal identification capability of the WAC. The
WAC has a built-in terminal identification database that cannot be upgraded
currently. Therefore, some new STAs may not be identified.
● AI roaming needs to work with the independent scanning radio (a third
radio), that is, the AP with this feature deployed must support such a radio.
Some AP models support an independent scanning radio only after they have
a right-to-use (RTU) license loaded. Therefore, this feature can be deployed
only when the hardware conditions are met.
● AI roaming supports roaming steering only for STAs working on the 5 GHz
frequency band. Therefore, a 5 GHz SSID must be deployed on the network.
● AI roaming is mutually exclusive with the PMF feature, and therefore cannot
work for a device with the PMF feature enabled.
Wireless tag location technology uses radio frequency identification (RFID) devices
and a location system to locate a specific target via a WLAN. This technology
involves locating Wi-Fi, Bluetooth, and UWB tags. To implement wireless tag
location, an AP collects and sends tag information to a location server. The
location server then calculates the physical location of the tag and sends the
calculated data to a third-party device so that the user can view the location of
the target tag through a map or table. Huawei's end-to-end wireless tag location
solution is provided in cooperation with third-party vendors in the industry.
Wireless terminal location involves locating Wi-Fi and Bluetooth terminals.
● Wi-Fi terminal location technology locates terminals based on wireless signal
strength information in the surrounding environment collected by APs. To be
specific, an AP reports the collected wireless signal information transmitted by
a Wi-Fi terminal to a location server. The location server calculates the
location of the terminal according to the obtained wireless signal information
as well as the AP's location, and then displays the terminal's location to the
user. The Wi-Fi terminal location solution can be implemented by using
Huawei WLAN devices (the location engine of iMaster NCE-CampusInsight
serves as the location server) or by cooperating with third-party partners.
For details about the principles of the cooperation solution between Huawei and
third-party location vendors as well as device selection, see related documents at:
https://e.huawei.com/en/material/bookshelf/bookshelfview/202004/03160039
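A location server in the spirit described above can be sketched with a log-distance path-loss model and a weighted centroid: each AP reports the RSSI it hears from the terminal, RSSI is converted to an estimated distance, and closer APs weigh more in the position estimate. The model constants, AP coordinates, and the centroid method are illustrative assumptions, not the iMaster NCE-CampusInsight algorithm.

```python
def rssi_to_distance(rssi_dbm, tx_at_1m=-40.0, path_loss_exp=3.0):
    """Log-distance path-loss model: distance (m) from measured RSSI,
    given an assumed RSSI at 1 m and a path-loss exponent."""
    return 10 ** ((tx_at_1m - rssi_dbm) / (10 * path_loss_exp))

def locate(reports):
    """reports: list of (ap_x, ap_y, rssi_dbm) tuples from APs that
    hear the terminal. Closer (stronger) APs get larger weights."""
    weighted = [(x, y, 1.0 / rssi_to_distance(r)) for x, y, r in reports]
    total = sum(w for _, _, w in weighted)
    return (sum(x * w for x, _, w in weighted) / total,
            sum(y * w for _, y, w in weighted) / total)

# Three APs at known positions; the strong report (-45 dBm) at the
# origin pulls the estimate toward that AP.
x, y = locate([(0, 0, -45), (10, 0, -60), (0, 10, -60)])
assert 0 < x < 5 and 0 < y < 5
```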
A security zone identifies a network, and a firewall connects networks. Firewalls use
security zones to divide networks and mark the routes of packets. When packets
travel between security zones, security check is triggered and corresponding
security policies are enforced. Security zones are isolated by default.
Generally, there are three types of security zones: trusted, DMZ, and untrusted.
● Trusted zone: refers to the network of internal users.
● DMZ: demilitarized zone, which refers to the network of internal servers.
● Untrusted zone: refers to untrusted networks, such as the Internet.
Figure 2-57 Firewall security zone division when user gateways are located inside
the fabric
equal-cost multi-path routing (ECMP) routes, the firewall can forward the access
traffic from two different paths to Server 1. Apparently, path 1 is not the best
path, and path 2 is the most desired path. After you configure ISP-based traffic
steering, when an intranet user accesses Server 1, the firewall selects an outbound
interface based on the ISP network where the destination address resides to
enable the access traffic to reach Server 1 through the shortest path, that is, path
2 in Figure 2-59.
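ISP-based traffic steering replaces the ECMP hash with a lookup against per-ISP address sets: the firewall chooses the outbound interface whose ISP address space contains the destination. The prefixes and interface names below are illustrative assumptions standing in for real ISP address libraries.

```python
import ipaddress

# Hypothetical per-ISP address sets bound to egress interfaces.
isp_routes = [
    (ipaddress.ip_network("198.51.100.0/24"), "GE1/0/2"),  # ISP 1 uplink
    (ipaddress.ip_network("203.0.113.0/24"),  "GE1/0/3"),  # ISP 2 uplink
]

def egress_interface(dst: str, default="GE1/0/2"):
    """Pick the uplink whose ISP address set contains the destination;
    fall back to a default uplink otherwise."""
    addr = ipaddress.ip_address(dst)
    for net, ifname in isp_routes:
        if addr in net:
            return ifname
    return default

# A server in ISP 2's address space is reached over the ISP 2 uplink
# (the shortest path), instead of crossing between ISP networks.
assert egress_interface("203.0.113.10") == "GE1/0/3"
```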
The network configuration consists of two parts: core switch -> firewall and
firewall -> core switch -> egress router.
● Core switch -> firewall: On the core switch, L3 egress is configured for the
fabric for interconnection with the southbound interfaces of the firewalls. The
firewalls are deployed in VRRP hot standby (HSB) mode and connect to the
core switch through the southbound interfaces. It is recommended that static
routes be used between firewalls' southbound interfaces and the core switch.
● Firewall -> core switch -> egress router: It is recommended that OSPF be
configured to implement communication. When configuring OSPF on
firewalls, you need to import the static routes from the firewalls destined for
the campus intranet. When configuring OSPF on egress routers, you need to
import the default routes from the egress routers destined for the external
network. Interfaces connecting the core switch to the firewalls and egress
routers need to be added to additional VPN instances for isolation from other
traffic.
Figure 2-61 Using static routing between the firewall and the core switch
● Dynamic routing
If VRRP is not deployed on firewalls, dynamic routing can be used to
implement automatic switching of the service traffic path. In this case, you
need to run the hrp standby-device command on the standby firewall to set
it to the standby state. As shown in Figure 2-62, OSPF is used as an example.
When both the active and standby firewalls work properly, the active firewall
advertises routes based on the OSPF configuration, and the cost of the OSPF
routes advertised by the standby firewall is adjusted to 65500 (default value,
which can be changed). In such a scenario, the core switch selects a path with
a smaller cost to forward traffic, and all service traffic is diverted to the active
firewall for forwarding.
If the active firewall is faulty, the standby firewall converts to the active state.
In addition, the VRRP Group Management Protocol (VGMP) adjusts the cost
of the OSPF routes advertised by the active firewall to 65500 and that of the
OSPF routes advertised by the standby firewall to 1. After route convergence
is complete, the service traffic path is switched to the standby firewall.
Figure 2-62 Using dynamic routing between the firewall and the core switch
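The cost-based path selection above can be sketched as follows (a minimal Python sketch; the firewall names are illustrative):

```python
def best_next_hop(routes):
    """routes maps a next-hop name to its advertised OSPF cost; the
    core switch installs the route with the lowest cost."""
    return min(routes, key=routes.get)

# Normal state: the active firewall advertises cost 1, the standby 65500.
assert best_next_hop({"fw1": 1, "fw2": 65500}) == "fw1"
# After a failover, VGMP swaps the costs and traffic moves to fw2.
assert best_next_hop({"fw1": 65500, "fw2": 1}) == "fw2"
```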
For details about the deployment when using different routing protocols between
the firewall and the core switch, see the external network design in 2.2.4.3 Fabric
Network Design.
Table 2-25 describes the recommended security policy design for common zones.
● If private IP addresses are used on the intranet, source NAT technology needs
to be used to translate source IP addresses of packets to public IP addresses
when user traffic destined for the Internet passes through the firewall.
Network Address Port Translation (NAPT) is recommended to translate both
IP addresses and port numbers, which enables multiple private addresses to
share one or more public addresses. NAPT applies to scenarios with a few
public addresses but many private network users who need to access the
Internet.
● If intranet servers are used to provide server-related services for public
network users, destination NAT technology is required for translating
destination IP addresses and port numbers of the access traffic of public
network users into IP addresses and port numbers of the servers in the
intranet environment.
● When two firewalls operate in VRRP hot standby (master/backup) mode, IP
addresses in the NAT address pool may be on the same network segment as
the virtual IP addresses of the VRRP group configured on the uplink interfaces
of the firewalls. If this is the case, after the return packets from the external
network arrive at the PE, the PE broadcasts ARP packets to request the MAC
address corresponding to the IP address in the NAT address pool. The two
firewalls in the VRRP group have the same NAT address pool configuration.
Therefore, the two firewalls send the MAC addresses of their uplink interfaces
to the PE. In this case, you need to associate the hot standby status (master/
backup) of the firewalls with the NAT address pool on each firewall, so that
only the master firewall in the VRRP group responds to the ARP requests
initiated by the PE.
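The NAPT behavior described in the first bullet can be sketched as follows (a minimal Python sketch of the principle; the addresses and starting port are illustrative):

```python
import itertools

class Napt:
    """Minimal NAPT sketch: many private (IP, port) pairs share one
    public address by rewriting both the source IP and source port."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.ports = itertools.count(10000)  # next translated port
        self.out = {}   # (priv_ip, priv_port) -> translated port
        self.back = {}  # translated port -> (priv_ip, priv_port)

    def translate_out(self, priv_ip, priv_port):
        """Outbound: map the private pair to (public_ip, new port)."""
        key = (priv_ip, priv_port)
        if key not in self.out:
            port = next(self.ports)
            self.out[key] = port
            self.back[port] = key
        return self.public_ip, self.out[key]

    def translate_back(self, public_port):
        """Return traffic: recover the private pair from the port."""
        return self.back[public_port]
```

Because the port is part of the mapping, two intranet hosts using the same source port still get distinct translated entries on one public address.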
● It is recommended that the authentication control point for wired user access
be deployed on the edge node. When configuring access management for a
fabric, you need to configure the authentication control point for wired user
access. For details, see "Access Management Design" in 2.2.4.3 Fabric
Network Design.
● The authentication control point for wireless user access is deployed on the
WAC. For details about the planning and design of the authentication control
point for wireless user access, see "WLAN Admission Design" in 2.2.5 WLAN
Design.
In the centralized gateway solution, you are advised to deploy a VXLAN across
core and access layers for fabric networking. If the VXLAN is deployed across core
and aggregation layers (for example, in the device reuse and reconstruction
scenario), policy association can be deployed between an aggregation switch
(functioning as an edge node) and access switch.
fabric networking. In this way, the edge node can function as both the
authentication control point and authentication enforcement point. In a few
network device reuse scenarios, the VXLAN is deployed across core and
aggregation layers, and an aggregation switch functions as the edge node. In
this deployment mode, policy association can be deployed between the edge
node and access switch. In this way, the wired authentication control point
does not need to be moved down from the edge node to the access switch,
the number of wired authentication control points does not increase, and the
wired authentication enforcement point that controls access of wired user
terminals still sits on the access switch.
● Policy association is designed based on the traditional "WAC + Fit AP"
architecture for access control. In this architecture, WACs function as wireless
authentication control points and APs as wireless authentication enforcement
points. Wireless user authentication information is synchronized between
WACs and APs through CAPWAP tunnels. The AP prevents unauthorized users
from accessing the campus network.
mode can be used as the wireless policy enforcement point for deploying
security group policies for free mobility. When the border node functions
as the wireless policy enforcement point, it is not a wireless
authentication control point and cannot obtain IP-security group entry
information through user authentication and authorization. Therefore,
you need to enable IP-security group entry subscription on the border
node.
The free mobility solution is recommended for policy control on large- and
medium-sized campus networks. If the existing campus network of the customer
does not support the free mobility solution, use the traditional NAC solution.
complex and difficult to maintain. Therefore, for large- and medium-sized campus
networks, you are advised to use the NAC server to authorize dynamic ACL
policies. With this approach, terminals do not need to be strictly bound to IP
addresses and VLANs, making IP and VLAN planning flexible, as shown in Figure
2-66. When different types of users are present, you are advised to restrict access
locations of the users. That is, users with different permissions access the Internet
from their respective areas specified by the administrator. This ensures that only
related policies need to be configured on devices in these areas. Otherwise, it will
be difficult to configure policies and perform O&M.
B | Empty | NA | Empty | NA
D | Empty | NA | Empty | NA
policies that need to be defined and thereby simplifying policy configuration. For
example, as described in Table 2-28, an administrator simply needs to configure a
policy for denying access from group A to group B, while access from group A to
the Any group remains permitted by default.
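The default-permit model described above can be sketched as follows (a minimal Python sketch; the group names follow the example):

```python
# Explicit deny policies only; everything else falls through to the
# default permit behavior toward the Any group.
DENY_RULES = {("A", "B")}

def allowed(src_group, dst_group):
    """Return True unless an explicit deny policy matches."""
    return (src_group, dst_group) not in DENY_RULES
```

Only the denied pairs need configuration; every other group pair is permitted without an explicit policy, which keeps the policy matrix small.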
NOTE
● Only bypass in wired user authentication and wireless user authentication (native WAC)
scenarios can be configured on NCE-Campus. In wireless user authentication
(standalone WAC) scenarios, bypass can be configured through the web UI or CLI.
● The Portal server needs to support the heartbeat detection function.
● Bypass policies do not support VLAN authorization for online Portal users.
● Bypass when the authentication server goes Down is not supported for wireless users
using 802.1X authentication. In this case, new users are not allowed to access the
network. In standalone WAC scenarios, you can configure an SSID in open-system
authentication mode which is automatically enabled when the authentication server
goes Down. This SSID allows users to access the temporary bypass network.
● In 802.1X authentication for wired users, when the RADIUS server goes Down, some
new clients will fail to go online because they do not have bypass rights. For example,
when a Windows client receives a Success packet from the device, but does not receive
the authentication packets exchanged with the RADIUS server, the client fails the
authentication and cannot go online. Currently, the following clients have bypass rights
when they go online after the user bypass function is configured: H3C iNode clients
using EAP-MD5 and PEAP, and Cisco AnyConnect clients using EAP-FAST and PEAP. For a
Windows client (for example, Windows 7 client), choose Local Area Connection >
Properties > Authentication > Fallback to unauthorized network access to grant
bypass rights to the client.
● NCE-Campus does not support configuration of re-authentication for users in
authentication bypass state after the authentication server or Portal server comes Up.
NOTE
● Only bypass in wired user authentication and wireless user authentication (native WAC)
scenarios can be configured on NCE-Campus. In wireless user authentication
(standalone WAC) scenarios, bypass can be configured through the web UI or CLI.
● Bypass before successful authentication cannot be configured for wireless users using
802.1X authentication.
NOTE
● If EAP-PEAP-MSCHAPv2 is used, the third-party AD/LDAP server does not support the
bypass function.
● If NCE-Campus is not in AD synchronization mode, the third-party AD/LDAP server does
not support the bypass function.
● When NCE-Campus functions as both an authentication server and a RADIUS relay
agent in the multi-mode authentication scenario, bypass when the third-party
authentication server goes Down is not supported. It is recommended that all
authentication be deployed on the third-party server.
NOTE
When terminal identification is used together with the VLAN authorization policy, you can
disable pre-connection in 802.1X and MAC address authentication scenarios to prevent IP
address re-assignment to terminals.
In dumb terminal access scenarios, you can therefore deploy the terminal
anomaly detection function to detect dumb terminals that are spoofed or attacked
in real time and block their access to the network, ensuring network security.
Function Description
NOTE
● The terminal anomaly detection function can be configured only on switches using
commands, but not on iMaster NCE-Campus.
● The terminal anomaly detection function applies only to wired access terminals.
● The terminal anomaly detection function supports three types of dumb terminals: IP
phones, IP cameras, and printers.
format that meets Huawei iConnect ecosystem standards. The exchanged wireless
protocol packets trigger the first access authentication for Huawei iConnect
terminals. In addition, the authentication packets sent from the WAC to iMaster
NCE-Campus (functioning as the NAC server) also carry electronic identity
information. With this information, iMaster NCE-Campus is able to identify
terminals, and then re-authenticates them and delivers authorization policies
when the terminals access the service SSID.
Related Products
Table 2-34 lists the related products involved in the Huawei iConnect terminal
access control solution in the current version of the CloudCampus Solution.
Table 2-34 Products involved in the Huawei iConnect terminal access control
solution
IoT Terminal | Authentication Server | PKI Server | IoT Platform
Table 2-35 Different phases when Huawei iConnect terminals access the
campus network
automatically apply for and load digital certificates. Table 2-37 describes the
schemes for Huawei iConnect terminals to automatically apply for digital
certificates in different scenarios.
Table 2-37 Schemes for Huawei iConnect terminals to automatically apply for
digital certificates in different scenarios
1. The campus administrator purchases SIM cards and 5G IoT terminals, and
then manually imports information such as the international mobile
subscriber identity (IMSI) and international mobile equipment identity (IMEI)
of the terminals to iMaster NCE-Campus (functioning as the authentication
server), or synchronizes such information from the IoT platform deployed by
the enterprise to iMaster NCE-Campus.
2. 5G terminals equipped with SIM cards access the 5G core network after
successful 5G-AKA authentication, and then create PDU sessions.
3. The session management function (SMF) module on the 5G core network
triggers RADIUS PAP or CHAP authentication, and terminal information such
as the IMSI and IMEI carried in RADIUS packets is reported to iMaster NCE-
Campus for authentication. The communication between the SMF and
iMaster NCE-Campus involves sensitive information. Therefore, data flows
between the SMF and iMaster NCE-Campus are transmitted through the
private line and are encrypted by IPsec tunnels.
4. After the authentication is successful on iMaster NCE-Campus, the 5G
terminal can access the enterprise's intranet resources through the 5G
network.
5. After the authentication is successful, iMaster NCE-Campus delivers the
security policy for 5G terminals to the MSCG.
NOTE
● Currently, only iMaster NCE-Campus can function as the authentication server to work
with the carrier's SMF to perform second authentication on 5G terminals and work with
the MSCG.
● 5G terminals in the current solution refer to IoT terminals only and cannot roam
between 5G base stations.
● In the current solution, 5G terminals cannot smoothly switch from a carrier network to
a campus Wi-Fi network.
● Air interface security: Identifies and defends against attacks such as rogue
APs, rogue STAs, unauthorized ad-hoc networks, and DoS attacks.
● STA access security: Ensures the validity and security of STAs' access to the
WLAN.
● Service security: Protects service data of authorized users from being
intercepted by unauthorized users during transmission.
● To defend against malicious attacks, you are advised to enable attack
detection in public areas and student dormitories with high security
requirements to detect flood, weak-IV, and spoofing attacks,
automatically add attackers to the dynamic blacklist, and alert the
administrator through alarms.
refer to 2.2.8.2.4 Core Layer if the aggregation switch functions as the user
gateway or authentication point.
If a large number of ARP Request packets are sent to the main control board for
processing, the CPU usage of the main control board will increase and other
services cannot be processed promptly.
The optimized ARP reply function addresses this issue. After this function is
enabled, the interface card directly responds to ARP requests if the ARP Request
packets are destined for the local interface of the switch, helping defend against
ARP flood attacks. This function is applicable to the scenario where a modular
switch is configured with multiple interface cards or fixed switches are stacked.
● Remote attestation
Remote attestation for NE software package integrity is provided to ensure
secure NE running.
● NE situational awareness
Based on AI algorithms, the NE log analysis helps determine the overall NE
security situation and predict the future trend, enabling security O&M
personnel to quickly obtain and understand a large amount of network
security data and locate threat sources in a timely manner. Security O&M
personnel can thereby quickly respond to various attacks, threats, and
exceptions on network devices, ensuring service continuity.
The NE situational awareness function can detect the following exceptions:
Brute force cracking, login using a blacklisted IP address, an unusual IP
address, an unauthorized account, a compromised account, or a zombie
account; login through an uncommon path or at unusual time, abnormal
number of login accounts, abnormal login frequency, unauthorized account
creation, unauthorized password change, unauthorized account activation
(detected when the product has activation logs), password change violation,
unauthorized account deletion, unauthorized user permission change,
unauthorized operation attempt, file permission escalation, key file
tampering, Rootkit attack, unauthorized superuser, and shell file tampering.
● Security configuration check
The security configuration check function provides visualized NE security
management capabilities. The function supports the checking of device
Network status: Traffic models of various services and E2E forwarding paths of
each service, including forwarding paths in normal and abnormal conditions.
Purpose: Determine the network location where a QoS policy is to be deployed.
collect and analyze multiple packets to identify the protocol type. The system
analyzes traffic flowing through the device, and compares the analysis result with
the signature database file loaded on the device. It identifies an application by
detecting signatures in data packets, and performs further application quality
analysis and applies QoS policies based on the identification result.
Figure 2-72 shows the deployment position of the SAC function on a large or
midsize campus network. Application traffic of wired users is identified by access
switches, and that of wireless users is identified by APs.
The applications that can be identified by the SAC function depend on the
signature database supported by a device. The device can accurately identify some
mainstream applications that are covered by its signature database.
In actual scenarios, however, enterprises also use private applications. If these
private applications are not covered by the signature database supported by the
device, the device cannot effectively identify them. In this case, you
can define application identification rules to identify private applications based on
key 5-tuple or URL information. The following figure is an example of customizing
an application identification rule.
NOTE
● If user-defined rules conflict with the rules in the signature database, the device uses
the user-defined rules for application identification.
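The identification order described in the note can be sketched as follows (a minimal Python sketch; the rule entries and application names are hypothetical, and a real device matches traffic against its loaded signature database file):

```python
# Hypothetical user-defined 5-tuple rule (keyed here by destination IP,
# port, and protocol for brevity) and signature-database entries.
USER_DEFINED_RULES = {("10.1.1.10", 8443, "tcp"): "corp-erp"}
SIGNATURE_DB = {(443, "tcp"): "https", (5060, "udp"): "sip"}

def identify_app(dst_ip, dst_port, proto):
    """User-defined rules are checked first, so they win on conflict
    with the signature database."""
    rule = USER_DEFINED_RULES.get((dst_ip, dst_port, proto))
    if rule is not None:
        return rule
    return SIGNATURE_DB.get((dst_port, proto), "unknown")
```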
NOTE
Security groups are supported for wireless users only in native WAC scenarios.
measurement for all flows sent to a server on a per-flow basis, you need to
configure the 5-tuple information of each flow separately. By contrast, iPCA 2.0
requires only two configurations: one any-to-server DIP rule configured for
upstream flows, and the other server SIP-to-any rule for downstream flows. Then,
a 5-tuple flow table is automatically generated for each flow that matches either
of the two rules for further iPCA 2.0 measurement.
iPCA 2.0 uses the color bit (bit 0 in the IP Flags field is recommended) to measure
packet loss and delay.
As shown in Figure 2-74, iPCA 2.0 defines three types of measurement points: in-
point, out-point, and mid-point. The in-point is used for coloring and
measurement, the out-point for decoloring and measurement, and the mid-point
for measurement. For a bidirectional flow, the upstream (terminal-to-server) and
downstream (server-to-terminal) traffic on the same interface of a device passes
through the in-point and out-point, respectively. iPCA 2.0 needs to be configured
on the interfaces through which all flows may pass.
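The coloring-based loss measurement can be sketched as follows (a minimal Python sketch of the principle, not the device implementation):

```python
COLOR_BIT = 0b100  # the reserved bit (bit 0) of the 3-bit IP Flags field

def color(flags):
    """In-point: set the color bit on packets of the measured flow."""
    return flags | COLOR_BIT

def colored_count(flags_list):
    """Each measurement point counts colored packets per period."""
    return sum(1 for f in flags_list if f & COLOR_BIT)

def packet_loss(in_count, out_count):
    """Loss in a period is the difference between the in-point and
    out-point colored-packet counts."""
    return in_count - out_count
```

If the in-point colors five packets in a period and the out-point sees only four of them, the reported loss for that period is one packet.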
NOTE
The preceding figure shows only the monitoring of traffic in one direction. The monitoring
of traffic in the other direction needs to be configured in the reverse direction using the
same logic.
iPCA 2.0 depends on NTP time synchronization and supports only two-way delay
measurement, which requires that the upstream and downstream flows be transmitted
along the same path.
iPCA 2.0-based delay measurement is inaccurate in the presence of out-of-order packets.
Only the S5731-H, S5731-H-K, S5731-S, S5731S-S, S5731S-H, S5732-H, S5732-H-K, S6730-
H, S6730-H-K, S6730S-H, S6730-S, S6730S-S, and modular switches installed with X series
cards support delay measurement.
iPCA 2.0 is not applicable to multicast scenarios.
As illustrated in the preceding figure, when deploying iPCA 2.0, you are advised to
specify mid-points and out-points in advance and enable AutoDetect on them. In
this way, when adding an application flow measurement task, you only need to
specify the application name on in-points, but not on mid-points or out-points.
Application identification must be deployed on the node where the in-point for
flow measurement resides. In this manner, application identification data can be
used to associate an application name with 5-tuple information, which is then
used for traffic matching.
Without application identification data, the nodes where the mid-point and out-
point reside need to automatically identify the packets to be measured and trigger
flow creation and measurement based on the packets colored by the node where
the in-point resides.
If two-way measurement is performed on the node where the out-point resides,
the received reverse flow is not colored because no application identification data
is available. The 5-tuple information in the reverse flow can be obtained only
based on the forward flow. After such information is obtained, an ACL is
automatically delivered to match the 5-tuple and trigger flow coloring, creation,
and measurement.
The colored reverse flow is processed in the same way as the forward flow.
Subsequent nodes identify the color bit in the flow to trigger flow creation and
measurement.
Nodes enabled with AutoDetect can provide sufficient entry resources to
implement automatic in-band flow measurement only after the resource mode of
the nodes is switched to iPCA. In this mode, the number of IPv4/IPv6 dual-stack
STAs on a fabric cannot exceed 30,000.
NOTE
The fault demarcation capability based on application identification can be used only for
long flows. The duration of a long flow must be longer than two iPCA 2.0 reporting periods.
As the minimum reporting period can be set to 10 seconds, only flows with longer than 20-
second duration can be displayed on the analyzer.
Only the S5731-H, S5731-H-K, S5731-S, S5731S-S, S5731S-H, S5732-H, S5732-H-K, S6730-
H, S6730-H-K, S6730S-H, S6730-S, S6730S-S, and modular switches installed with X series
cards support fault demarcation based on application identification.
The application identification and traffic statistics collection functions cannot be configured
on a switch interface where iPCA 2.0 is configured. Therefore, the in-point can be
configured only on the uplink interface. As a result, packet loss on access switches cannot
be detected.
Other constraints are the same as those for 5-tuple-based fault demarcation.
● Hop-by-hop fault demarcation based on security groups and applications
The customer wants to preferentially guarantee experience of key applications
(such as video conferencing and email) for VIP users (such as executives and
student union members) in key places (such as conference rooms, exam rooms
and surrounding places) at key moments (such as exams). In this case, fault
demarcation based on application identification mentioned above cannot meet
the requirements, because the specifications may be insufficient to support fault
demarcation for all key applications. To this end, fault demarcation based on
security groups or both security groups and applications is provided. Security
groups are used to authorize authenticated users based on 5W1H, so that
information including user identities, access locations (switches the users connect
to), and access time can be identified.
NOTE
Security groups are supported for wireless users only in native WAC scenarios.
but packet loss rate, latency, and jitter cannot. In practice, QoS is deployed based
on engineering experience, as shown in Table 2-39.
VoIP data flow: Real-time voice calls over IP networks. The network must provide
low latency and low jitter to ensure service quality. Packet loss: very low;
delay: very low; jitter: very low.
Voice signaling: Signaling protocols for controlling VoIP calls and establishing
communication channels, for example, SIP, H.323, H.248, and the Media Gateway
Control Protocol (MGCP). Signaling protocols have a lower priority than VoIP
data flows because intermittent voice is often considered worse than a call
failure. Packet loss: low; delay: low; jitter: permitted.
Multimedia conferencing: Multiple parties can share camera feeds and screens
over IP networks. Protocols or applications can adapt to different network
quality levels by adjusting the bitrate (image definition) to ensure smoothness.
Packet loss: low or medium; delay: very low; jitter: low.
Streaming media: Online audio and video streaming. Audio and video programs are
made in advance and then cached on local terminals before being played.
Therefore, the requirements on network latency, packet loss, and jitter are
reduced. Packet loss: low or medium; delay: medium; jitter: permitted.
Delay-sensitive data services: Data services that are sensitive to delay. For
example, a long delay on an online ordering system may reduce the revenue and
efficiency of enterprises. Packet loss: low or medium; delay: low; jitter:
permitted.
Low-priority services: Services that are not important to enterprises, such as
social networking and entertainment applications. Packet loss: high; delay:
high; jitter: permitted.
control. Devices in the same DiffServ domain only need to schedule packets in
queues based on the priorities marked on boundary nodes. Typically, service
deployment involves traffic identification at the access layer, DiffServ model
deployment at the aggregation or core layer, and bandwidth control on egress
firewalls.
In WLAN QoS design, you need to consider priority mapping between wired and
wireless network packets. For example, 802.11 packets sent by WLAN clients carry
user priorities or DSCP priorities, VLAN packets on wired networks carry 802.1p
priorities, and IP packets carry DSCP priorities. To ensure consistent QoS
scheduling of packets on wired and wireless networks, you need to configure
priority mapping on network devices.
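A common default mapping between 802.11/802.1p user priorities and DSCP values follows the class-selector convention and can be sketched as follows (a sketch of that convention only; actual devices allow custom mapping tables):

```python
def up_to_dscp(user_priority):
    """802.1p/802.11 user priority p -> DSCP class selector CS(p) = 8p."""
    return user_priority * 8

def dscp_to_up(dscp):
    """DSCP -> user priority: keep the three most significant bits."""
    return dscp >> 3
```

For example, EF traffic (DSCP 46) maps to user priority 5, and user priority 6 maps to CS6 (DSCP 48), so scheduling stays consistent as packets cross the wired/wireless boundary.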
NOTE
This example defines two VIP users (VIP1 and VIP2) and analyzes indicators of
mission-critical applications of the VIP users, including the packet loss rate,
latency, jitter, and bandwidth requirements. For details about traffic classification,
see 2.2.9.3 Traffic Classification Design.
The VIP users and applications in the preceding table are for reference only and
are used to demonstrate the planning and design methods and logic. The
indicator data is therefore not recommended for actual deployments.
You are advised to customize applications using the preceding two methods. After
customizing applications on iMaster NCE-Campus as an administrator, you can
add the customized applications to an application scheduling template.
Table 2-43 Example design of an application scheduling template (for VIP users
VIP1, VIP3, and VIP5)
Application | Priority (a Higher Value Indicates a Higher Priority) | Shaping Bandwidth of Burst Traffic (Mbit/s)
Other applications | 1 | -
Non-critical service application APP6 | 2 | -
Non-critical service application APP5 | 3 | -
Customized application APP4 | 4 | 20
Video conferencing application APP3 | 5 | 10
Customized application APP2 | 6 | -
Instant messaging application APP1 | 7 | -
Since the service traffic models of VIP users VIP1, VIP3, and VIP5 are similar, the
same application scheduling template is defined for the three VIP users. After the
application scheduling templates of all VIP users are designed, you can authorize
these templates to respective VIP users on the controller. Scheduling policies will
be automatically delivered to network devices.
● Improving reliability
Table 2-44 Common protocols used in traditional network O&M that can be
configured on iMaster NCE-Campus
Protocol | Configuration Supported by iMaster NCE-Campus
SSH:
● Configures local accounts for login over SSH on network devices.
● Logs in to the CLI of a network device from iMaster NCE-Campus using SSH.
Basic service monitoring:
● Monitors the basic status of sites, devices, and terminals, such as the online
rate, CPU usage, and memory status of devices at a site.
● Provides traffic statistics analysis and reports from multiple dimensions
(site, terminal, application, and SSID).
● Monitors user information, including the online status and traffic statistics.
WLAN topology:
● Network plan import: After the network plan file made by the WLAN Planner is
imported, iMaster NCE-CampusInsight displays data such as sites, pre-deployed
APs, obstacles, background images, and scale planned in the file.
● Network comparison: After pre-deployed APs are associated with real APs, the
planned data and actual data are compared in terms of the power, channel,
frequency bandwidth, number of clients, negotiated rate, and signal strength,
and the comparison result is displayed.
● Wi-Fi heatmap display: The radio heatmap can be displayed based on the AP
location.
User application experience (application analysis): Based on the monitoring and
analysis of audio and video service sessions, the SIP session statistics,
service traffic trend, and session details list can be displayed, helping users
quickly learn about the quality status of audio and video services.
● The time on network devices must be synchronized with that on CampusInsight. If the
time difference between the network devices and CampusInsight is greater than 10
minutes, CampusInsight cannot display the data reported by the network devices.
Typically, you need to configure an NTP server with the same source to ensure time
synchronization between network devices and CampusInsight. During the installation of
CampusInsight, you need to enter the IP address of the external NTP clock source. In
addition, you need to configure the same IP address of the external NTP clock source on
the network devices.
● The wireless network uses the "WAC + Fit AP" architecture, and the performance data
collection function of the Fit APs must be enabled on the web system of the WAC. This
function cannot be enabled on iMaster NCE-Campus.
● To log in to CampusInsight through iMaster NCE-Campus as a proxy service, you need
to enable the intelligent analyzer agent feature on iMaster NCE-Campus in advance.
In the distributed gateway solution, multiple border nodes are supported. Generally, it is
recommended that a single border node be deployed. If high reliability is needed on the
core layer, multi-border node networking is recommended. The following sections describe
the overall design process based on the single-border node solution. For details about the
differences between the multi-border node solution and the single-border node solution,
see 2.3.10.3 Multi-Border Node Reliability.
E | Device model.
Table 2-49 Resource pools on a fabric and resource invoking methods during VN
creation
Resource Pool on a Fabric How to Invoke Resources in a
Resource Pool During VN Creation
User access point resource pool, which When configuring user access in a VN,
is planned during access management you can select planned access point
configuration for a fabric. This resource resources.
pool includes the authentication
modes that can be bound to access
points.
Egress pool, which contains the When creating a VN, you can select
external resources that can be used by external networks and network service
VNs. Two types of external resources resources.
are created during fabric configuration:
● External networks: used for VNs to
communicate externally
● Network service resources: used for
VNs to communicate with the
authentication server and DHCP
server
functional zones. Modules in each functional zone are clearly defined, and the
internal adjustment of each module is limited to a small scope, facilitating fault
location.
Figure 2-87 Physical network in the virtualization solution for large- and medium-
sized campus networks
In office, education, and hospitality scenarios, central switches and remote units
(RUs) are deployed together to build simplified all-optical campus networks. The
three-layer networking (core, aggregation, and access) for ELV rooms is changed
to the two-layer (core and aggregation) networking. At the access layer, RUs are
deployed to enable access to the desktop. The three-layer networking is also
supported, where access switches function as central switches and RUs are
connected to the access switches.
Figure 2-88 Simplified all-optical networking topology for large and midsize
campus networks
Ter The terminal layer involves various terminals that access the campus
min network, such as PCs, printers, IP phones, mobile phones, and cameras.
al
laye
r
Access layer: The access layer provides various access modes for users and is the first network layer to which terminals connect. It is usually composed of access switches, which are present in large numbers and sparsely distributed across the network. In most cases, an access switch is a simple Layer 2 switch. If wireless terminals are present at the terminal layer, wireless access points (APs) need to be deployed at the access layer and access the network through access switches.
On a three-layer simplified all-optical campus network, access switches are deployed as central switches to manage the connected RUs.
Aggregation layer: The aggregation layer sits between the core and access layers. It forwards horizontal (east-west) traffic between users and forwards vertical (north-south) traffic to the core layer. The aggregation layer can also function as the switching core of a department or zone and connect the department or zone to a dedicated server zone. In addition, the aggregation layer can further extend the number of access terminals.
On a two-layer simplified all-optical campus network, aggregation switches are deployed as central switches to manage the connected RUs.
Core layer: The core layer is the core of data exchange on a campus network. It connects the components of the campus network, such as the DC/network management zone, aggregation layer, and campus egress, and is responsible for high-speed interconnection of the entire campus network. High-performance core switches need to be deployed to meet requirements for high bandwidth and fast convergence upon network faults. It is recommended that the core layer be deployed for any campus with more than three departments.
Egress network: The campus egress is the boundary that connects a campus network to an external network. Internal users of the campus network access the external network through the campus egress zone, and external users access the internal network through the same zone. Firewalls need to be deployed in the campus egress zone to provide perimeter security protection.
DC zone: The DC zone houses service servers, such as the file server and email server, which provide services for internal and external users.
Network management zone: The network management zone is the server zone where the O&M and management systems are deployed. In the virtualization solution for large- and medium-sized campus networks, the following systems are deployed:
● iMaster NCE-Campus: the campus network automation engine. It provisions service configurations for network devices, provides open APIs for integration with third-party platforms, and can function as an authentication policy server to deliver authentication, authorization, and accounting (AAA) and free mobility services.
● iMaster NCE-CampusInsight: the intelligent campus network analytics engine, which provides intelligent O&M services by using telemetry, big data, and intelligent algorithms.
● DHCP server: dynamically assigns IP addresses to user clients.
During network design, you can use the bottom-up method to determine the
layered architecture depending on the network scale, as illustrated in Figure 2-90.
BD Resource Planning
In a VN, a Layer 2 broadcast domain is constructed based on bridge domains
(BDs). In a BD, user terminals in different geographical locations can communicate
with each other. In the virtualization solution for large- and medium-sized campus
networks, BD resource planning guidelines are as follows:
● 1:1 mapping between BDs and user service VLANs is recommended, as shown
in Figure 2-92.
● In a VN, each time a VXLAN user gateway is created, a BD is automatically
invoked from the global BD resource pool of the fabric in sequence. You do
not need to consider how to divide a BD. Instead, you only need to consider
how to assign user service VLANs.
● BD resources in the BD resource pool must be sufficient to support user
service VLAN assignment.
Category | Recommendation
NOTE
In 2.2.7 Access Control Design, if policy association is required between the authentication
point and policy enforcement point, you need to plan a management VLAN for policy
association to establish a CAPWAP tunnel between the authentication point and policy
enforcement point. In the distributed gateway solution, if VXLAN is deployed across core
and aggregation layers (recommended) on the fabric network, policy association is usually
deployed on the aggregation switches (edge nodes) and access switches. If edge nodes also
function as native WACs, APs connected to access switches can establish CAPWAP tunnels
with the edge nodes through the management VLAN for policy association and go online
on the edge nodes. In this case, no additional management VLAN needs to be planned.
Category | Suggestion
NOTE
In 2.2.7 Access Control Design, if policy association is required between the authentication
point and policy enforcement point, you need to plan a management VLAN for policy
association to establish a CAPWAP tunnel between the authentication point and policy
enforcement point. In the distributed gateway solution, if VXLAN is deployed across core
and aggregation layers (recommended) on the fabric network, policy association is usually
deployed on the aggregation switches (edge nodes) and access switches. If edge nodes also
function as native WACs, after management IP addresses for policy association are
configured on the edge nodes, APs connected to access switches can go online on the edge
nodes. In this case, no additional management IP address needs to be planned.
Figure 2-94 Routing protocol planning in the virtualization solution for large- and
medium-sized campus networks
installed on servers in the equipment room. During the installation, make sure
that the egress gateway can communicate with the campus intranet. This section
describes the basic server networking design for communication between these
software systems and the campus intranet.
Active-backup mode
In this mode, one NIC interface in the bonded interface is in the active state, and
the other is in the backup state. All data is transmitted through the active NIC
interface. In the event of a failure on the link corresponding to the active NIC
interface, data is transmitted through the backup NIC interface. In this case, the
Layer 3 switch functioning as the server gateway connects to the two NIC
interfaces on a server through two physical ports. The physical ports do not need
to be aggregated, and are recommended to be added to the VLAN of the
corresponding network plane in access mode. As shown in Figure 2-96, add
physical ports (GE1/0/1 and GE2/0/1) on the switch to VLAN 100 using the
following commands.
<Switch> system-view
[Switch] vlan batch 100
[Switch] interface gigabitethernet 1/0/1
[Switch-GigabitEthernet1/0/1] port link-type access
[Switch-GigabitEthernet1/0/1] port default vlan 100
[Switch-GigabitEthernet1/0/1] quit
[Switch] interface gigabitethernet 2/0/1
[Switch-GigabitEthernet2/0/1] port link-type access
[Switch-GigabitEthernet2/0/1] port default vlan 100
[Switch-GigabitEthernet2/0/1] quit
Load balancing mode
In this mode, both NIC interfaces transmit data, so the two physical ports on the switch must be aggregated into an Eth-Trunk, for example:
<Switch> system-view
[Switch] vlan batch 100
[Switch] interface eth-trunk 1
[Switch-Eth-Trunk1] trunkport gigabitethernet 1/0/1
[Switch-Eth-Trunk1] trunkport gigabitethernet 2/0/1
[Switch-Eth-Trunk1] port link-type access
[Switch-Eth-Trunk1] port default vlan 100
[Switch-Eth-Trunk1] quit
Communication with the user subnet on the overlay network: iMaster NCE-Campus functions as the NAC server for user access authentication, so the user subnet must be able to communicate with iMaster NCE-Campus.
If the network management zone adopts the basic networking design, the topology between the gateway in the network management zone and the core switch cluster is stable, and only a few network segments are required for communication, you are advised to configure static routes between this gateway and the core switch cluster. As illustrated in Figure 2-98, the static routes are planned as follows:
● Two VLANIF interfaces are separately planned on the gateway in the network
management zone as well as on the core switch. One (VLANIF 500 in the
figure) is used for communication between the network management zone
and the device management subnet on the underlay network, and the other
(VLANIF 600 in the figure) for communication between the network
management zone and the user subnet on the overlay network.
● For communication between the network management zone and the device
management subnet on the underlay network:
– On the core switch: Configure a static route destined for the network
management zone. The destination network segment is the network
segment where the software systems (for example, iMaster NCE-Campus
and iMaster NCE-CampusInsight in the figure) that need to communicate
with the device management subnet reside. The next hop of the static
route is the IP address of VLANIF 500 on the gateway in the network
management zone.
– On the gateway in the network management zone: Configure a route
destined for the device management subnet on the underlay network.
The destination network segment is the device management network
segment, and the next hop is the IP address of VLANIF 500 on the core
switch.
● For communication between the network management zone and the user
subnet on the overlay network:
– On the core switch: When creating network service resources for a fabric,
configure the IP addresses of the connected network service resources as
well as the VLANs and IP addresses for interconnecting with the gateway
in the network management zone on the core switch that functions as
the border node. After the configuration is complete, the core switch
imports routes between the virtual routing and forwarding (VRF) instance
that represents the network service resource and the VRF instance that
represents a VN. In addition, the core switch creates a private static route
destined for the network management zone in the VRF instance that
represents the network service resource. The destination network
segment of this static route is the network segment where the software
system that needs to communicate with the user subnet resides, such as
iMaster NCE-Campus or the DHCP server in the figure.
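As a sketch, the two static routes for underlay communication could look as follows on the CLI. All IP addresses here are hypothetical examples rather than values from this document: assume the software systems in the network management zone reside on 192.168.10.0/24, the device management subnet is 10.10.0.0/16, and the VLANIF 500 interconnection addresses are 10.1.50.1 (core switch) and 10.1.50.2 (gateway in the network management zone).
On the core switch:
[CoreSwitch] ip route-static 192.168.10.0 255.255.255.0 10.1.50.2
On the gateway in the network management zone:
[NMS-GW] ip route-static 10.10.0.0 255.255.0.0 10.1.50.1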
Figure 2-98 Planning for communication between the network management zone
and the campus intranet
Network management zone: the switch serving as the gateway in the network management zone is configured through the local CLI or web system. Generally, you need to configure this switch before installing software systems in the network management zone.
In the distributed gateway solution, if the fabric uses the recommended networking of
VXLAN deployed across core and aggregation layers and edge nodes that provide the native
WAC function are used, policy association is deployed between aggregation switches (edge
nodes) and access switches. In this way, an AP connected to an access switch can establish
a CAPWAP tunnel with an edge node through the management VLAN for policy association
and go online on the edge node. No additional management VLAN is required. For details
about the AP join process design, see "AP Join Process Design" in 2.3.5 WLAN Design.
On a large or midsize campus network, you are advised to deploy devices below
the core layer in plug-and-play mode through DHCP to onboard aggregation and
access switches on iMaster NCE-Campus and APs on the WAC. How to implement
management VLAN communication is critical for onboarding devices below the
core layer. In the distributed gateway solution, the management VLANs of
aggregation and access switches can communicate with each other in the
following modes:
Using default VLAN 1 as the management VLAN
As shown in Figure 2-99, the process for onboarding aggregation and access
switches on iMaster NCE-Campus using VLAN 1 (default VLAN) is as follows:
1. The core switch goes online on iMaster NCE-Campus through the CLI.
2. On iMaster NCE-Campus, configure VLANIF 1 on the core switch as the
gateway interface of the management subnet, configure a DHCP address
pool, and configure DHCP Option 148 to carry the southbound IP address of
iMaster NCE-Campus.
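The controller-side settings in step 2 correspond roughly to the following device configuration. This is a hedged sketch: the addresses are hypothetical, and the exact Option 148 string format depends on the device version, so verify it against the product documentation before use.
[Core] interface vlanif 1
[Core-Vlanif1] ip address 10.8.1.1 255.255.255.0
[Core-Vlanif1] dhcp select interface
[Core-Vlanif1] dhcp server option 148 ascii agilemode=agile-cloud;agilemanage-mode=ip;agilemanage-domain=10.8.0.10
[Core-Vlanif1] quit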
Figure 2-99 Using default VLAN 1 for plug-and-play deployment of devices below
the core layer
If VLAN 1 is used as the management VLAN, broadcast storms may occur. To avoid
this, you can enable management VLAN auto-negotiation to configure another
VLAN as the management VLAN. As shown in Figure 2-100, assume that VLAN
100 is used as the auto-negotiated management VLAN. The process for
onboarding aggregation and access switches in plug-and-play mode is as follows:
1. The core switch goes online on iMaster NCE-Campus through the CLI.
2. On iMaster NCE-Campus, configure VLANIF 100 on the core switch as the
gateway interface of the management subnet, configure a DHCP address
pool, and configure DHCP Option 148 to carry the southbound IP address of
iMaster NCE-Campus.
3. Configure the core switch as the root device and use the management VLAN
auto-negotiation function to enable management VLAN communication for
devices below the core layer. The process is as follows:
a. On iMaster NCE-Campus, enable the management VLAN auto-
negotiation function on the core switch and configure VLAN 100 as the
auto-negotiated management VLAN.
b. After the core switch is configured, aggregation switches automatically
add their interfaces to VLAN 100 through protocol packet auto-
negotiation.
Management VLAN Switching Design After Devices Below the Core Layer Go
Online
Sometimes, there are a large number of network devices on a campus network.
After these devices go online in plug-and-play mode for the first time, broadcast
storms may still occur even if an auto-negotiated management VLAN is used. In
this case, you are advised to plan multiple management VLANs. After devices go
online in plug-and-play mode for the first time, switch the management VLAN to
isolate the broadcast domains of these devices.
In the distributed gateway solution, management VLAN switching is not required for APs, because they go online through the management VLAN for policy association, which is separate from the management VLAN used for switch onboarding. For switches, if the broadcast storm risk is still high even without APs, you are advised to plan device groups based on network layers and have each device group use the same management VLAN during VLAN switching. For example, each aggregation switch and its connected downstream devices are grouped into a device group and use the same management VLAN, as shown in Figure 2-101.
Note: Before switching the management VLAN, add the interconnection interfaces
on the core switch and devices below the core layer to the new management
VLAN. In this way, devices below the core layer will not fail to go online due to
communication failures with the core switch on the new management VLAN.
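For example, before switching to a new management VLAN 200 (a hypothetical value), the trunk interconnection interface on the core switch could be updated first; the interface number here is also hypothetical:
[Core] interface gigabitethernet 1/0/10
[Core-GigabitEthernet1/0/10] port trunk allow-pass vlan 200
[Core-GigabitEthernet1/0/10] quit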
When there are fewer than 100 switches in a network area where routes need to
be deployed on the underlay network, single-area orchestration is recommended.
● All switches between the border and edge nodes on the fabric support
automatic orchestration of OSPF routes. These devices refer to all aggregation
and core switches if VXLAN is deployed across the core and aggregation
layers, and refer to all core, aggregation, and access switches if VXLAN is
deployed across the core and access layers.
● All switches between the border and edge nodes on the fabric are planned in
area 0.
● Different VLANIF interfaces are planned on all switches for interconnection
through OSPF. The interconnected Layer 2 interfaces allow packets from the
corresponding VLANs to pass through.
● When configuring a fabric, you need to create loopback interfaces on the
switches that function as border and edge nodes for establishing BGP EVPN
peer relationships. Routes on the network segments where the loopback
interface IP addresses reside are also advertised to area 0.
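The orchestrated result on each switch is roughly equivalent to the following manual OSPF configuration. This is a sketch with hypothetical addresses: 10.1.12.0/24 is one interconnection VLANIF subnet, and 10.0.0.1/32 is the loopback address used for BGP EVPN peering.
[Switch] ospf 1 router-id 10.0.0.1
[Switch-ospf-1] area 0
[Switch-ospf-1-area-0.0.0.0] network 10.1.12.0 0.0.0.255
[Switch-ospf-1-area-0.0.0.0] network 10.0.0.1 0.0.0.0
[Switch-ospf-1-area-0.0.0.0] quit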
When there are more than 100 switches in a network area where routes need to
be deployed on the underlay network, multi-area orchestration is recommended.
● All switches between the border and edge nodes on the fabric support
automatic orchestration of OSPF routes. These devices refer to all aggregation
and core switches if VXLAN is deployed across the core and aggregation
layers, and refer to all core, aggregation, and access switches if VXLAN is
deployed across the core and access layers.
● The core switch is planned in area 0. Each downlink VLANIF interface on the
core switch, as well as the aggregation and access switches connected to
these VLANIF interfaces are planned in the same area.
● Different VLANIF interfaces are planned on all switches for interconnection
through OSPF. The interconnected Layer 2 interfaces are added to the
corresponding VLANs in trunk mode.
● On the core switch that functions as a border node, routes on the network
segment where its loopback interface IP address resides are advertised to area
0. On an edge node, routes on the network segment where its loopback
interface IP address resides are advertised to the area to which the edge node
belongs.
● If a Layer 2 switch is required between the border and edge nodes to transparently transmit traffic between them, this Layer 2 switch cannot take the aggregation switch role. (When adding a switch to a site on iMaster NCE-Campus, you can set the switch role to Core or Regional aggregation.) After the automatic OSPF route orchestration function is enabled, the interfaces connecting this Layer 2 switch to the border and edge nodes allow packets from the corresponding VLANs to pass through.
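On the core switch acting as the area border router, the orchestrated result is roughly equivalent to the following sketch (hypothetical addresses: 10.0.0.1/32 is the border loopback in area 0, and 10.1.1.0/24 is one downlink interconnection subnet placed in area 1):
[Core] ospf 1 router-id 10.0.0.1
[Core-ospf-1] area 0
[Core-ospf-1-area-0.0.0.0] network 10.0.0.1 0.0.0.0
[Core-ospf-1-area-0.0.0.0] quit
[Core-ospf-1] area 1
[Core-ospf-1-area-0.0.0.1] network 10.1.1.0 0.0.0.255
[Core-ospf-1-area-0.0.0.1] quit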
Design Guidelines
In the virtualization solution for large- and medium-sized campus networks, to
reduce the impact of topology changes on the entire network, you are advised to:
● Select a device with higher reliability as the root bridge.
● Divide the entire underlay network into multiple loop detection domains.
Figure 2-104 shows the underlay network in a virtualization scenario on a large or midsize campus. When a loop exists between core and aggregation switches, the loop prevention design can be implemented as follows:
● Do not disable the loop prevention function between the core and aggregation switches. In this case, you can only configure the core switch as the root bridge to improve root bridge robustness.
● Currently, the controller allows you to increase the priority of core or aggregation
switches so that they can be preferentially selected as root bridges.
● To perform VLAN-based loop prevention design for inter-VLAN load balancing, see the
MSTP or VBST design in the switch product documentation.
Table 2-56 VLAN/BD resource plan for the fabric global resource pool
Resource Item | Description
Service VLAN | ● User terminals access the campus network through service VLANs, which are bound to BDs.
● You are advised to assign service VLANs based on the logical areas, organizational structures, and service types of campus networks.
Table 2-57 IP address plan for the fabric global resource pool
Resource Item | Description
Border and edge nodes also function as VTEPs. You are advised to configure the route reflector (RR) function so that the nodes establish BGP EVPN peer relationships through the RR. If no RR is configured, BGP peer relationships need to be established between every pair of edge nodes, and between edge and border nodes; the configuration is complex, and the many BGP connections consume CPU resources. Either border or edge nodes can function as RRs. Because the border node typically has the strongest processing capability, it is recommended that border nodes be used as RRs.
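With the border node as the RR, the peer configuration is roughly as follows. This is a sketch with hypothetical values (AS 65000; edge-node loopback addresses 10.0.0.2 and 10.0.0.3), and command availability depends on the device model and software version:
[Border] bgp 65000
[Border-bgp] peer 10.0.0.2 as-number 65000
[Border-bgp] peer 10.0.0.2 connect-interface LoopBack0
[Border-bgp] peer 10.0.0.3 as-number 65000
[Border-bgp] peer 10.0.0.3 connect-interface LoopBack0
[Border-bgp] l2vpn-family evpn
[Border-bgp-af-evpn] peer 10.0.0.2 enable
[Border-bgp-af-evpn] peer 10.0.0.2 reflect-client
[Border-bgp-af-evpn] peer 10.0.0.3 enable
[Border-bgp-af-evpn] peer 10.0.0.3 reflect-client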
Figure 2-107 Traffic models for L3 shared egress and L3 exclusive egress on a
fabric
communicate with each other through routing protocols. In Figure 2-108, routes
between the border node and firewall are configured based on the route design
principles for communication between campus intranets and external networks.
● Routes from the campus intranet to external networks on the border node:
Generally, default routes are used to prevent a huge number of external
network routes from affecting intranets.
● Routes from external networks to the campus intranet on the firewall: Generally, specific routes are used.
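These two principles can be sketched with static routes as follows. All values are hypothetical examples: 10.2.0.1 and 10.2.0.2 are the border-firewall interconnection addresses, 192.168.0.0/16 is the intranet user address space, and VN1 is a sample VRF instance name.
On the border node, in the VRF instance of a VN:
[Border] ip route-static vpn-instance VN1 0.0.0.0 0.0.0.0 10.2.0.2
On the firewall:
[FW] ip route-static 192.168.0.0 255.255.0.0 10.2.0.1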
Figure 2-108 Route planning between the border node and firewall
When creating external network resources on the border node, you can use any of
the following routing protocols to interconnect the border node with the firewall.
According to the route design principles described above, Table 2-58 lists the
recommended configurations for the three routing protocols.
Table 2-58 Configurations of different routing protocols between the border node
and firewall
Routing Protocol | Default Routes from VNs to External Networks on the Border Node | Return Routes from External Networks to VNs on the Firewall | Interconnection Between the Border Node and Firewall
When selecting a routing protocol between the firewall and border node, you
need to consider how to switch the service traffic path in active/standby
switchover scenarios when firewalls are deployed in HSB mode. For details, see the
egress route design in 2.3.6 Egress Network Design.
NOTE
You can configure routes on the border node when creating external network resources on
iMaster NCE-Campus, and configure routes on the firewall by logging in to the web system
or CLI.
Figure 2-109 Design model of network service resources on a fabric (border node
directly connected to servers)
As shown in Figure 2-109, for each network service resource created on the
border node, a VRF instance is allocated. After a network service resource is
selected during VN creation, the VRF instances of the created VN and network
service resource import routes from each other. In this way, service subnets in the
VN can communicate with the network service resource. Static routes are
configured on the border node based on the addresses for accessing these
network service resources.
In this scenario, the border node is directly connected to network service resources,
and the physical interfaces that connect the border node to the resources are
added to VLANs in access mode.
Figure 2-110 Design model of network service resources on a fabric (border node
directly connected to a switch)
As shown in Figure 2-110, for each network service resource created on the
border node, a VRF instance is allocated. After a network service resource is
selected during VN creation, the VRF instances of the created VN and network
service resource import routes from each other. In this way, service subnets in the
VN can communicate with the network service resource. Static routes are
configured on the border node based on the addresses for accessing these
network service resources.
Figure 2-111 Traffic model for communication between a VN and network service
resources
NOTE
Routes on the border node are automatically delivered when network service resources are
created on iMaster NCE-Campus. To configure routes on the gateway in the network
management zone, log in to the web system or CLI of the device.
This design model is selected only when a DHCP server is deployed on an external
network, as shown in Figure 2-112. In this scenario, the VN and DHCP server
communicate with each other based on an external network design model of the
fabric. This network service resource model is mainly used for obtaining the DHCP
server address. When this model is used, the gateway of the VN subnet can
function as the DHCP relay agent and automatically configure the DHCP server
address after the gateway is created.
NOTE
In the distributed gateway scenario, it is recommended that the border node provide the native WAC function or that standalone WACs be connected to the border node in off-path mode. In this scenario, do not select Extended AP for the interfaces connecting access switches to APs; if this connection type is selected, the APs cannot communicate with the border node through management VLAN auto-negotiation. These interfaces do not require any connection type configuration.
RU Access Design
In the simplified all-optical campus solution, central switches and RUs are
launched as combinations. The following figure shows the networking and
connection types of fabric access interfaces.
RUs do not support VLAN configuration or policy association and are used only as
the remote ports of a central switch. In addition, RUs (without management IP
addresses) are managed by the central switch in a unified manner and are not
displayed as independent NEs on iMaster NCE-Campus. Therefore, during fabric
access network design, you only need to configure port isolation for RUs through
the central switch on iMaster NCE-Campus when deploying policy control (see
Policy Control Solution Design).
An RU provides multiple extension interfaces that can connect to terminals or APs. During access authentication configuration, it is recommended that authentication be enabled on the interfaces connecting to terminals and disabled on the interfaces connecting to APs. Because the authentication policies for the two access types differ, it is not recommended that an RU be connected to both APs and terminals at the same time. If an RU is connected to both, terminal authentication needs to be enabled on the corresponding interface on the central switch, and APs need to be authenticated on the central switch to access the network.
NOTE
1. Port isolation for RUs cannot be configured based on unified fabric orchestration and
needs to be configured site by site.
2. RUs must be deployed together with and directly connected to a central switch.
2.3.4.4 VN Design
Table 2-59 Comparison between the static VLAN mode and dynamically
authorized VLAN mode
VLAN Implementation Application Scenario
Access
Mode
Static VLAN:
● Wired access: Configure a static VLAN on the switch interface connected to wired user terminals.
● Wireless access: Configure a static service VLAN for an SSID.
The static VLAN mode applies when terminals access the network at fixed locations and do not need to be authenticated. This mode is more secure but lacks flexibility: when the locations of terminals change, you need to perform the configuration again.
● Automatic allocation: After the number of user subnets and start VLAN and IP
address of the subnet are specified, the user subnet gateway is automatically
Figure 2-117 Traffic model for communication between users on the same subnet
in a VN
● Users on the same subnet connected to the same edge node can directly
communicate with each other through the edge node.
a. Host 1 and Host 2 are on the same subnet. When Host 1 accesses Host 2,
the destination MAC address of the packet sent by Host 1 to Host 2 is the
MAC address of Host 2.
b. After the packet arrives at Edge 1, Edge 1 searches for the MAC address
entry of Host 2. The entry belongs to VLAN 10 and is learned from
GE0/0/2. Edge 1 then forwards the packet.
c. Host 2 receives the packet from Host 1 through GE0/0/2.
● Users on the same subnet connected to different edge nodes
communicate with each other through the VXLAN tunnel between the edge
nodes.
a. Host 1 and Host 2 are on the same subnet. When Host 1 accesses Host 2,
the destination MAC address of the packet sent by Host 1 to Host 2 is the
MAC address of Host 2.
b. After the packet arrives at Edge 1, Edge 1 searches for the MAC address
entry of Host 2. The entry belongs to BD 10 and is learned from the
tunnel source interface (displayed as the IP address) of Edge 2. Edge 1
then encapsulates the packet into a VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
Edge 2, respectively. Then the packet is forwarded based on the underlay
route.
d. After the packet arrives at Edge 2, Edge 2 performs VXLAN decapsulation,
searches for the MAC address entry of Host 2, determines the outbound
interface GE0/0/1, and forwards the packet.
e. Host 2 receives the packet from Host 1 through GE0/0/1.
Figure 2-118 Traffic model for communication between users on different subnets
in a VN
● Users on different subnets connected to the same edge node can directly
communicate with each other through the edge node.
a. Host 1 and Host 2 are on different subnets. When Host 1 accesses Host 2,
the packet is sent to the gateway first. The destination MAC address of
the packet is the MAC address of VBDIF 10 on the gateway.
b. After the packet arrives at Edge 1, Edge 1 searches the VN 1 routing table
for the direct route to Host 2 and then forwards the packet based on the
ARP entry.
c. Host 2 receives the packet from Host 1 through GE0/0/2.
● Users on different subnets connected to different edge nodes
communicate with each other through the VXLAN tunnel between the edge
nodes.
a. Host 1 and Host 2 are on different subnets. When Host 1 accesses Host 2,
the packet is sent to the gateway first. The destination MAC address of
the packet is the MAC address of VBDIF 10 on the gateway.
b. After the packet arrives at Edge 1, Edge 1 searches for the route to Host
2 in the VN 1 routing table. The next hop is the IP address of the tunnel
source interface of Edge 2. Edge 1 then encapsulates the packet into a
VXLAN packet. The inner destination MAC address of the packet is the
MAC address of Host 2.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
Edge 2, respectively. Then the packet is forwarded based on the underlay
route.
d. After the packet arrives at Edge 2, Edge 2 performs VXLAN decapsulation
and searches for the MAC address entry of Host 2. The entry belongs to
VLAN 20 and is learned from GE0/0/2. Edge 2 then forwards the packet.
e. Host 2 receives the packet from Host 1 through GE0/0/1.
Figure 2-119 Traffic model for subnet communication between VNs through a
border node
the border node, respectively. Then the packet is forwarded based on the
underlay route.
d. After the packet arrives at the border node, the border node performs
VXLAN decapsulation and searches for the route to the network segment
of Host 2 in the VN 1 routing table. Because the VPN routing tables of
VN 1 and VN 2 import routes from each other, the route to the network
segment of Host 2 can be found in the VN 1 routing table. The next hop
of the packet is the IP address of the tunnel source interface of Edge 1.
The border node then encapsulates the packet into a VXLAN packet. The
inner destination MAC address of the packet is the MAC address of VBDIF
20 on Edge 1.
e. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of the border
node and Edge 1, respectively. Then the packet is forwarded based on the
underlay route.
f. After the packet reaches Edge 1, Edge 1 decapsulates the packet by
removing its VXLAN header and searches the VN 2 routing table for the
direct route to the network segment of Host 2. Then Edge 1 directly
forwards the packet out.
g. Host 2 receives the packet from Host 1 through GE0/0/2.
● Users on subnets of different VNs connected to different edge nodes
communicate with each other through the VXLAN tunnels between the edge
nodes and border node. Mutual access traffic is sent to the border node first,
then forwarded between VNs based on the imported routes of the VNs.
a. Host 1 and Host 2 are on different subnets. When Host 1 accesses Host 2,
the packet is sent to the gateway first. The destination MAC address of
the packet is the MAC address of VBDIF 10 on the gateway.
b. After the packet reaches Edge 1, Edge 1 searches the VN 1 routing table
for the route to the network segment of Host 2. Because routes have
been imported between the VPN routing tables of VN 1 and VN 2 on the
border node, Edge 1 can learn the VPN route of VN 2 imported by the
border node from its BGP peer. Then, Edge 1 finds that the next hop is
the IP address of the tunnel source interface of the border node and
encapsulates the packet into a VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
the border node, respectively. Then the packet is forwarded based on the
underlay route.
d. After the packet arrives at the border node, the border node performs
VXLAN decapsulation and searches for the route to the network segment
of Host 2 in the VN 1 routing table. Because the VPN routing tables of
VN 1 and VN 2 import routes from each other, the route to the network
segment of Host 2 can be found in the VN 1 routing table. The next hop
of the packet is the IP address of the tunnel source interface of Edge 2.
The border node then encapsulates the packet into a VXLAN packet. The
inner destination MAC address of the packet is the MAC address of VBDIF
20 on Edge 2.
e. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of the border
node and Edge 2, respectively. Then the packet is forwarded based on the
underlay route.
f. After the packet reaches Edge 2, Edge 2 decapsulates the packet by
removing its VXLAN header and searches the VN 2 routing table for the
direct route to the network segment of Host 2. Then Edge 2 directly
forwards the packet out.
g. Host 2 receives the packet from Host 1 through GE0/0/1.
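The repeated "encapsulates the packet into a VXLAN packet" steps can be made concrete with a minimal sketch of the VXLAN header itself. Per RFC 7348, an 8-byte VXLAN header (a flags byte with the I bit set, followed by a 24-bit VNI) is prepended to the inner Ethernet frame; the outer UDP/IP headers, whose source and destination are the tunnel source interfaces described above, are omitted here. This is an illustrative sketch, not the switches' actual forwarding code, and the VNI value is hypothetical.

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.

    Layout: flags (1 byte, 0x08 = VNI valid), 3 reserved bytes, 24-bit VNI,
    1 reserved byte. Outer UDP/IP encapsulation is left to the transport.
    """
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!B3xI", 0x08, vni << 8) + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    flags, vni_field = struct.unpack("!B3xI", packet[:8])
    assert flags & 0x08, "I flag must be set for a valid VNI"
    return vni_field >> 8, packet[8:]
```

The inner frame here would carry the inner destination MAC address described in step d (for example, the MAC address of the VBDIF interface on the destination edge node).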
Figure 2-120 Traffic model for subnet communication between VNs through a
firewall
source interface of the border node and encapsulates the packet into a
VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
the border node, respectively. Then the packet is forwarded based on the
underlay route.
d. After the packet arrives at the border node, the border node performs
VXLAN decapsulation and searches for the route to the network segment
of Host 2 in the VN 1 routing table. Because the VPN routing tables of
VN 1 and the external network resource model VN1-Outer import routes
from each other, the route to the network segment of Host 2 can be
found in the VN 1 routing table. The next hop of the packet is the IP
address of GE1/0/1.1 on the firewall. The destination MAC address of the
packet is the MAC address of GE1/0/1.1, and the packet is not
encapsulated into a VXLAN packet.
e. After the packet arrives at the firewall, the firewall allows VN 1 to access
VN 2 based on the mutual access policies and searches for the route to
the network segment of Host 2. The next hop of the packet is the IP
address of VLANIF 12 on the border node. The destination MAC address
of the packet is the MAC address of VLANIF 12, and the packet is not
encapsulated into a VXLAN packet.
f. After the packet arrives at the border node, the border node searches for
the route to Host 2 in the VN 2 routing table. The next hop of the packet
is the IP address of the tunnel source interface of Edge 1. The border
node then encapsulates the packet into a VXLAN packet. The inner
destination MAC address of the packet is the MAC address of VBDIF 20
on Edge 1.
g. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of the border
node and Edge 1, respectively. Then the packet is forwarded based on the
underlay route.
h. After the packet reaches Edge 1, Edge 1 decapsulates the packet by
removing its VXLAN header and searches the VN 2 routing table for the
direct route to the network segment of Host 2. Then Edge 1 directly
forwards the packet out.
i. Host 2 receives the packet from Host 1 through GE0/0/2.
● Users on subnets of different VNs connected to different edge nodes
communicate with each other through the VXLAN tunnels between the edge
nodes and border node. Mutual access traffic is sent to the border node first,
then forwarded to the firewall based on the imported routes of external
networks. The firewall then forwards the traffic between VNs based on
mutual access control policies between security zones.
a. Host 1 and Host 2 are on different subnets. When Host 1 accesses Host 2,
the packet is sent to the gateway first. The destination MAC address of
the packet is the MAC address of VBDIF 10 on the gateway.
b. After the packet reaches Edge 1, Edge 1 searches the VN 1 routing table
for the route to the network segment of Host 2. Because routes have
been imported between the VPN routing tables of VN 1 and the external
network resource model VN1-Outer on the border node, Edge 1 can learn
the VPN route of VN1-Outer imported by the border node from its BGP
peer. Then, Edge 1 finds that the next hop is the IP address of the tunnel
source interface of the border node and encapsulates the packet into a
VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
the border node, respectively. Then the packet is forwarded based on the
underlay route.
d. After the packet arrives at the border node, the border node performs
VXLAN decapsulation and searches for the route to the network segment
of Host 2 in the VN 1 routing table. Because the VPN routing tables of
VN 1 and the external network resource model VN1-Outer import routes
from each other, the route to the network segment of Host 2 can be
found in the VN 1 routing table. The next hop of the packet is the IP
address of GE1/0/1.1 on the firewall. The destination MAC address of the
packet is the MAC address of GE1/0/1.1, and the packet is not
encapsulated into a VXLAN packet.
e. After the packet arrives at the firewall, the firewall allows VN 1 to access
VN 2 based on the mutual access policies and searches for the route to
the network segment of Host 2. The next hop of the packet is the IP
address of VLANIF 12 on the border node. The destination MAC address
of the packet is the MAC address of VLANIF 12, and the packet is not
encapsulated into a VXLAN packet.
f. After the packet arrives at the border node, the border node searches for
the route to Host 2 in the VN 2 routing table. The next hop of the packet
is the IP address of the tunnel source interface of Edge 2. The border
node then encapsulates the packet into a VXLAN packet. The inner
destination MAC address of the packet is the MAC address of VBDIF 20
on Edge 2.
g. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of the border
node and Edge 2, respectively. Then the packet is forwarded based on the
underlay route.
h. After the packet reaches Edge 2, Edge 2 decapsulates the packet by
removing its VXLAN header and searches the VN 2 routing table for the
direct route to the network segment of Host 2. Then Edge 2 directly
forwards the packet out.
i. Host 2 receives the packet from Host 1 through GE0/0/1.
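The route lookups in these walkthroughs amount to longest-prefix matching over a per-VN (VRF) routing table into which routes from another table (a peer VN, or an external network resource model such as VN1-Outer) have been imported. A minimal sketch of that route-leaking lookup, with hypothetical prefixes and next-hop labels:

```python
import ipaddress

def import_routes(dst_table: dict, src_table: dict, next_hop: str) -> None:
    """Leak routes from src_table into dst_table (simplified VPN route import).

    Imported prefixes point at next_hop (e.g. the border node's tunnel
    source interface, or the firewall subinterface), so traffic toward the
    other table's subnets is attracted to the importing device.
    """
    for prefix in src_table:
        dst_table.setdefault(prefix, next_hop)

def lookup(table: dict, ip: str):
    """Longest-prefix match of ip against the routing table."""
    addr = ipaddress.ip_address(ip)
    candidates = [p for p in table if addr in ipaddress.ip_network(p)]
    if not candidates:
        return None
    best = max(candidates, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return table[best]

# Hypothetical example: VN 1 imports VN 2's subnet via the border node.
vn1 = {"10.1.10.0/24": "direct"}   # Host 1's subnet
vn2 = {"10.1.20.0/24": "direct"}   # Host 2's subnet
import_routes(vn1, vn2, next_hop="border-node-tunnel-source")
```

After the import, a lookup in VN 1's table for Host 2's address resolves to the border node's tunnel source, matching step b above.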
Figure 2-121 Traffic model for communication between VNs and external
networks
● Users in a VN access the Internet through the VXLAN tunnel between the
edge node and border node. Traffic is sent to the border node first, then
forwarded to the firewall based on the imported routes of external networks.
The firewall then forwards the packet to the Internet.
a. Host 1 and the Internet are on different subnets. When Host 1 accesses
the Internet, the packet is sent to the gateway first. The destination MAC
address of the packet is the MAC address of VBDIF 10 on the gateway.
b. After the packet reaches Edge 1, Edge 1 searches the VN 1 routing table
for the route to the Internet. Because routes have been imported
between the VPN routing tables of VN 1 and the external network
resource model VN1-Outer on the border node, Edge 1 can learn the VPN
route of VN1-Outer imported by the border node from its BGP peer.
Then, Edge 1 finds that the next hop is the IP address of the tunnel
source interface of the border node and encapsulates the packet into a
VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
the border node, respectively. Then the packet is forwarded based on the
underlay route.
d. After the packet arrives at the border node, the border node performs
VXLAN decapsulation and searches for the route to the Internet in the VN
1 routing table. Because the VPN routing tables of VN 1 and the external
network resource model VN1-Outer import routes from each other, the
route to the Internet can be found in the VN 1 routing table. The next
hop of the packet is the IP address of GE1/0/1.1 on the firewall. The
destination MAC address of the packet is the MAC address of GE1/0/1.1,
and the packet is not encapsulated into a VXLAN packet.
e. After the packet arrives at the firewall, the firewall allows VN 1 to access
the Internet based on the mutual access policies and searches for the
route to the Internet. The firewall then forwards the packet.
● Users in a VN access network service resources through the VXLAN tunnel
between the edge node and border node. Traffic is sent to the border node
first, then forwarded to the gateway in the network management zone based
on the imported routes of the network management zone. The gateway in
the network management zone then forwards the packet to the network
management zone.
a. Host 1 and the network service resource are on different subnets. When
Host 1 accesses the network service resource, the packet is sent to the
gateway first. The destination MAC address of the packet is the MAC
address of VBDIF 10 on the gateway.
b. After the packet reaches Edge 1, Edge 1 searches the VN 1 routing table
for the route to the network service resource. Because routes have been
imported between the VPN routing tables of VN 1 and the network service
resource model VN1-Server on the border node, Edge 1 can learn the
VPN route of VN1-Server imported by the border node from its BGP peer.
Then, Edge 1 finds that the next hop is the IP address of the tunnel
source interface of the border node and encapsulates the packet into a
VXLAN packet.
c. After the encapsulation, the outer source and destination IP addresses of
the packet are the IP addresses of tunnel source interfaces of Edge 1 and
the border node, respectively. Then the packet is forwarded based on the
underlay route.
d. After the packet arrives at the border node, the border node performs
VXLAN decapsulation and searches for the route to the network service
resource in the VN 1 routing table. Because the VPN routing tables of VN
1 and the network service resource model VN1-Server import routes from
each other, the route to the network service resource can be found in the
VN 1 routing table. The next hop of the packet is the IP address of
VLANIF 11 on the gateway in the network management zone. The
destination MAC address of the packet is the MAC address of VLANIF 11,
and the packet is not encapsulated into a VXLAN packet.
e. After the packet arrives at the gateway in the network management
zone, the gateway searches for the route to the network service resource
and forwards the packet.
NOTE
Currently, the overlay network supports only Layer 2 multicast. If Layer 3 multicast is
required, you are advised to deploy multicast services on the underlay network.
For switches running V600R022C00 or later versions, Layer 2 multicast cannot be deployed
using dynamically authorized VLANs.
In dual-border networking, PIM priority needs to be configured to ensure the DR and IGMP
querier are the same device. Otherwise, a large number of multicast packets are lost when
a link switchover occurs.
NOTE
Wi-Fi 6 APs need to be powered by PoE++ switches. Therefore, select appropriate access
switches for power supply based on AP models.
Control packets between the WAC and APs are forwarded through a CAPWAP
tunnel. APs forward service packets of wireless users to the wired side in tunnel
forwarding (centralized forwarding) or direct forwarding (local forwarding) mode.
Tunnel Forwarding
In tunnel forwarding mode, an AP encapsulates the service packets of wireless
users over a CAPWAP tunnel and sends them to the WAC. The WAC then forwards
these packets to other networks. Figure 2-125 shows the traffic forwarding model
adopted when the tunnel forwarding mode is used in this solution.
In tunnel forwarding mode, switches on the links between the WAC and APs do
not need to allow service VLANs, and interfaces on the switches do not need to be
added to such VLANs. This facilitates centralized control and management.
However, the disadvantage is that the service traffic of all wireless users is
centrally forwarded by the WAC, which imposes a heavy workload on the WAC.
Figure 2-125 Service traffic model of wireless users (tunnel forwarding mode)
Direct Forwarding
In direct forwarding mode, an AP directly forwards users' service packets to other
networks without encapsulating them over a CAPWAP tunnel. Figure 2-126 shows
the traffic forwarding model adopted when the direct forwarding mode is used in
this solution.
In direct forwarding mode, the east-west service traffic of local wireless users can
be directly forwarded by the local access switch without passing through the WAC.
However, switches on the links between the WAC and APs need to allow service
VLANs, and interfaces on the switches need to be added to such VLANs, making it
difficult to perform centralized control and management.
Figure 2-126 Service traffic model for wireless users (direct forwarding mode)
Table 2-61 compares the tunnel forwarding mode with the direct forwarding
mode. In the virtualization solution for a large or midsize campus network, the
tunnel forwarding mode that can provide centralized traffic management and
control is recommended, irrespective of which gateway solution is selected. The
subsequent WLAN planning following this section is also designed based on the
tunnel forwarding mode.
Tunnel forwarding
Description: Wireless user service traffic is processed and forwarded by the WAC in a centralized manner.
Advantage: The WAC forwards service traffic in a centralized manner, ensuring high security and facilitating centralized traffic management and control.
Disadvantage: Service traffic must be forwarded by the WAC, reducing packet forwarding efficiency and burdening the WAC.
Configuration page on iMaster NCE-Campus. In the WAC list, select the row
where the WAC resides, and click Add in the lower right corner to add APs for
management by the WAC.
SSID Planning
In most cases, service set identifiers (SSIDs) are planned based on user roles or
service types. For example, three SSIDs can be planned for three types of wireless
services in a large-scale business scenario, as shown in Figure 2-127. Employee is
used for wireless office access of employees. Guest is used for Internet access of
guests. Dumb is used for wireless access of dumb terminals such as printers. For
an SSID that is not intended for end users, for example, the SSID used for access
of printers, you can configure SSID hiding to prevent the SSID from being detected
by end users.
On a large or midsize campus network, a large number of STAs exist and require
area-specific policies. Typically, the SSID:VLAN = 1:N mapping policy is used.
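The SSID:VLAN = 1:N policy can be sketched as a lookup keyed by SSID and area (for example, the AP group a STA associates through), so one SSID serves many areas with area-specific service VLANs. All SSID names, AP group names, and VLAN IDs below are hypothetical illustrations, not values from this design.

```python
# Sketch of an SSID:VLAN = 1:N mapping: one SSID, one service VLAN per
# AP group (area). Names and VLAN IDs are illustrative only.
SSID_VLAN_MAP = {
    "Employee": {"building-A": 100, "building-B": 101, "building-C": 102},
    "Guest":    {"building-A": 200, "building-B": 201, "building-C": 202},
}

def service_vlan(ssid: str, ap_group: str) -> int:
    """Return the service VLAN assigned to a STA for this SSID and area."""
    return SSID_VLAN_MAP[ssid][ap_group]
```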
On a WLAN using the "WAC + Fit AP" architecture, the WAC serves as the wireless
authentication control point. In this solution, the deployment process of the
wireless authentication control point varies according to the WAC type.
NOTE
Security Policy | Characteristics
WPA/WPA2 | WPA and WPA2 provide almost the same security. WPA/WPA2 has two
editions: enterprise edition and personal edition.
● WPA/WPA2-Enterprise: uses a RADIUS server and the Extensible
Authentication Protocol (EAP) to provide IEEE 802.1X network access
control. Users provide authentication information, including the user
name and password, and are authenticated by an authentication
server (generally a RADIUS server). This edition applies to scenarios
that have high requirements on network security.
● WPA/WPA2-Personal: adopts a simpler mechanism, that is, the WPA/
WPA2 pre-shared key (WPA/WPA2-PSK) mode. This edition does not
require an authentication server and applies to scenarios that have
low requirements on network security.
NOTE
1. Intelligent radio calibration and traditional radio calibration cannot be deployed for the
APs in the same calibration region at the same time.
2. Intelligent radio calibration needs to be used together with iMaster NCE-CampusInsight.
Ensure that APs can communicate with iMaster NCE-CampusInsight.
3. AP load prediction is applicable to scenarios where service traffic is relatively stable and
historical data is regular, such as the office automation (OA) scenario. When there is a
sudden increase or decrease in service traffic, for example, when network expansion or
large-scale personnel relocation occurs (such as in stadiums), AP load prediction cannot be
implemented.
4. Only the 5 GHz frequency band is supported when increasing the channel bandwidth of
high-load APs. In addition, if the number of available channels is less than six (for example,
in some countries, the number of available 5 GHz channels is small; if dual-5G is enabled,
the 40 MHz or higher channel bandwidth cannot be completely staggered), the channel
bandwidth cannot be increased on high-load APs.
Table 2-65 Comparison between traditional radio calibration and intelligent radio
calibration
Calibration Mode | Application Scenario | Advantage | Disadvantage
uses the load balancing algorithm to measure the dual-band capability of the
STA, AP load, and AP signal quality, and steers the STA to a better AP.
● Dynamic load balancing: After a STA connects to an AP, the WAC checks
whether the number of STAs on this AP reaches the load balancing threshold.
Then, the WAC determines whether to steer the STA to a neighboring AP that
meets the load balancing conditions based on the load balancing algorithm.
Static load balancing limits the maximum number of AP radios to 16 and allows
only radios on the same frequency band to join a load balancing group.
Additionally, a load balancing group needs to be manually specified. In practice,
dynamic load balancing is recommended. In this mode, APs collect neighbor
information and steer STAs to proper APs based on the load balancing status,
dynamically implementing better STA access.
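The dynamic load balancing decision described above can be sketched as a threshold check followed by selection of the least-loaded qualifying neighbor. This is a simplified illustration: the actual algorithm also weighs the STA's dual-band capability and AP signal quality, and the threshold value here is hypothetical.

```python
def steer_target(current_ap: str, loads: dict, neighbors: dict,
                 threshold: int) -> str:
    """Decide which AP a newly associated STA should land on.

    loads maps AP name -> number of associated STAs; neighbors maps AP
    name -> APs eligible for load balancing with it. Below the threshold
    the STA stays put; above it, the STA is steered to the least-loaded
    neighbor that is less loaded than the current AP.
    """
    if loads[current_ap] < threshold:
        return current_ap
    candidates = [ap for ap in neighbors.get(current_ap, [])
                  if loads[ap] < loads[current_ap]]
    return min(candidates, key=lambda ap: loads[ap], default=current_ap)
```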
NOTE
1. The two frequency bands of an AP enabled with the band steering function must use the
same SSID and security policy. The band steering function cannot be deployed on a single-
radio AP.
2. To allow STAs to preferentially associate with the 5 GHz radio and achieve better access
performance, configure higher transmit power for the 5 GHz radio than for the 2.4 GHz radio.
In Layer 2 roaming, the service VLAN and gateway remain unchanged after STA
roaming, and traffic can be directly forwarded on the new AP. During WLAN
deployment, Layer 2 roaming is recommended. In the case of a single WAC, the
user gateway can be deployed on the WAC or a core switch at an upper layer. In
the case of multiple WACs (deployed in off-path or in-path mode), it is
recommended that the gateway be deployed on a core switch at an upper layer,
and inter-WAC roaming (still Layer 2 roaming) be used. During Layer 2 roaming,
the gateway remains unchanged, and either tunnel forwarding or direct
forwarding can be adopted. Select a forwarding mode based on service
requirements.
In Layer 3 roaming, the VLAN and gateway of a STA both change after roaming,
and the STA moves between Layer 3 networks. If different VLANs and gateways
are deployed in different buildings or areas, Layer 3 roaming is used. After
roaming, the STA IP address remains unchanged. On the new network, this IP
address cannot directly communicate with the corresponding gateway, and thus
traffic cannot be forwarded. Therefore, a tunnel must be established between
WACs to forward traffic of the roaming STA to the original gateway. In this case,
an inter-WAC mobility group must be configured, with a tunnel established
between WACs to forward STA traffic. If Layer 3 roaming is required on the
network, the tunnel forwarding mode is recommended because this mode does
not require the setup of a large number of tunnels between APs and allows
traffic to be forwarded only through the roaming tunnel between WACs.
During inter-WAC roaming, especially inter-WAC Layer 3 roaming, service traffic
generated by a STA needs to be redirected to the home WAC through the tunnel
between WACs for forwarding. This complicates STA roaming, and consumes more
WAC resources and inter-WAC link resources. Therefore, in actual deployments,
you are advised to properly plan the WLAN to avoid possible inter-WAC roaming.
For example, configure APs in the same building or at the same site to be
managed by the same WAC. If inter-WAC roaming is inevitable, properly plan the
number of members in the mobility group to reduce resource consumption caused
by user information synchronization between mobility group members.
Key Points in Designing for the Packet Loss Rate and Handover Delay During
Roaming
Apart from the basic roaming functions, the packet loss rate and handover delay
during STA roaming are also important indicators to consider. For example, in
industrial manufacturing scenarios, the automated guided vehicles (AGVs) used in
warehouses and factories require the network to deliver a packet loss rate
of less than 1% and a roaming delay of less than 100 ms.
To this end, when designing wireless roaming, you are advised to:
● Ensure signal coverage continuity. That is, ensure no coverage hole exists in
areas where roaming is required. Keep a 10% to 15% signal overlap between
the coverage areas of neighboring APs to ensure smooth STA roaming
between the APs.
● Enable the fast roaming function to reduce the handover delay and
minimize the packet loss probability.
Huawei WLAN supports pairwise master key (PMK) fast roaming and 802.11r fast
roaming. Table 2-66 lists the handover delay of STAs in different roaming modes.
Fast roaming can be enabled as required.
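PMK fast roaming cuts the handover delay because the pairwise master key negotiated during the STA's first full 802.1X/EAP authentication is cached; on reassociation, the cached key is reused and only the four-way handshake runs. The following is a schematic sketch of that caching effect only, not the actual key hierarchy (real PMK caching is keyed per STA and authenticator, and cached keys expire):

```python
# Schematic sketch of PMK caching: the expensive full 802.1X/EAP exchange
# runs once; later reassociations reuse the cached key (fast roaming).
pmk_cache: dict = {}   # STA MAC -> cached PMK

def full_8021x_auth(sta_mac: str) -> bytes:
    """Stand-in for a full EAP exchange with the RADIUS server (slow path)."""
    return b"PMK-for-" + sta_mac.encode()

def associate(sta_mac: str):
    """Return (pmk, fast_roamed). A cache hit skips full authentication."""
    if sta_mac in pmk_cache:
        return pmk_cache[sta_mac], True      # only the 4-way handshake runs
    pmk = full_8021x_auth(sta_mac)
    pmk_cache[sta_mac] = pmk
    return pmk, False
```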
Smart Roaming
Dumb terminals and some outdated STAs have low roaming aggressiveness. As a
result, they stick to the initially connected APs regardless of the long distance from
the APs, weak signals, or low rates. The STAs do not roam to neighboring APs with
better signals. Such STAs are generally called sticky STAs. The negative impact of
sticky STAs is described as follows:
● The service experience of a sticky STA is poor because the STA stays
associated with an AP that provides weak signals. As a result, the channel
rate decreases significantly.
● The overall performance of wireless channels is affected. A sticky STA may
encounter frequent packet loss or retransmission caused by poor signal
quality and low rates, and therefore occupies the channel for a long time. As
a result, other STAs cannot obtain sufficient channel resources.
To reduce the impact of sticky STAs on a WLAN, you are advised to enable smart
roaming. The smart roaming function intelligently identifies sticky STAs on the
network and proactively directs them to APs with better signals in a timely
manner. This function improves user experience in terms of the following aspects:
● Better performance: Smart roaming can direct poor-signal STAs to APs with
better signals, improving user service experience and overall channel
performance.
● Load balancing: Smart roaming ensures that each STA is associated with the
nearest AP, achieving inter-AP load balancing.
AI Roaming
In smart roaming, APs scan STAs on their operating channels, which may lead to
the following problems that affect the roaming effect:
● The operating radio of an AP is used to scan STAs. If no STA is scanned, the
generated roaming neighbor information may be incomplete, affecting
roaming steering.
● A unified roaming steering mechanism is used during smart roaming, without
distinguishing STAs. As the roaming sensitivity varies with different STAs, the
mechanism may fail in some cases.
● During smart roaming, the Received Signal Strength Indicator (RSSI) of a STA
is detected by the AP, but not in the opposite way. Therefore, the roaming
neighbor to which the STA is steered may not be the optimal one.
The AI roaming feature can be deployed to resolve the preceding problems. As
illustrated in Figure 2-134, AI roaming utilizes intelligent analysis algorithms to
profile the roaming capabilities of STAs, identify such capabilities of different STA
types and operating system versions, and provide targeted roaming steering for
the STAs, improving the roaming steering success rate. Combined with the
independent scanning radio (a third radio) feature, AI roaming uses a dedicated
radio for real-time STA scanning and obtains, from STAs' RSSI measurement
packets, the AP RSSI that the STAs detect. In this way, more complete and
effective roaming neighbor information is available, so that the optimal AP for a
STA to roam to can be identified, enhancing user experience during roaming.
When deploying AI roaming, ensure that roaming profiles of STAs are available,
which are used to obtain STAs' roaming characteristics for differentiated steering.
The system has built-in STA profile files and can dynamically generate roaming
profiles of STAs on the live network through online real-time learning. In addition,
AI roaming depends on terminal identification. That is, the roaming profile of a
STA can be matched only after the STA is identified. The WAC has a built-in
terminal fingerprint database, which can help identify STAs, without the need to
work with iMaster NCE-Campus.
AI roaming depends on hardware and feature deployment. Pay attention to the
following when deploying this feature:
● AI roaming depends on the terminal identification capability of the WAC. The
WAC has a built-in terminal identification database that cannot be upgraded
currently. Therefore, some new STAs may not be identified.
● AI roaming needs to work with the independent scanning radio (a third
radio), that is, the AP with this feature deployed must support such a radio.
Some AP models support an independent scanning radio only after they have
a right-to-use (RTU) license loaded. Therefore, this feature can be deployed
only when the hardware conditions are met.
● AI roaming supports roaming steering only for STAs working on the 5 GHz
frequency band. Therefore, a 5 GHz SSID must be deployed on the network.
● AI roaming is mutually exclusive with the PMF feature and therefore cannot
work on a device with PMF enabled.
Wireless tag location technology uses radio frequency identification (RFID) devices
and a location system to locate a specific target via a WLAN. This technology
involves locating Wi-Fi, Bluetooth, and UWB tags. To implement wireless tag
location, an AP collects and sends tag information to a location server. The
location server then calculates the physical location of the tag and sends the
calculated data to a third-party device so that the user can view the location of
the target tag through a map or table. Huawei's end-to-end wireless tag location
solution is provided in cooperation with third-party vendors in the industry.
Wireless terminal location involves locating Wi-Fi and Bluetooth terminals.
● Wi-Fi terminal location technology locates terminals based on wireless signal
strength information in the surrounding environment collected by APs. To be
specific, an AP reports the collected wireless signal information transmitted by
a Wi-Fi terminal to a location server. The location server calculates the
location of the terminal according to the obtained wireless signal information
as well as the AP's location, and then displays the terminal's location to the
user. The Wi-Fi terminal location solution can be implemented by using
Huawei WLAN devices (the location engine of iMaster NCE-CampusInsight
serves as the location server) or by cooperating with third-party partners.
For details about the principles of the cooperation solution between Huawei and
third-party location vendors as well as device selection, see related documents at:
https://e.huawei.com/en/material/bookshelf/bookshelfview/202004/03160039
A security zone identifies a network, and a firewall connects networks. Firewalls use
security zones to divide networks and mark the routes of packets. When packets
travel between security zones, security check is triggered and corresponding
security policies are enforced. Security zones are isolated by default.
Generally, there are three types of security zones: trusted, DMZ, and untrusted.
● Trusted zone: refers to the network of internal users.
● DMZ: demilitarized zone, which refers to the network of internal servers.
● Untrusted zone: refers to untrusted networks, such as the Internet.
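The default isolation between zones can be sketched as follows: traffic crossing from one zone to another is matched against explicit interzone policies and dropped if none matches, while intra-zone traffic triggers no interzone check. The zone names follow the list above; the policy rules themselves are illustrative placeholders, and real firewalls match far richer tuples (addresses, services, users).

```python
# Illustrative interzone policies: (source zone, destination zone, action).
POLICIES = [
    ("trust", "untrust", "permit"),   # internal users -> Internet
    ("untrust", "dmz", "permit"),     # public users -> internal servers
]

def interzone_action(src_zone: str, dst_zone: str) -> str:
    """Resolve the action for traffic moving between two security zones."""
    if src_zone == dst_zone:
        return "permit"               # no interzone check is triggered
    for s, d, action in POLICIES:
        if (s, d) == (src_zone, dst_zone):
            return action
    return "deny"                     # zones are isolated by default
```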
Figure 2-136 Firewall security zone division when user gateways are located
inside the fabric
With equal-cost multi-path routing (ECMP) routes, the firewall can forward the
access traffic to Server 1 over two different paths. Path 1, however, is not the
optimal path; path 2 is the desired one. After you configure ISP-based traffic
steering, when an intranet user accesses Server 1, the firewall selects an outbound
interface based on the ISP network where the destination address resides to
enable the access traffic to reach Server 1 through the shortest path, that is, path
2 in Figure 2-138.
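ISP-based traffic steering, in other words, selects the outbound interface whose ISP's advertised address set contains the destination, instead of spraying flows across ECMP paths. A minimal sketch; the prefixes and interface names are hypothetical, whereas a real deployment loads the address sets published by each carrier:

```python
import ipaddress

# Hypothetical per-ISP address sets mapped to egress interfaces.
ISP_PREFIXES = {
    "isp1-uplink": ["198.51.100.0/24"],
    "isp2-uplink": ["203.0.113.0/24"],   # assume Server 1's ISP (path 2)
}

def outbound_interface(dst_ip: str, default: str = "isp1-uplink") -> str:
    """Select the egress interface by the ISP network of the destination."""
    addr = ipaddress.ip_address(dst_ip)
    for iface, prefixes in ISP_PREFIXES.items():
        if any(addr in ipaddress.ip_network(p) for p in prefixes):
            return iface
    return default
```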
The network configuration consists of two parts: core switch -> firewall and
firewall -> core switch -> egress router.
● Core switch -> firewall: On the core switch, L3 egress is configured for the
fabric for interconnection with the southbound interfaces of the firewalls. The
firewalls are deployed in VRRP hot standby (HSB) mode and connect to the
core switch through the southbound interfaces. It is recommended that static
routes be used between firewalls' southbound interfaces and the core switch.
● Firewall -> core switch -> egress router: It is recommended that OSPF be
configured to implement communication. When configuring OSPF on
firewalls, you need to import the static routes from the firewalls destined for
the campus intranet. When configuring OSPF on egress routers, you need to
import the default routes from the egress routers destined for the external
network. Interfaces connecting the core switch to the firewalls and egress
routers need to be added to additional VPN instances for isolation from other
traffic.
Figure 2-140 Using static routing between the firewall and the core switch
● Dynamic routing
If VRRP is not deployed on firewalls, dynamic routing can be used to
implement automatic switching of the service traffic path. In this case, you
need to run the hrp standby-device command on the standby firewall to set
it to the standby state. As shown in Figure 2-141, OSPF is used as an
example. When both the active and standby firewalls work properly, the
active firewall advertises routes based on the OSPF configuration, and the
cost of the OSPF routes advertised by the standby firewall is adjusted to
65500 (default value, which can be changed). In such a scenario, the core
switch selects a path with a smaller cost to forward traffic, and all service
traffic is diverted to the active firewall for forwarding.
If the active firewall is faulty, the standby firewall switches to the active state.
In addition, the VRRP Group Management Protocol (VGMP) adjusts the cost
of the OSPF routes advertised by the original active firewall to 65500 and
that of the OSPF routes advertised by the new active firewall (the original
standby firewall) to 1. After route convergence is complete, the service
traffic path is switched to the new active firewall.
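The failover above reduces to ordinary OSPF path selection on the core switch: it always prefers the firewall advertising the lower cost, so swapping the advertised costs redirects all service traffic. A minimal sketch, using the cost values from the description (firewall names are placeholders):

```python
def next_hop(advertised_costs: dict) -> str:
    """Core switch's view: prefer the firewall advertising the lower OSPF cost."""
    return min(advertised_costs, key=advertised_costs.get)

def vgmp_failover(costs: dict) -> dict:
    """Schematic VGMP cost swap after the standby firewall becomes active."""
    hi, lo = 65500, 1
    return {fw: (lo if c == hi else hi) for fw, c in costs.items()}
```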
Figure 2-141 Using dynamic routing between the firewall and the core switch
For details about the deployment when using different routing protocols between
the firewall and the core switch, see the external network design in 2.2.4.3 Fabric
Network Design.
Table 2-68 describes the recommended security policy design for common zones.
● If private IP addresses are used on the intranet, source NAT technology needs
to be used to translate source IP addresses of packets to public IP addresses
when user traffic destined for the Internet passes through the firewall.
Network Address Port Translation (NAPT) is recommended to translate both
IP addresses and port numbers, which enables multiple private addresses to
share one or more public addresses. NAPT applies to scenarios with a few
public addresses but many private network users who need to access the
Internet.
● If intranet servers are used to provide server-related services for public
network users, destination NAT technology is required for translating
destination IP addresses and port numbers of the access traffic of public
network users into IP addresses and port numbers of the servers in the
intranet environment.
● When two firewalls operate in VRRP hot standby (master/backup) mode, IP
addresses in the NAT address pool may be on the same network segment as
the virtual IP addresses of the VRRP group configured on the uplink interfaces
of the firewalls. If this is the case, after the return packets from the external
network arrive at the PE, the PE broadcasts ARP packets to request the MAC
address corresponding to the IP address in the NAT address pool. The two
firewalls in the VRRP group have the same NAT address pool configuration.
Therefore, the two firewalls send the MAC addresses of their uplink interfaces
to the PE. In this case, you need to associate the hot standby status (master/
backup) of the firewalls with the NAT address pool on each firewall, so that
only the master firewall in the VRRP group responds to the ARP requests
initiated by the PE.
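The NAPT behavior described in the first bullet — many private (IP, port) pairs sharing one public address by translating both address and port — can be sketched as follows. The addresses, port range, and class name are illustrative assumptions, not part of the solution.

```python
# Minimal NAPT translation-table sketch: private (IP, port) pairs share
# one public IP and are distinguished by the translated port.
import itertools

class Napt:
    def __init__(self, public_ip, port_base=20000):
        self.public_ip = public_ip
        self._ports = itertools.count(port_base)
        # (private_ip, private_port) -> (public_ip, public_port)
        self.table = {}

    def translate_out(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = (self.public_ip, next(self._ports))
        return self.table[key]

napt = Napt("203.0.113.10")
a = napt.translate_out("10.1.1.5", 40000)
b = napt.translate_out("10.1.1.6", 40000)  # same private port, different host
assert a[0] == b[0] == "203.0.113.10"      # one shared public address
assert a[1] != b[1]                        # distinguished by translated port
```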
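The destination NAT step for published intranet servers can be sketched in the same spirit; the mappings below are hypothetical examples.

```python
# Sketch of destination NAT: translate the public (IP, port) that external
# users reach into an internal server's (IP, port). Mappings are examples.

DNAT_MAP = {
    ("203.0.113.10", 443): ("10.1.2.20", 8443),  # published HTTPS service
    ("203.0.113.10", 25):  ("10.1.2.21", 25),    # published mail service
}

def translate_in(dst_ip, dst_port):
    # Unmapped destinations are left untranslated (and would normally be
    # dropped by the security policy).
    return DNAT_MAP.get((dst_ip, dst_port), (dst_ip, dst_port))

assert translate_in("203.0.113.10", 443) == ("10.1.2.20", 8443)
assert translate_in("203.0.113.10", 80) == ("203.0.113.10", 80)
```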
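The third bullet — associating the hot standby state with the NAT address pool so that only the master answers ARP requests from the PE — can be modeled as below. The pool addresses and state names are illustrative assumptions.

```python
# Sketch: only the firewall whose hot standby state is "master" answers
# ARP requests for NAT address pool addresses, so the PE learns exactly
# one MAC address for each pool address.

NAT_POOL = {"203.0.113.10", "203.0.113.11"}

def answer_arp(hrp_state, target_ip, own_mac):
    """Return the MAC address to reply with, or None to stay silent."""
    if target_ip not in NAT_POOL:
        return None               # not a pool address; out of scope here
    if hrp_state != "master":
        return None               # backup suppresses its reply
    return own_mac                # master answers for the pool address

assert answer_arp("master", "203.0.113.10", "aa-aa") == "aa-aa"
assert answer_arp("backup", "203.0.113.10", "bb-bb") is None
```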
The free mobility solution is recommended for policy control on large- and
medium-sized campus networks. If the existing campus network of the customer
does not support the free mobility solution, use the traditional NAC solution.
Source/Destination Security Group | A | B | C | D
B | Empty | NA | Empty | NA
D | Empty | NA | Empty | NA
NOTE
When terminal identification is used together with the VLAN authorization policy, you can
disable pre-connection in 802.1X and MAC address authentication scenarios to prevent IP
address re-assignment to terminals.
In dumb terminal access scenarios, you can deploy the terminal anomaly
detection function to detect, in real time, dumb terminals that are spoofed or
attacked and block their access to the network, ensuring network security.
NOTE
● The terminal anomaly detection function can be configured only on switches using
commands, but not on iMaster NCE-Campus.
● The terminal anomaly detection function applies only to wired access terminals.
● The terminal anomaly detection function supports three types of dumb terminals: IP
phones, IP cameras, and printers.
Related Products
Table 2-76 lists the related products involved in the Huawei iConnect terminal
access control solution in the current version of the CloudCampus Solution.
Table 2-76 Products involved in the Huawei iConnect terminal access control
solution
IoT Terminal | Authentication Server | PKI Server | IoT Platform
Table 2-77 Different phases when Huawei iConnect terminals access the
campus network
automatically apply for and load digital certificates. Table 2-79 describes the
schemes for Huawei iConnect terminals to automatically apply for digital
certificates in different scenarios.
Table 2-79 Schemes for Huawei iConnect terminals to automatically apply for
digital certificates in different scenarios
1. The campus administrator purchases SIM cards and 5G IoT terminals, and
then manually imports information such as the international mobile
subscriber identity (IMSI) and international mobile equipment identity (IMEI)
of the terminals to iMaster NCE-Campus (functioning as the authentication
server), or synchronizes such information from the IoT platform deployed by
the enterprise to iMaster NCE-Campus.
2. 5G terminals equipped with SIM cards access the 5G core network after
successful 5G-AKA authentication, and then create PDU sessions.
3. The session management function (SMF) module on the 5G core network
triggers RADIUS PAP or CHAP authentication, and terminal information such
as the IMSI and IMEI carried in RADIUS packets is reported to iMaster NCE-
Campus for authentication. Because the communication between the SMF
and iMaster NCE-Campus involves sensitive information, data flows between
them are transmitted over a private line and encrypted using IPsec tunnels.
4. After the authentication is successful on iMaster NCE-Campus, the 5G
terminal can access the enterprise's intranet resources through the 5G
network.
5. After the authentication is successful, iMaster NCE-Campus delivers the
security policy for 5G terminals to the MSCG.
NOTE
● Currently, only iMaster NCE-Campus can function as the authentication server to work
with the carrier's SMF to perform second authentication on 5G terminals and work with
the MSCG.
● 5G terminals in the current solution refer to IoT terminals only and cannot roam
between 5G base stations.
● In the current solution, 5G terminals cannot smoothly switch from a carrier network to
a campus Wi-Fi network.
● Air interface security: Identifies and defends against attacks such as rogue
APs, rogue STAs, unauthorized ad-hoc networks, and DoS attacks.
● STA access security: Ensures the validity and security of STAs' access to the
WLAN.
● Service security: Protects service data of authorized users from being
intercepted by unauthorized users during transmission.
● To defend against attacks, you are advised to enable the attack detection
function in public areas and student dormitories with high security
requirements. This function detects flood, weak initialization vector (IV),
and spoofing attacks, automatically adds attackers to the dynamic blacklist,
and notifies the administrator through alarms.
refer to 2.2.8.2.4 Core Layer if the aggregation switch functions as the user
gateway or authentication point.
If a large number of ARP Request packets are sent to the main control board
for processing, the CPU usage of the main control board increases and other
services cannot be processed promptly.
The optimized ARP reply function addresses this issue. After this function is
enabled, the interface card directly responds to ARP requests if the ARP Request
packets are destined for the local interface of the switch, helping defend against
ARP flood attacks. This function is applicable to the scenario where a modular
switch is configured with multiple interface cards or fixed switches are stacked.
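The dispatch decision behind the optimized ARP reply function can be sketched as follows; the return values and address sets are illustrative assumptions, not device internals.

```python
# Sketch of the optimized ARP reply idea: if an ARP Request targets an
# address owned by the local interface card, the card answers directly
# instead of punting the packet to the main control board.

def handle_arp_request(target_ip, local_card_ips):
    if target_ip in local_card_ips:
        return "replied-by-interface-card"   # no main-control-board CPU cost
    return "punted-to-main-control-board"

card_ips = {"10.1.1.1", "10.1.2.1"}
assert handle_arp_request("10.1.1.1", card_ips) == "replied-by-interface-card"
assert handle_arp_request("10.9.9.9", card_ips) == "punted-to-main-control-board"
```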
Network status | Traffic models of various services and E2E forwarding paths of
each service, including forwarding paths in normal and abnormal conditions |
Determine the network location where a QoS policy is to be deployed.
Figure 2-151 shows the deployment position of the SAC function on a large or
midsize campus network. Application traffic of wired users is identified by access
switches, and that of wireless users is identified by APs.
The applications that can be identified by the SAC function depend on the
signature database supported by a device. The device can accurately identify some
mainstream applications that are covered by its signature database.
However, private applications of enterprises are used in actual scenarios. If these
private applications are not covered by the signature database supported by the
device, the device cannot effectively identify these applications. In this case, you
can define application identification rules to identify private applications based on
key 5-tuple or URL information. The following figure is an example of customizing
an application identification rule.
NOTE
● If user-defined rules conflict with the rules in the signature database, the device uses
the user-defined rules for application identification.
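The matching order described above — user-defined rules checked before the signature database — can be sketched as follows. The rule shape, application names, and addresses are illustrative assumptions, not the device's data model.

```python
# Sketch of SAC application identification where user-defined 5-tuple
# rules take precedence over the built-in signature database.

SIGNATURE_DB = {("udp", 443): "QUIC"}   # simplified built-in signatures

USER_RULES = [
    # Hypothetical private application identified by destination IP/port.
    {"proto": "tcp", "dst_ip": "10.1.2.20", "dst_port": 8443, "app": "ERP-Portal"},
]

def identify(proto, dst_ip, dst_port):
    for rule in USER_RULES:              # user-defined rules are checked first
        if (rule["proto"], rule["dst_ip"], rule["dst_port"]) == (proto, dst_ip, dst_port):
            return rule["app"]
    return SIGNATURE_DB.get((proto, dst_port), "unknown")

assert identify("tcp", "10.1.2.20", 8443) == "ERP-Portal"
assert identify("udp", "198.51.100.7", 443) == "QUIC"
```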
student union members) in key places (such as conference rooms, exam rooms
and surrounding places) at key moments (such as exams). In this case, experience
analysis based on application identification mentioned above cannot meet the
requirements, because the specifications may be insufficient to support experience
analysis for all key applications. To this end, experience analysis based on security
groups or both security groups and applications is provided. Security groups are
used to authorize authenticated users based on 5W1H, so that information
including user identities, access locations (switches the users connect to), and
access time can be identified.
NOTE
Security groups are supported for wireless users only in native WAC scenarios.
NOTE
The preceding figure shows only the monitoring of traffic in one direction. The monitoring
of traffic in the other direction needs to be configured in the reverse direction using the
same logic.
iPCA 2.0 depends on NTP-based time synchronization and supports only two-way delay
measurement, which requires that the upstream and downstream flows be transmitted
along the same path.
iPCA 2.0-based delay measurement is inaccurate in the presence of out-of-order packets.
Only the S5731-H, S5731-H-K, S5731-S, S5731S-S, S5731S-H, S5732-H, S5732-H-K, S6730-
H, S6730-H-K, S6730S-H, S6730-S, S6730S-S, and modular switches installed with X series
cards support delay measurement.
iPCA 2.0 is not applicable to multicast scenarios.
As illustrated in the preceding figure, when deploying iPCA 2.0, you are advised to
specify mid-points and out-points in advance and enable AutoDetect on them. In
this way, when adding an application flow measurement task, you only need to
specify the application name on in-points, but not on mid-points or out-points.
Application identification must be deployed on the node where the in-point for
flow measurement resides. In this manner, application identification data can be
used to associate an application name with 5-tuple information, which is then
used for traffic matching.
Without application identification data, the nodes where the mid-point and out-
point reside need to automatically identify the packets to be measured and trigger
flow creation and measurement based on the packets colored by the node where
the in-point resides.
If two-way measurement is performed on the node where the out-point resides,
the received reverse flow is not colored because no application identification data
is available there. The 5-tuple information of the reverse flow can be obtained
only from the forward flow. After this information is obtained, an ACL is
automatically delivered to match the 5-tuple and trigger flow coloring, creation,
and measurement.
The colored reverse flow is processed in the same way as the forward flow.
Subsequent nodes identify the color bit in the flow to trigger flow creation and
measurement.
NOTE
The fault demarcation capability based on application identification can be used only for
long flows, whose duration must exceed two iPCA 2.0 reporting periods. As the minimum
reporting period can be set to 10 seconds, only flows lasting longer than 20 seconds can
be displayed on the analyzer.
Only the S5731-H, S5731-H-K, S5731-S, S5731S-S, S5731S-H, S5732-H, S5732-H-K, S6730-
H, S6730-H-K, S6730S-H, S6730-S, S6730S-S, and modular switches installed with X series
cards support fault demarcation based on application identification.
The application identification and traffic statistics collection functions cannot be configured
on a switch interface where iPCA 2.0 is configured. Therefore, the in-point can be
configured only on the uplink interface. As a result, packet loss on access switches cannot
be detected.
Other constraints are the same as those for 5-tuple-based fault demarcation.
● Hop-by-hop fault demarcation based on security groups and applications
The customer wants to preferentially guarantee experience of key applications
(such as video conferencing and email) for VIP users (such as executives and
student union members) in key places (such as conference rooms, exam rooms
and surrounding places) at key moments (such as exams). In this case, fault
demarcation based on application identification mentioned above cannot meet
the requirements, because the specifications may be insufficient to support fault
demarcation for all key applications. To this end, fault demarcation based on
security groups or both security groups and applications is provided. Security
groups are used to authorize authenticated users based on 5W1H, so that
information including user identities, access locations (switches the users connect
to), and access time can be identified.
NOTE
Security groups are supported for wireless users only in native WAC scenarios.
VoIP data flow | Real-time voice calls over IP networks. The network must provide low latency and low jitter to ensure service quality. | Very low | Very low | Very low
Voice signaling | Signaling protocols for controlling VoIP calls and establishing communication channels, for example, SIP, H.323, H.248, and Media Gateway Control Protocol (MGCP). Signaling protocols have a lower priority than VoIP data flows because call failure is often considered worse than intermittent voices. | Low | Low | Permit
Multimedia conferencing | Multiple parties can share camera feeds and screens over IP networks. Protocols or applications can adapt to different network quality levels by adjusting the bitrate (image definition) to ensure smoothness. | Low or medium | Very low | Low
Streaming media | Online audio and video streaming. Audio and video programs are made in advance and then cached on local terminals before being played. Therefore, the requirements on the network latency, packet loss, and jitter are reduced. | Low or medium | Medium | Permit
Delay-sensitive data services | Data services that are sensitive to delay. For example, long delay on an online ordering system may reduce the revenue and efficiency of enterprises. | Low or medium | Low | Permit
Low-priority services | Services that are not important to enterprises, such as social network and entertainment applications. | High | High | Permit
NOTE
● In the multi-border node scenario, border nodes can be deployed in ring networking
mode, as described in Figure 2-162. If a non-border node exists on a ring network, the
non-border node cannot function as an external gateway to connect to the external
network.
● In the multi-border node scenario, only two border nodes can share the same egress.
NOTE
In the scenario where distributed VXLAN gateways are deployed and 802.1X users
go online in batches, you are advised to configure CAR for protocol packets
according to Table 2-84 to prevent the impact of excess user login on the network
and ensure network reliability.
(1) IPv4 services only: You are advised to configure a limit of at most 150 users
going online per second (recommended split: 60 wired users and 90 wireless
users). The numbers can be adjusted according to the service requirements on the
live network.
(2) IPv4/IPv6 dual-stack services: You are advised to configure a limit of at most
100 users going online per second (recommended split: 40 wired users and 60
wireless users). The numbers can be adjusted according to the service
requirements on the live network.
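The per-second admission split above (IPv4 only: 60 wired + 90 wireless = 150 logins per second) can be modeled with a simple per-second counter; this is a conceptual sketch, not the device's CAR implementation, and the function names are assumptions.

```python
# Sketch of the recommended cap on 802.1X user login rate: counters are
# reset every second; logins beyond the per-type cap wait for the next
# second. Values follow the IPv4-only recommendation.

LIMITS = {"wired": 60, "wireless": 90}   # recommended per-second split

def admit(counts, access_type):
    """Admit a login this second, or defer it once the cap is reached."""
    if counts.get(access_type, 0) >= LIMITS[access_type]:
        return False
    counts[access_type] = counts.get(access_type, 0) + 1
    return True

counts = {}  # reset at the start of each one-second interval
admitted = sum(admit(counts, "wired") for _ in range(100))
assert admitted == 60                    # wired logins capped at 60 per second
assert sum(LIMITS.values()) == 150       # total matches the IPv4-only guidance
```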
Context
On a large or midsize campus network, core switches are managed by the
controller using commands. It is recommended that aggregation and access
switches be managed in DHCP mode and WACs go online using commands. The
following describes the precautions for the deployment of aggregation/access
switches and WACs in the dual-border node scenario. For other details, see 2.3.3.2
Deployment Design.
Figure 2-164 VRRP group for dual core switches based on the auto-negotiated
management VLAN subnet
In the dual-border node scenario, standalone WACs are connected to core switches
in off-path mode, and the single-homed networking is used. The WACs are
brought online using commands, management gateways are deployed on the core
switches, and VRRP is used to enhance reliability. When deploying VRRP on the
management gateways, you need to associate the VRRP group with the uplink
route to the southbound network segment of the network management zone.
Figure 2-167 shows a networking example. For details about how to prevent a
Layer 2 loop on a single-homed network, see Precautions for Deploying
Aggregation and Access Switches.
Figure 2-167 Networking plan for deploying WACs in the dual-border node
scenario
4. The VLANIF 4023 interfaces on the WACs are the CAPWAP source interfaces.
In the WAC hot standby scenario, a VRRP group needs to be configured on
VLANIF 4023 with WAC_1 being the master device.
5. The core switches communicate with the WACs at Layer 3 through VLANIF
4024 and VLANIF 4025 respectively. The routes destined for the network
segments where the CAPWAP source interface addresses reside need to be
configured on the core switches, and the routes to the network segments
where the gateways of the Fit AP management subnets reside need to be
configured on the WACs.
6. The master device of the VRRP group is associated with the uplink route to
the network segment of the WAC CAPWAP source interface address.
Single-area | All edge devices and connected devices in the upstream direction
are assigned to Area 0. | The number of edge devices is less than or equal to 100.
To enhance reliability of OSPF, you are advised to enable the functions described
in the following table.
Function | Description
FRR | OSPF IP FRR calculates a backup link in advance. With this function,
devices can quickly switch traffic to the backup link without interrupting
traffic if the primary link fails. This protects traffic and thus greatly
improves OSPF network reliability.
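The FRR behavior can be sketched as a precomputed backup next hop that is used the moment the primary fails, with no recomputation on the failover path; the class and next-hop names are illustrative assumptions.

```python
# Sketch of the IP FRR idea: the backup next hop is computed in advance,
# so failover is a local swap rather than a full route reconvergence.

class FrrRoute:
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup
        self.primary_up = True

    def next_hop(self):
        # No recomputation on failure: the precomputed backup is used at once.
        return self.primary if self.primary_up else self.backup

r = FrrRoute(primary="Core_1", backup="Core_2")
assert r.next_hop() == "Core_1"
r.primary_up = False             # primary link fails
assert r.next_hop() == "Core_2"  # immediate switch to the backup link
```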
Context
In the distributed gateway solution, northbound traffic forwarding from edge
nodes to border nodes on the overlay network and BGP configuration vary
depending on dual-border node networking and single-border node networking
scenarios, which are described as follows. For other information, see 2.3.4.3 Fabric
Network Design.
Egress Traffic Forwarding Mode Design from Edge Nodes to Border Nodes
On a fabric network with dual border nodes deployed in the distributed gateway
solution, it is recommended that the dual-homed networking be used between
core and aggregation devices so that Layer 3 VXLAN tunnels are established
between the border nodes and edge nodes. In this scenario, three solutions are
available to improve the reliability of egress traffic:
● Provide two uplink traffic paths of the edge node to implement ECMP-based
load balancing.
● Select different active egresses for different edge devices.
● Select different active egresses for different uplink network segments.
Figure 2-170 shows the overlay traffic forwarding path in the dual-border node
networking scenario.
Figure 2-170 Overlay traffic forwarding path in the dual-border node networking
scenario
You are advised to deploy two uplink traffic paths of the edge node to implement
ECMP load balancing. You can select the other two solutions based on service
requirements.
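The recommended ECMP option can be sketched as a per-flow hash over the 5-tuple that pins each flow to one of the two uplinks while spreading flows across both; the hash function and border-node names are illustrative assumptions.

```python
# Sketch of ECMP load balancing over the edge node's two uplink paths:
# hashing the 5-tuple keeps each flow on one path (avoiding reordering)
# while distributing different flows across both border nodes.
import zlib

UPLINKS = ["Border_1", "Border_2"]

def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return UPLINKS[zlib.crc32(key) % len(UPLINKS)]

# The same flow always takes the same path...
a = pick_uplink("10.1.1.5", "198.51.100.7", "tcp", 40000, 443)
assert a == pick_uplink("10.1.1.5", "198.51.100.7", "tcp", 40000, 443)
# ...while many flows spread across both uplinks.
paths = {pick_uplink("10.1.1.5", "198.51.100.7", "tcp", p, 443)
         for p in range(40000, 40100)}
assert paths == {"Border_1", "Border_2"}
```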
Table 2-86 Common protocols used in traditional network O&M that can be
configured on iMaster NCE-Campus
SSH ● Configures local accounts for login over SSH on network devices.
● Logs in to the CLI of a network device from iMaster NCE-Campus
using SSH.
Basic ● Monitors the basic status of sites, devices, and terminals, such
service as the online rate, CPU usage, and memory status of devices at
monitoring a site.
● Provides traffic statistics analysis and reports from multiple
dimensions (site, terminal, application, and SSID).
● Monitors user information, including the online status and
traffic statistics.
WLAN topology:
● Network plan import: After the network plan file made by the WLAN Planner
is imported, iMaster NCE-CampusInsight displays data such as sites,
pre-deployed APs, obstacles, background images, and scale planned in the file.
● Network comparison: After pre-deployed APs are associated with real APs,
the planned data and actual data are compared in terms of the power, channel,
frequency bandwidth, number of clients, negotiated rate, and signal strength,
and the comparison result is displayed.
● Wi-Fi heatmap display: The radio heatmap can be displayed based on the AP
location.
User application experience | Application analysis | Based on the monitoring and
analysis of audio and video service sessions, the SIP session statistics, service
traffic trend, and session details list can be displayed, helping users quickly
learn about the quality status of audio and video services.
● The time on network devices must be synchronous with that on CampusInsight. If the
time difference between the network devices and CampusInsight is greater than 10
minutes, CampusInsight cannot display the data reported by the network devices.
Typically, you need to configure an NTP server with the same source to ensure time
synchronization between network devices and CampusInsight. During the installation of
CampusInsight, you need to enter the IP address of the external NTP clock source. In
addition, you need to configure the same IP address of the external NTP clock source on
the network devices.
● The wireless network uses the "WAC + Fit AP" architecture, and the performance data
collection function of the Fit APs must be enabled on the web system of the WAC. This
function cannot be enabled on iMaster NCE-Campus.
● To log in to CampusInsight through iMaster NCE-Campus as a proxy service, you need
to enable the intelligent analyzer agent feature on iMaster NCE-Campus in advance.
NOTE
iMaster NCE-Campus supports only single-fabric overlay network orchestration for multiple
campuses, and does not support automatic orchestration for multi-fabric VXLAN
interconnection of multiple campuses. You need to manually configure VXLAN
interconnection through the CLI. Manually configuring VXLAN interconnection services is
complex. You are advised to use the multi-campus single-fabric networking.
● Free mobility: It is recommended that the same free mobility policy matrix be
configured for the same VN in each fabric to ensure policy consistency
between the fabrics. You are advised to configure IP-group entry subscription
for the border nodes and configure the border nodes as policy enforcement
points.
● Network scale: Manually configuring VXLAN interconnection between
campuses is complex. The increasing number of interconnected campuses
leads to higher configuration complexity. Therefore, it is recommended that a
maximum of 10 campuses be interconnected through VXLAN.
NOTE
Multi-fabric interconnection through VXLAN does not support multicast overlay, cross-fabric
Layer 2 service access, and cross-fabric IPv6 service access.
scenarios), you need to consider the number of edge nodes at the HQ when
adding core devices of the branch to the fabric.
NOTE
1. When the core devices of the branch are added to the HQ fabric as edge nodes, they can
connect to external networks only through the border nodes of the HQ.
2. In the distributed gateway scenario where a single fabric is deployed, the branch core
switch can be used as the border node, and the branch can directly connect to the external
network. A single fabric supports a maximum of eight border nodes.
3. The controller does not support automatic underlay route orchestration for a cross-site
fabric.