HCIA-Security V3.0 Training Material
Huawei e-Learning
https://ilearningx.huawei.com/portal/#/portal/ebg/51
Huawei Certification
http://support.huawei.com/learning/NavigationAction!createNavi?navId=_31&lang=en
Find Training
http://support.huawei.com/learning/NavigationAction!createNavi?navId=_trainingsearch&lang=en
More Information
Huawei learning APP
HCIA-Security
Huawei Certification Network Security Engineers
Issue: 3.0
Copyright © Huawei Technologies Co., Ltd. 2018. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without
prior written consent of Huawei Technologies Co., Ltd.
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not
be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all
statements, information, and recommendations in this document are provided "AS IS" without warranties,
guarantees or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: support@huawei.com
Integrity
Ensures the accuracy and completeness of information and its processing methods.
Availability
Ensures that authorized users can obtain desired information and use related assets when required.
Controllability
Ensures that the dissemination of information and the behavior of information systems can be monitored and controlled by authorized parties.
Non-repudiation
Ensures that participants in a communication cannot later deny having sent or received the information involved.
Watering hole: The attacker exploits the vulnerabilities of websites that targeted
individuals or organizations visit frequently and uses these websites to distribute
malware. For example, on an intranet server that employees frequently visit, the
attacker replaces an internal shared document with a Trojan horse. All computers
that download the document as required will be infected with the Trojan horse and
will send confidential information to the attacker.
Information system complexity: The information system may be attacked during the
design or operation process due to its vulnerabilities and defects. Major issues are as
follows:
Complex structure: The information system may need to support multiple types of
terminals (such as employee terminals, remote users, mobile terminals, routing
devices, and servers) and data services (such as service data, management data,
and voice data) on the network. All terminal and data types must be considered for
cyber security management.
Answer: B
International information security standardization began in the middle of the 1970s,
rapidly developed in the 1980s, and drew global attention in the 1990s. At present, there
are nearly 300 international and regional organizations establishing standards or
technical rules.
IEC was the first international organization established for the preparation and
publication of international standards for all electrical, electronic and related
technologies.
ITU is the United Nations specialized agency for information and communication
technologies. It allocates global radio spectrum and satellite orbits, develops global
telecommunication standards, works to improve telecommunication infrastructure in the
developing world, and promotes global telecommunication development.
Implement and operate the ISMS policy, controls, processes and procedures.
Assess and, where applicable, measure process performance against ISMS policy,
objectives and practical experience and report the results to management for review.
Take corrective and preventive actions, based on the results of the internal ISMS audit
and management review or other relevant information, to achieve continual
improvement of the ISMS.
ISO/IEC 27001 and ISO/IEC 27002, released in 2013, are the currently used standards.
Any company can implement an ISMS, but how? What requirements must be met? ISO
27000 provides detailed requirements which organizations can use to establish ISMSs.
ISO 27001 manages information security risks based on risk assessments and
comprehensively, systematically, and continuously improves information security
management using the Plan-Do-Check-Act (PDCA) cycle. It can be used to establish
and implement ISMSs and ensure the information security of organizations.
ISO 27001, an overall information security management framework based on the PDCA
cycle, focuses on the establishment of a continuous-cyclic long-term management
mechanism. Only certification to ISO/IEC 27001 is possible. Other ISO/IEC standards are
the specific clauses and operation guides for the certification. For example, ISO 27002
defines a specific information security management process under the guidance of ISO
27001.
The key check points in the ISO 27001 certification process are as follows:
Document review:
Security principles
Formal review:
Check the information asset identification and processing, and risk assessment and
handling forms.
Perform terminal security check, including the screen saver, screen lock, and
antivirus software installation and upgrade status.
Carry out the physical environment survey, including the field observation and
inquiry of equipment rooms and office environments.
Graded protection of information security refers to: graded security protection of crucial
government information, private and public information of legal
persons/organizations/citizens, and information systems that store, transmit, and
process the information; graded management of information security products in
information systems; graded response to and handling of information security incidents
in information systems.
Legal liabilities of graded protection:
An organization that does not carry out graded protection assessment will be
ordered to make rectifications in accordance with relevant regulations. If it violates
the provisions of China's Cybersecurity Law, enforced in June 2017, it will be
punished according to relevant laws and regulations. Article 21 of the
Cybersecurity Law: The State implements a
tiered cybersecurity protection system. Article 59: Where network operators do not
perform cybersecurity protection duties provided for in Articles 21 and 25 of this
Law, the administrative department shall order corrections and give warnings;
where corrections are refused or it leads to endangerment of cybersecurity or other
such consequences, a fine of between RMB 10,000 and RMB 100,000 shall be
imposed, and persons who are directly in charge shall be fined between RMB 5,000
and RMB 50,000.
Development timeline:
February 18, 1994, Decree No. 147 of the State Council, Regulations of the People's
Republic of China for Safety Protection of Computer Information Systems
September 2003, No. 27 [2003] of the General Office of the CPC Central Committee,
Opinions for Strengthening Information Security Assurance Work
November 2004, No. 66 [2004] of the Ministry of Public Security, Notice of the
Ministry of Public Security, the State Secrecy Bureau, the State Cipher Code
Administration and the Information Office of the State Council on Issuing the
Implementation Opinions on the Graded Protection of Information Security
September 2005, No. 25 [2004] of the State Council Information Office, Notice on
Forwarding the Guide for Implementing Graded Protection of e-Government
Information Security
January 2006, No. 7 [2006] of the Ministry of Public Security, Notice of the Ministry
of Public Security, the State Secrecy Bureau, the State Cipher Code Administration
and the Information Office of the State Council on Issuing the Administrative
Measures for the Graded Protection of Information Security (for Trial
Implementation)
June 2007, No. 43 [2007] of the Ministry of Public Security, Notice of the Ministry of
Public Security, the State Secrecy Bureau, the State Cipher Code Administration and
the Information Office of the State Council on Issuing the Administrative Measures
for the Graded Protection of Information Security
2009, No. 1429 [2009] of the Ministry of Public Security, Guiding Opinions on the
Building and Improvement of Graded Protection of Information Systems
March 2010, No. 303 [2010] of the Ministry of Public Security, Notice on Promoting
the Assessment System Construction and Grade Assessment for Graded Protection
of Information Security
Grade I: Destruction of the information system would cause damage to the legitimate
rights and interests of citizens, legal persons and other organizations, but would cause
no damage to national security, social order or public interests.
Grade II: Destruction of the information system would cause severe damage to the
legitimate rights and interests of citizens, legal persons and other organizations or cause
damage to social order and public interests, but would not damage national security.
Grade III: Destruction of the information system would cause severe damage to social
order and public interests or would cause damage to national security.
Grade IV: Destruction of the information system would cause particularly severe damage
to social order and public interests or would cause severe damage to national security.
Grade V: Destruction of the information system would cause particularly severe damage
to national security.
The legislation in the Sarbanes-Oxley Act (SOX) stems from a December 2001 securities
scandal involving Enron, then one of the largest energy companies in the United States.
The company hid massive debts that, when revealed, sent stock prices tumbling. With
investor confidence "thoroughly destroyed", the United States Congress and government
rapidly introduced the SOX Act. The act promised "to protect investors by improving the
accuracy and reliability of corporate disclosures made pursuant to the securities laws,
and for other purposes."
Risk management and control: Establish an internal control system and process.
Answers:
AB
Traditional networks contain the core, aggregation, and access layers. The core layer
provides high-speed data channels, the aggregation layer converges traffic and control
policies, and the access layer offers various access modes to devices.
OSI model: Open Systems Interconnection Reference Model
The OSI model is designed to overcome the interconnection difficulties and low
efficiency associated with using various incompatible protocols by defining an open
and interconnected network architecture.
The OSI reference model forms the basis for computer network communications. Its
design complies with the following principles:
There are clear boundaries between layers to facilitate understanding.
Each layer implements specific functions and does not affect each other.
Each layer is both a service provider and a service user. Specifically, each layer
provides services to its upper layer and uses services provided by its lower layer.
The division of layers encourages the development of standardized protocols.
There are sufficient layers to ensure that functions of each layer do not overlap.
The OSI reference model has the following advantages:
Simplifies network operations.
Provides standard interfaces that support plug-and-play and are compatible with
different vendors.
Enables vendors to design interoperable network devices and accelerate the
development of data communications networks.
Prevents a change in one area of a network from affecting other areas. Therefore,
each area can be updated quickly and independently.
Simplifies network issues for easier learning and operations.
In the OSI model, units of data are collectively called Protocol Data Units (PDU). However,
each PDU is called a different name according to the layer at which it is sent:
Application layer (layer 7): data is called an Application Protocol Data Unit (APDU)
Presentation layer (layer 6): data is called a Presentation Protocol Data Unit (PPDU)
Session layer (layer 5): data is called a Session Protocol Data Unit (SPDU)
Each layer of the OSI model encapsulates data to ensure that the data can reach the
destination accurately and can be accepted and processed by the terminal host. A node
encapsulates the data to be transmitted by adding a protocol-specific header. When
data is processed at some layers, information (such as the frame check sequence at the
data link layer) is also appended to the tail of the data. This process is called
encapsulation.
The physical layer involves the transmission of bit streams over a transmission medium
and is fundamental in the OSI model. It implements the mechanical and electrical
features required for data transmission and focuses only on how to transmit bit streams
to the peer end through different physical links. The information contained in each bit
stream, for example, address or application data, is irrelevant at this layer. Typical devices
used at the physical layer include repeaters and hubs.
The main tasks of the data link layer are to control the physical layer and allow it to
present an error-free link to the network layer, detect and correct any errors, and
perform traffic control.
The network layer is responsible for forwarding packets and checks the network
topology to determine the optimal route for transmission. It is critical to select a route
from the source to the destination for data packets. A network layer device calculates the
optimal route to the destination by running a routing protocol (such as RIP), identifies
the next network device (hop) to which the data packet should be forwarded,
encapsulates the data packet by using the network layer protocol, and sends the data to
the next hop by using the service provided by the lower layer.
The transport layer is responsible for providing effective and reliable services to its
users (generally, the applications at the application layer).
At the session layer and above, the data transmission unit is the packet. The session
layer provides a mechanism for establishing and maintaining communications between
applications, including access verification and session management. For example,
verification of user logins by a server is completed at the session layer.
The presentation layer is generally responsible for how user information is represented.
It converts data from a given syntax to one that is suitable for use in the OSI system.
That is, this layer provides a formatted representation and data conversion service. In
addition, this layer is also responsible for data compression, decompression, encryption,
and decryption.
The application layer provides interfaces for operating systems or network applications
to access network services.
The Transmission Control Protocol/Internet Protocol (TCP/IP) model is widely used due to its
openness and usability. The TCP/IP protocol stack is implemented as standard protocols.
The TCP/IP model is divided into four layers (from bottom to top): link layer, internet
layer, transport layer, and application layer. Some documents define a model with five
layers, where the link layer is split into a link layer and a physical layer (equivalent to
layers 1 and 2 in the OSI model).
Each layer of the TCP/IP protocol stack has corresponding protocols, which are used to
build network applications. Some protocols cannot be cleanly assigned to a single layer.
For example, ICMP, IGMP, ARP, and RARP are deployed at the same layer as IP (the
internet layer). However, in some descriptions, ICMP and IGMP are placed above the IP
protocol, while ARP and RARP are placed below it.
Application layer
HyperText Transfer Protocol (HTTP): It is used to access various pages on the web
server.
File Transfer Protocol (FTP): It is used to transfer data from one host to another.
Domain Name System (DNS): It is used to convert the domain name of the host to
an IP address.
Internet layer
Internet Protocol (IP): The IP protocol and routing protocols work together to find
an optimal path that can transmit packets to the destination. The IP protocol is not
concerned with the contents of data packets; it provides a connectionless and
unreliable service.
Internet Control Message Protocol (ICMP): Defines the functions of controlling and
transferring messages at the network layer.
The network access layer consists of two sub-layers: the Logical Link Control (LLC)
sublayer and the Media Access Control (MAC) sublayer.
The sender submits the user data to the application, which then sends the data to the
destination. The data encapsulation process is as follows:
The user data is first transmitted to the application layer, and the application layer
information is added.
After the application layer processing is complete, the data is transmitted to the
transport layer, where transport layer information, such as a TCP or UDP header
(the application layer protocol determines whether TCP or UDP is used), is added.
After the processing at the transport layer is complete, the data is transmitted to
the Internet layer. The Internet layer information (such as IP address) is then added.
After the data is processed at the Internet layer, the data is transmitted to the network
access layer. The network access layer information (such as Ethernet, 802.3, PPP, and
HDLC) is added. Then, the data is transmitted to the destination as a bit stream.
Processing differs based on different devices. For example, a switch processes only the
data link layer information, whereas a router processes the network layer information.
The original user data can be restored only when the data reaches the destination.
After the user data arrives at the destination, the decapsulation process is performed as
follows.
Data packets are first sent to the network access layer. After receiving the packets,
the network access layer removes the data link layer information and obtains the
internet layer information (such as the IP address).
After receiving the packets, the internet layer removes the internet layer
information and identifies the upper-layer protocol (such as TCP).
After receiving the packets, the transport layer removes the transport layer
information and identifies the upper-layer protocol (such as HTTP).
After receiving the packets, the application layer removes the application layer
information. The data that is displayed is the same as that sent by the source host.
The application layer and transport layer provide end-to-end services. The internet layer
and network access layer provide segment-to-segment services.
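As a toy illustration of the encapsulation and decapsulation steps above (the header strings below are placeholders, not real protocol headers):

```python
def encapsulate(user_data: bytes) -> bytes:
    segment = b"TCP|" + user_data        # transport layer adds its header
    packet = b"IP|" + segment            # internet layer adds its header
    frame = b"ETH|" + packet             # network access layer adds its header
    return frame

def decapsulate(frame: bytes) -> bytes:
    packet = frame.removeprefix(b"ETH|")     # strip the link header
    segment = packet.removeprefix(b"IP|")    # strip the IP header
    return segment.removeprefix(b"TCP|")     # strip the TCP header
```

A real stack adds binary headers (and, at the data link layer, a trailer), but the nesting order is the same.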
Quintuple structure: Source IP address, destination IP address, protocol in use (for
example, 6 indicates TCP, and 17 indicates UDP), source port, and destination port.
Destination port: Generally, well-known application services, such as HTTP, FTP, and
Telnet, use standard ports. Less common applications usually use ports defined by their
vendors, which ensures that the service ports registered on the same server are unique.
Source port: Generally, common application services, such as HTTP, FTP, and Telnet,
are assigned well-known port numbers (in the range from 0 to 1023). However,
client operating systems usually pick larger, ephemeral port numbers as their initial
source ports. Because source ports are unpredictable, they are seldom used in ACL
policies.
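As a minimal sketch, the quintuple can be modeled as a named tuple. The addresses and the `matches_http` helper are illustrative; the policy matches only on protocol and destination port because source ports are ephemeral:

```python
from collections import namedtuple

# The five fields that identify a flow; protocol 6 = TCP, 17 = UDP.
Quintuple = namedtuple("Quintuple", "src_ip dst_ip protocol src_port dst_port")

def matches_http(q: Quintuple) -> bool:
    # Policies typically match the protocol and destination port only,
    # because source ports are ephemeral and unpredictable.
    return q.protocol == 6 and q.dst_port == 80

flow = Quintuple("192.168.1.10", "203.0.113.5", 6, 49152, 80)  # example values
```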
ICMP: ICMP is used to test network connectivity. Typical applications are Ping and
Tracert.
Host A sets the destination IP address in the ARP request packet to its own IP address
and broadcasts the packet on the network. If Host A receives an ARP reply, it knows that
the IP address is already in use, which allows it to detect an IP address conflict. This
technique is known as gratuitous ARP.
ICMP is one of the core protocols in the TCP/IP protocol stack. ICMP is used to send
control packets between IP network devices to transmit error, control, and query
messages.
A typical ICMP application is the ping command. Ping is a common tool for checking
network connectivity and collecting related information. In the ping command, users can
assign different parameters, such as the length and number of ICMP packets, and the
timeout period for waiting for a reply. Devices construct ICMP packets based on the
parameters to perform ping tests.
-c count: Specifies the number of times that ICMP Echo Request packets are sent.
The default value is 5.
-h ttl-value: Specifies the Time To Live (TTL) for ICMP Echo Request packets. The
default value is 255.
-t timeout: Specifies the timeout period of waiting for an ICMP Echo Reply packet
after an ICMP Echo Request packet is sent.
The ping command output contains the destination address, ICMP packet length, packet
number, TTL value, and round-trip time. The packet number is a variable parameter field
contained in an Echo Reply message (Type=0). The TTL is carried in the IP header of the
reply, and the round-trip time is calculated by the source from the send and receive
timestamps.
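The structure of an ICMP Echo Request can be sketched in Python using only the standard library. The header layout (Type, Code, Checksum, Identifier, Sequence Number) follows RFC 792; the payload bytes here are arbitrary, and the checksum is the standard Internet checksum (one's-complement sum of 16-bit words):

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                       # pad to an even length
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:                     # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier: int, seq: int,
                       payload: bytes = b"abcdefgh") -> bytes:
    # Type=8 (Echo Request), Code=0. The checksum is computed over the
    # packet with a zeroed checksum field, then filled in.
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, seq)
    chk = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, chk, identifier, seq) + payload
```

A correctly checksummed packet re-checksums to 0, which is how receivers validate it.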
Tracert is another typical application of ICMP. Tracert traces the forwarding path of
packets hop by hop based on the TTL value in the packet header. To trace the path to a
specific destination address, the source end first sets the TTL value of the packet to 1.
After the packet reaches the first node, the TTL times out. Therefore, this node sends a
TTL timeout message carrying the timestamp to the source end. Then, the source end
sets the TTL value of the packet to 2. After the packet reaches the second node, the TTL
times out. This node also returns a TTL timeout message. The process repeats until the
packet reaches the destination. In this way, the source end can trace each node through
which the packet passes according to the information in the returned packet. This allows
the source end to calculate the round-trip time according to the timestamp information.
Tracert is an effective method to detect packet loss and delay, and helps administrators
discover routing loops on the network.
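The hop-by-hop TTL logic described above can be illustrated with a small simulation. The router addresses are hypothetical; a real tracert sends UDP or ICMP probes and listens for ICMP TTL-exceeded replies:

```python
def simulate_tracert(path, destination):
    # 'path' lists the intermediate routers in order (hypothetical
    # addresses). Each probe's TTL increases by one; the router where the
    # TTL expires answers with a TTL-exceeded message, revealing itself.
    discovered = []
    for ttl in range(1, len(path) + 2):
        if ttl <= len(path):
            discovered.append(path[ttl - 1])   # TTL expired at this router
        else:
            discovered.append(destination)     # probe reached the target
            break
    return discovered
```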
Network segment routes: The destination is a network segment. The subnet mask
of an IPv4 destination address is less than 32 bits or the prefix length of an IPv6
destination address is less than 128 bits.
Host routes: The destination is a host. The subnet mask of an IPv4 destination
address is 32 bits or the prefix length of an IPv6 destination address is 128 bits.
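This classification can be checked mechanically with Python's standard `ipaddress` module (a minimal sketch; the prefixes below are examples):

```python
import ipaddress

def route_type(destination: str) -> str:
    # Host route: /32 for IPv4 or /128 for IPv6;
    # anything shorter is a network segment route.
    net = ipaddress.ip_network(destination, strict=False)
    host_len = 32 if net.version == 4 else 128
    return "host" if net.prefixlen == host_len else "network segment"
```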
Routes are classified into the following types based on whether the destination is directly
connected to a router:
Direct routes: A router is directly connected to the network where the destination is
located.
Routes are classified into the following types based on how they are generated:
Static routes are easy to configure, have low requirements on the system, and apply
to small, simple, and stable networks. However, static routes cannot automatically
adapt to network topology changes and manual intervention is required.
Dynamic routing protocols have their own routing algorithms. Dynamic routes can
automatically adapt to network topology changes and apply to networks with a
large number of Layer 3 devices. The configurations of dynamic routes are complex.
Dynamic routes have higher requirements on the system than static routes do and
consume both network and system resources.
According to the application range, dynamic routing protocols are classified into
the following types:
Interior Gateway Protocols (IGP): run within an AS. Common IGPs include
RIP, OSPF, and IS-IS.
Exterior Gateway Protocols (EGP): run between ASs. BGP is the most
frequently used EGP.
According to the used algorithms, dynamic routing protocols are classified into the
following types:
Distance-vector protocols: include RIP and BGP. BGP is also called a path-
vector protocol.
Link-state protocols: include OSPF and IS-IS.
OSPF can trigger an update to rapidly detect and advertise topology changes within an
AS.
OSPF can solve common issues caused by network expansion. For example, if additional
routers are deployed and the volume of routing information exchanged between them
increases, OSPF can divide each AS into multiple areas and limit the range of each area.
OSPF is suitable for large and medium-sized networks. In addition, OSPF supports
authentication: packets between OSPF routers can be exchanged only after being
authenticated.
SNMP is a network management protocol widely used in TCP/IP networks. It enables a
network management workstation that runs the NMS to manage network devices.
The NMS queries and obtains network resource information through SNMP.
Network devices proactively report alarm messages to the NMS so that network
administrators can quickly respond to network issues.
The NMS is network management software running on a workstation. It enables network
administrators to monitor and configure managed network devices.
An agent is a network management process running on a managed device. After the
managed device receives a request sent from the NMS, the agent responds with
operations. The agent provides the following functions: collecting device status
information, enabling the NMS to remotely operate devices, and sending alarm
messages to the NMS.
A MIB is a virtual database of device status information maintained on a managed device.
An agent searches the MIB to collect device status information.
Multiple versions of SNMP are available. Typically, these versions are as follows:
SNMPv1: Easy to implement but has poor security.
SNMPv2c: The security is low. It is not widely used.
SNMPv3: Defines a management framework to provide a secure access mechanism
for users.
SNMPv1: The NMS on the workstation and the Agent on the managed device exchange
SNMPv1 packets to manage the managed devices.
Compared with SNMPv1, SNMPv2c has greatly improved its performance, security, and
confidentiality.
SNMPv3 has an enhanced security and management mechanism based on SNMPv2. The
architecture used in SNMPv3 uses a modular design and enables administrators to
flexibly add and modify functions. SNMPv3 is highly adaptable and applicable to multiple
operating environments. It can not only manage simple networks and implement basic
management functions, but also provide powerful network management functions to
meet the management requirements of complex networks.
The eSight NTA provides users with reliable and convenient traffic analysis solutions,
monitors network-wide traffic in real time, and provides multi-dimensional traffic
analysis reports. This solution helps users detect abnormal traffic in a timely manner and
learn about both network bandwidth usage and traffic distribution. In addition, it helps
enterprises implement traffic visualization, fault query, and planning.
Features:
Traffic visualization: Monitors IP traffic in real time, displays the network traffic
trend, and helps administrators detect and handle exceptions in a timely manner.
Exception detectability: Through the NTA, users can analyze and audit the original
IP traffic to identify the root cause of abnormal traffic.
Proper planning: The traffic trend and customized reports provided by the NTA
provide reference for administrators to plan network capacity.
NetStream provides data that is useful for many purposes, including:
Network management and planning
Enterprise accounting and departmental charging
ISP billing report
Data storage
Data mining for marketing purposes
NetStream is implemented using the following devices:
NetStream Data Exporter (NDE): Samples the traffic and exports the traffic statistics.
NetStream Collector (NSC): Parses packets from the NDE and sends statistics to the
database for the NDA to parse.
NetStream Data Analyzer (NDA): Analyzes and processes the statistics, generates
reports, and provides a foundation for various services, such as traffic charging,
network planning, and monitoring.
The NetStream system works as follows:
The NDE, configured with the NetStream function, periodically sends the collected
traffic statistics to the NSC.
The NSC processes the traffic statistics and sends them to the NDA.
The NDA analyzes the data for applications such as charging and network planning.
To establish a connection, TCP uses a three-way handshake process. This process is used
to confirm the start sequence number of the communications parties so that subsequent
communications can be performed in an orderly manner. The process is as follows:
When the connection starts, the client sends a SYN to the server. The client sets the
SYN's sequence number to a random value a.
After receiving the SYN, the server replies with a SYN+ACK. The server sets the
ACK's acknowledgment number as the received sequence number plus one (that is,
a+1), and the SYN's sequence number as a random value b.
After receiving the SYN+ACK, the client replies with an ACK. The client sets the
ACK's acknowledgment number as the received sequence number plus one (that is,
b+1).
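The sequence-number bookkeeping of the three-way handshake can be sketched as follows (the packet dictionaries are illustrative, not real TCP segments):

```python
import random

def three_way_handshake():
    # SYN: the client picks a random initial sequence number a
    a = random.randrange(2**32)
    syn = {"flags": "SYN", "seq": a}
    # SYN+ACK: the server acknowledges a+1 and picks its own number b
    b = random.randrange(2**32)
    syn_ack = {"flags": "SYN+ACK", "seq": b, "ack": (a + 1) % 2**32}
    # ACK: the client acknowledges b+1
    ack = {"flags": "ACK", "seq": (a + 1) % 2**32, "ack": (b + 1) % 2**32}
    return syn, syn_ack, ack
```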
To terminate a connection, TCP uses a four-way handshake process. The process is as
follows:
The client sends a connection release packet (FIN=1) to the server and stops
sending data. The client sets the FIN's sequence number as a (seq=a) and enters
the FIN-WAIT-1 state.
After receiving the FIN, the server replies with an acknowledgement packet
(ACK=1). The server sets the ACK's acknowledgement number as the received
sequence number plus one (ack=a+1), sets the sequence number as b, and enters
the CLOSE-WAIT state.
After receiving the ACK, the client enters the FIN-WAIT-2 state and waits for the
server to send a FIN.
Because the connection is now in the half-closed state, the server may continue to
send data. After the server finishes sending any remaining data, it sends a
connection release packet (FIN=1, ack=a+1) to the client. Assume that its sequence
number is seq=c. The server then enters the LAST-ACK state and waits for
acknowledgement from the client.
After receiving the connection release packet from the server, the client replies with
an acknowledgement packet (ACK=1). The client sets the acknowledgement
number to ack=c+1 and sequence number to seq=a+1. The client then enters the
TIME-WAIT state.
After receiving the ACK from the client, the server enters the CLOSED state
immediately and ends the TCP connection.
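The four packets of the connection release, with the sequence and acknowledgement numbers used in the text (a, b, and c), can be sketched as:

```python
def four_way_release(a: int, b: int, c: int):
    # a: client's FIN sequence number; b: server's ACK sequence number;
    # c: server's FIN sequence number (all example values).
    fin1 = {"from": "client", "FIN": 1, "seq": a}                    # FIN-WAIT-1
    ack1 = {"from": "server", "ACK": 1, "seq": b, "ack": a + 1}      # CLOSE-WAIT
    fin2 = {"from": "server", "FIN": 1, "seq": c, "ack": a + 1}      # LAST-ACK
    ack2 = {"from": "client", "ACK": 1, "seq": a + 1, "ack": c + 1}  # TIME-WAIT
    return [fin1, ack1, fin2, ack2]
```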
HTTP/HTTPS: refers to the Hypertext Transfer Protocol (and its TLS-secured variant
HTTPS), which is used to browse web pages.
FTP protocol: refers to File Transfer Protocol, which is used to upload and download file
resources.
DNS: refers to the Domain Name System, which is used to resolve domain names to
IP addresses.
A root server is primarily used to manage the main directory of the Internet. The number
of root server addresses is limited to 13 worldwide. Among the 13 nodes, 10 are located
in the United States, and the other three are in the UK, Sweden, and Japan. Although the
network has no borders, servers still have national boundaries. All root servers are
managed by ICANN, the Internet domain name and number allocation agency authorized
by the US government.
A top-level domain name server is used to store top-level domain names such as .com,
.edu, and .cn.
An authoritative server stores definitive domain name records (the resolution
relationship between a domain name and an IP address) for the zone it serves. If every
person accessing the Internet were to send requests directly to an authoritative server,
the server would be overloaded. Therefore, a cache server is necessary.
A cache server is equivalent to a proxy of the authoritative server and reduces the
pressure of the authoritative server. Each time a user accesses the Internet, a request for
domain name resolution is sent to the cache server. Upon receiving this request for the
first time, the cache server requests the domain name and IP address resolution table
from an authoritative server, and then stores the table locally. Subsequently, if a user
requests the same domain name, the cache server directly replies to the user. The IP
address of a website does not often change. However, entries in the resolution table are
valid only for a certain period. When the validity period expires, the entry is automatically
aged. The system queries the authoritative server again if a user request is sent. This
aging mechanism ensures that the domain name resolution on the cache server is
updated periodically.
The resolution process of DNS is as follows:
The DNS client queries in recursive mode. The cache server first checks whether it
has a local resolution cache for the domain name.
If there is no local cache, the request is sent to a root server. After receiving
the www.vmall.com request, the root server checks which server is authoritative for
.com and returns the IP address of the top-level domain name server for .com.
The cache server continues to send a www.vmall.com resolution request to the top-
level domain name server. After receiving the request, the top-level domain name
server returns the IP address of the authoritative server for the next-level domain
vmall.com.
After obtaining the IP address of www.vmall.com, the cache server sends the IP
address to the client and caches the IP address locally.
If a client requests the domain name resolution of www.vmall.com again, the cache
server directly responds with the IP address.
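The caching and aging behavior described above can be sketched in a few lines. The class name, TTL value, upstream callback, and addresses below are all invented for illustration:

```python
import time

# Hypothetical sketch of a cache server with entry aging: entries expire
# after a TTL, forcing a fresh query to the authoritative server.
class DnsCache:
    def __init__(self, ttl_seconds, resolve_upstream):
        self.ttl = ttl_seconds
        self.resolve_upstream = resolve_upstream  # queries the authoritative server
        self.entries = {}  # domain -> (ip, time_stored)

    def resolve(self, domain, now=None):
        now = time.time() if now is None else now
        cached = self.entries.get(domain)
        if cached and now - cached[1] < self.ttl:
            return cached[0]  # valid entry: reply directly from the cache
        ip = self.resolve_upstream(domain)  # miss or aged-out entry
        self.entries[domain] = (ip, now)
        return ip

upstream_calls = []
def ask_authoritative(domain):
    upstream_calls.append(domain)
    return "10.1.2.3"  # invented address

cache = DnsCache(ttl_seconds=300, resolve_upstream=ask_authoritative)
cache.resolve("www.vmall.com", now=0)    # first query: asks the authoritative server
cache.resolve("www.vmall.com", now=100)  # within TTL: answered from cache
cache.resolve("www.vmall.com", now=400)  # aged out: asks again
```

After the three calls, the authoritative server has been queried only twice; the middle lookup was served entirely from the cache.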
When FTP is used to transfer files, two TCP connections are used. The first is the control
connection between the FTP client and the FTP server. The FTP server enables port 21
and waits for the FTP client to send a connection request. The FTP client enables a
random port and sends a connection setup request to the FTP server. The control
connection is used to transfer control commands between the server and the client.
The second is the data connection between the FTP client and the FTP server. The server
uses TCP port 20 to establish a data connection with the client. Generally, the server
actively establishes or interrupts data connections.
Because FTP is a multi-channel protocol, a random port is used to establish the data
channel. If a firewall exists, the channel may fail to be set up. For details, see the
following sections.
In active mode, if a firewall is deployed, the data connection may fail to be established
because it is initiated by the server. Passive mode solves this issue. The active mode
facilitates the management of the FTP server but impairs the management of the client.
The opposite is true in the passive mode.
By default, port 21 of the server is used to transmit control commands, and port 20 is
used to transmit data.
The procedure for setting up an FTP connection in active mode is as follows:
The server enables port 21 to listen and waits to set up a control
connection with the client.
The client initiates a control connection setup request and the server responds.
The client sends a PORT command through the control connection to notify the
server of the temporary port number used for the client data connection.
The server uses port 20 to establish a data connection with the client.
The procedure for setting up an FTP connection in passive mode is as follows:
The server enables port 21 to listen and waits to set up a control connection
with the client.
The client initiates a control connection setup request and the server responds.
The client sends the PASV command through the control connection to notify the
server that the client is in passive mode.
The server responds and informs the client of the temporary port number used for
the data connection.
A data connection is set up between the client and the temporary port of the server.
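In passive mode, the server's reply encodes the temporary data port. Below is a sketch of how a client could decode a PASV reply in the RFC 959 "227" format; the sample reply string is invented. Real FTP clients, such as Python's ftplib, perform this parsing internally:

```python
import re

# Sketch of how a client decodes the server's PASV reply to learn the
# temporary data-connection endpoint; the sample reply is invented.
def parse_pasv_reply(reply):
    """Extract (ip, port) from '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)'."""
    h1, h2, h3, h4, p1, p2 = map(
        int, re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply).groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

ip, port = parse_pasv_reply("227 Entering Passive Mode (192,168,0,10,19,137)")
# The client then connects to ip:port to open the data channel.
```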
WWW is short for World Wide Web, also known as 3W or Web. Hypertext is a holistic
information architecture, which establishes links for the different parts of a document
through keywords so that information can be searched interactively. Hypermedia is the
integration of hypertext and multimedia.
The Internet uses the combination of hypertext and hypermedia to extend information
links across the entire Internet. The web is a kind of hypertext information system. It
enables readers to jump from one position in the text to another instead of reading in a
fixed order. This multi-linking is its distinguishing feature.
HTTP relies on TCP for connection-oriented transmission but has no encryption or
verification mechanism. As a result, its security is insufficient. HTTPS is a secure version
of HTTP and supports encryption. However, HTTPS can also be used to hide malicious
content that security devices cannot identify, which poses security risks on a network.
HTTP is the most widely used network protocol on the Internet. HTTP was originally
developed to provide a method for publishing and receiving HTML pages. Resources
requested by HTTP or HTTPS are identified by Uniform Resource Identifiers (URIs).
The server accepts the connection request and establishes a connection. (Steps 1
and 2 are known as TCP three-way handshake.)
The client sends HTTP commands such as GET (HTTP request packet) to the server
through this connection.
The server receives the command and transmits the required data to the client
(HTTP response packets) based on the command.
The server automatically closes the connection after the data is sent (TCP four-way
handshake).
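The request/response exchange above can be illustrated offline. The helper functions and sample data below are invented, and no real network I/O takes place:

```python
# Offline illustration: build a minimal GET request and parse a response
# status line (sample data invented; no network I/O).
def build_get_request(host, path="/"):
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n")

def parse_status_line(response):
    """Split 'HTTP/1.1 200 OK' into its three parts."""
    version, code, reason = response.split("\r\n", 1)[0].split(" ", 2)
    return version, int(code), reason

request = build_get_request("www.example.com")
version, code, reason = parse_status_line(
    "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
```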
The mail sending process is as follows:
The PC encapsulates the email content into an SMTP message and sends it to the
sender's SMTP server.
After receiving the request from the user, the POP3 server obtains the email stored
on the SMTP server.
The POP3 server encapsulates the email into a POP3 message and sends it to the
PC.
SMTP, POP3, and IMAP servers are management software that provides services for
users and is deployed on hardware servers.
The differences between IMAP and POP3 are as follows: When POP3 is used, after the
client software downloads unread mails to the PC, the mail server deletes the mails. If
IMAP is used, users can directly manage mails on the server without downloading all
emails to the local PC.
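The POP3-versus-IMAP difference can be modeled with a toy mailbox class (entirely invented): POP3 removes mail from the server after download, while IMAP leaves it in place.

```python
# Toy mailbox model (invented) of the difference described above:
# POP3 deletes mail from the server after download; IMAP does not.
class Mailbox:
    def __init__(self, mails):
        self.mails = list(mails)

    def fetch_pop3(self):
        downloaded, self.mails = self.mails, []  # server deletes after download
        return downloaded

    def fetch_imap(self):
        return list(self.mails)  # mails stay on the server

box = Mailbox(["mail1", "mail2"])
imap_view = box.fetch_imap()  # mails remain on the server afterwards
pop3_view = box.fetch_pop3()  # mailbox on the server is now empty
```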
Answer:
1. C
2. B
Router: device for communications across network segments
Switch: device for communications on the same network segment or across network
segments
NGFW: The next-generation firewall (NGFW) can be deployed at the network egress to
provide preliminary protection or to protect the data center from attack.
vNGFW: The virtual NGFW (vNGFW) is deployed on virtual machines (VMs) and has
similar functions to a hardware firewall.
A switch works at the data link layer and forwards data frames. After receiving a data
frame, a switch forwards it according to the header information.
Next, let's take a small switch network as an example to explain the basic working
principles of a switch.
A switch has a MAC address table that stores the mapping between MAC addresses and
switch interfaces. A MAC address table is also called a Content Addressable Memory
(CAM) table.
As shown in the figure, a switch can perform three types of frame operations: flooding,
forwarding, and discarding.
Flooding: The switch forwards the frames received on an interface through all other
interfaces. (It does not forward frames through the interface that receives them).
Upon receipt of a unicast frame, the switch searches the MAC address table for the
destination MAC address of the frame.
If the MAC address cannot be found, the switch floods the frame.
If the MAC address is found, the switch forwards the frame if the MAC address is
not that of the interface on which the frame was received. Otherwise, the switch
discards the frame.
Upon receipt of a broadcast frame, the switch directly floods the frame without checking
the MAC address table. For a multicast frame, the switch performs complex processing
that is beyond the scope of this course. In addition, a switch has the capability to learn
information from received frames. Upon receipt of a frame, a switch checks the source
MAC address of the frame, maps this address to the interface that receives the frame,
and saves the mapping to the MAC address table.
In the initial state, a switch does not know any MAC addresses of the connected hosts.
Therefore, the MAC address table is empty. In this example, SWA is in the initial state.
Before receiving a data frame from Host A, SWA's MAC address table contains no entry
for Host A.
When Host A sends data to Host C, it sends an ARP request to obtain the MAC address
of Host C. In the ARP request, the destination MAC address is the broadcast address, and
the source MAC address is the MAC address of Host A. After receiving the ARP request,
SWA adds the mapping between the source MAC address and the receiving interface to
the MAC address table. The aging time of MAC address entries learned by X7 series
switches is 300 seconds by default. If SWA receives a data frame from host A again
within the aging time, SWA updates the aging time of the mapping between Host A's
MAC address and G0/0/1. After receiving a data frame whose destination MAC address is
00-01-02-03-04-AA, SWA forwards the frame through interface G0/0/1.
In this example, the destination MAC address of the ARP request sent by Host A is a
broadcast address. Therefore, the switch broadcasts the ARP request to Host B and Host
C through interfaces G0/0/2 and G0/0/3.
After receiving the ARP request, Host B and Host C query the ARP packet. Host C
processes the ARP request and sends an ARP reply. However, Host B does not reply. The
destination MAC address of the ARP reply is the MAC address of Host A and the source
MAC address is the MAC address of Host C. After receiving the ARP reply, SWA adds the
mapping between the source MAC address and the receiving interface to the MAC
address table. If the mapping exists in the MAC address table, the mapping is updated.
Then SWA queries the MAC address table, finds the corresponding forwarding interface
according to the destination MAC address of the frame, and forwards the ARP reply
through G0/0/1.
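The flood, forward, and discard logic together with source-MAC learning described above can be sketched as a toy model; the class and interface names are invented:

```python
# Toy model of the frame operations described above (flood, forward, discard)
# plus source-MAC learning into the CAM table.
BROADCAST = "ff-ff-ff-ff-ff-ff"

class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # MAC address -> interface (the CAM table)

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of interfaces the frame is sent out of."""
        self.mac_table[src_mac] = in_port  # learn the source MAC
        if dst_mac == BROADCAST or dst_mac not in self.mac_table:
            # Flood: all interfaces except the receiving one
            return [p for p in self.ports if p != in_port]
        out_port = self.mac_table[dst_mac]
        # Discard if the destination sits on the receiving interface; else forward
        return [] if out_port == in_port else [out_port]

sw = Switch(["G0/0/1", "G0/0/2", "G0/0/3"])
# Host A's broadcast ARP request arriving on G0/0/1 is flooded
flooded = sw.receive("G0/0/1", "00-01-02-03-04-AA", BROADCAST)
# Host C's unicast ARP reply is forwarded out the learned interface
reply = sw.receive("G0/0/3", "00-01-02-03-04-CC", "00-01-02-03-04-AA")
```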
A router is a network layer device that forwards packets between different networks. As
shown in the figure, host A and host B reside on different networks. When host A wants
to communicate with host B, host A sends a frame to host B. Upon receipt of this frame,
the router that resides on the same network as host A analyzes the frame. At the data
link layer, the router analyzes the frame header and determines that the frame is sent to
itself. It then sends the frame to the network layer. At the network layer, the router
determines to which network segment the destination address belongs based on the
network layer packet header. It then searches the routing table and forwards the packet
through the corresponding interface to the next hop destined for host B.
After receiving a packet, a router selects an optimal path according to the destination IP
address of the packet. It then forwards the packet to the next router. The last router on
the path forwards the packet to the destination host. The transmission of data packets
on the network is similar to a relay race. Each router forwards data packets to the next-
hop router according to the optimal path, and the packets are forwarded to the
destination through the optimal path. In some cases, because certain routing policies are
implemented, the path through which the data packets pass may not be optimal.
A router can determine the forwarding path of data packets. If multiple paths exist to the
destination, the router determines the optimal next hop according to calculations
specific to the routing protocol in use.
The word "firewall" was first used in the construction field, where a firewall's primary
function is to isolate and prevent a fire from spreading. In the communications field, a
firewall device is usually deployed to meet certain requirements by logically isolating
networks. It blocks various attacks on networks and allows normal communication
packets to pass through.
In communications, a firewall is mainly used to protect one network area against network
attacks and intrusions from another network area. Because of its isolation and defense
capabilities, it can be flexibly applied to network borders and subnet isolation, for
example, enterprise network egress, internal subnet isolation, and data center border.
Firewalls are different from routers and switches. A router is used to connect different
networks and ensure interconnection through routing protocols so that packets can be
forwarded to the destination. A switch is usually deployed to set up a local area network
(LAN) and serve as an intermediate hub for LAN communications. A switch forwards
packets through Layer 2/Layer 3 switching. A firewall is deployed at the network border
to control access to and from the network. Security protection is the core feature of a
firewall. The primary function of routers and switches is forwarding, whereas that of
firewalls is controlling.
Currently, there is a trend for mid-range and low-end routers and firewalls to integrate
for complementary functions. Huawei has released a series of such all-in-one devices.
The earliest firewall can be traced back to the late 1980s. Broadly speaking, firewall
development can be divided into the following phases:
1989-1994:
Packet filtering firewalls were developed in 1989 for simple access control.
This type of firewall is called the first-generation firewall.
Proxy firewalls were developed soon after and acted as a proxy for
communications between an intranet and an extranet at the application layer.
This type of firewall is referred to as the second-generation firewall. Proxy
firewalls have high security but low processing performance. In addition,
developing a proxy service for each type of application can be difficult. Therefore,
a proxy is provided for only a few applications.
In 1994, the industry released the first stateful inspection firewall, which
determined what action should be performed by dynamically analyzing packet
status. Because it does not need to proxy each application, a stateful
inspection firewall provides faster processing and higher security. This type of
firewall is called the third-generation firewall.
1995-2004:
At the same time, specific devices started to appear, for example, Web
Application Firewalls (WAFs) that protect web servers.
After 2004, the UTM market developed rapidly and UTM products
mushroomed, but new problems arose. First, the depth of application-layer
inspection was limited, and a more advanced detection method was required;
this drove the wide adoption of Deep Packet Inspection (DPI) technology.
Second, performance became an issue: when multiple functions run at the
same time, the processing performance of a UTM device deteriorates greatly.
Data flows in the same security zone bring no security risks and therefore require
no security policies.
The firewall performs security checks and implements security policies only when
data flows between security zones.
All devices on the networks connected to the same interface must reside in the
same security zone, and one security zone may contain the networks connected to
multiple interfaces.
Untrust zone
DMZ
Trust zone
Local zone
All devices on the networks connected to the same interface must reside in the same
security zone. Each security zone may contain networks connected to multiple interfaces.
The interfaces can be physical or logical interfaces. Users on different network segments
connected to the same physical interface can be added to different security zones by
using logical interfaces such as subinterfaces or VLANIF interfaces.
Question: If different interfaces belong to the same security zone, does the inter-zone
security forwarding policy take effect?
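One way to reason about this question is to model the rule stated earlier: a security policy is consulted only when traffic crosses between two different zones. The interface names and interface-to-zone mapping below are invented:

```python
# Minimal model of the rule above: inter-zone traffic is checked against
# security policies; intra-zone traffic is not (mapping invented).
zone_of_interface = {
    "GE1/0/1": "trust",
    "GE1/0/2": "trust",
    "GE1/0/3": "untrust",
}

def needs_security_policy(in_interface, out_interface):
    """Inter-zone traffic is checked; intra-zone traffic is not."""
    return zone_of_interface[in_interface] != zone_of_interface[out_interface]
```

Under this model, traffic between two interfaces in the same security zone does not trigger the inter-zone policy.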
VRP is a network operating system used on Huawei routers, Ethernet switches, and service
gateways to implement network access and interconnection services. It provides a
unified user and management interface, implements control plane functionality, and
defines the interface specifications of the forwarding plane (so that the interaction
between a product's forwarding plane and the VRP control plane can be implemented). It
also implements the network interface layer to shield the differences between the link
and network layers of each product.
VRP commands are protected by a level-based scheme. The four command levels are the
visit, monitoring, configuration, and management levels.
Visit level: Network diagnosis commands (such as ping and tracert) and commands
that are used to access external devices from the local device (for example, Telnet
client, SSH, and Rlogin). Commands at this level cannot perform file storage
configurations.
Monitoring level: Commands at this level are used for system maintenance or
service fault diagnosis, including the display and debugging commands.
Commands at the monitoring level cannot be saved in configuration files.
Configuration level: Service configuration commands, including routing commands
and commands at each network layer, are used to provide direct network services
for users.
Management level: Commands at this level affect normal system operation. Such
commands include file system, FTP, TFTP, Xmodem download, configuration file
switchover, standby board control, user management, command level setting, and
system internal parameter setting commands.
The system classifies login users into four levels, each of which corresponds to a
command level. That is, after logging in to the system, a user can use only the
commands that are assigned to a level equal to or lower than the user's level. To switch a
user from a lower level to a higher level, run the following command: super password
[ level user-level ] { simple | cipher } password.
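The level rule above (a user may run only commands at or below the user's own level) can be sketched as follows; the numeric mapping simply numbers the four levels named in the text:

```python
# Sketch of the rule above: a command may be run only if its level does not
# exceed the user's level (visit=0, monitoring=1, configuration=2, management=3).
COMMAND_LEVELS = {"visit": 0, "monitoring": 1, "configuration": 2, "management": 3}

def can_run(user_level, command_class):
    return COMMAND_LEVELS[command_class] <= user_level
```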
Enter an incomplete keyword and press Tab. The system automatically executes partial
help:
If the match is unique, the system replaces the original input with the complete
keyword and displays it on a new line, with the cursor one space after the keyword.
If you enter an incorrect keyword and press Tab, the keyword is displayed in a new
line. The entered keyword does not change.
Configuration procedure:
Choose Network > Interface, and select the interface to be modified.
Configure an IP address for the interface and add the interface to the security zone.
Key commands:
Enter the view of an interface.
<USG>system-view
[USG]interface interface-type interface-number
Configure a Layer 3 or Layer 2 Ethernet interface.
Configure a Layer 3 Ethernet interface.
ip address ip-address { mask | mask-length }
Configure a Layer 2 Ethernet interface.
portswitch
Assign the interface to a security zone.
Run the system-view command to enter the system view.
Run the firewall zone [ name ] zone-name command to create a security zone,
and enter the view of the security zone.
Run the add interface interface-type interface-number command to assign the
interface to the security zone.
To configure a static route, perform the following operations:
You can configure a static route to ensure that traffic sent between two entities always
follows this route. However, if the network topology changes or a fault occurs, the static
route does not change automatically and requires manual intervention.
The default route is used only when no matching route entry is available in the routing
table (that is, the routing table does not contain a specific route). The default route is a
route to the network 0.0.0.0/0 and is used if the destination IP address of a packet does
not match any route entry in the routing table. If no default route exists and the
destination IP address of the packet is not in the routing table, the packet is discarded. In
this case, an ICMP packet is returned to the source to report that the destination IP
address or network is unreachable.
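Route selection with a default-route fallback follows longest-prefix matching: among all matching entries, the most specific one wins, and 0.0.0.0/0 matches anything as a last resort. A minimal sketch using Python's ipaddress module, with an invented routing table:

```python
import ipaddress

# Sketch of longest-prefix route lookup with a 0.0.0.0/0 fallback
# (routing table entries invented for illustration).
routes = {
    ipaddress.ip_network("10.1.0.0/16"): "NextHop-A",
    ipaddress.ip_network("10.1.2.0/24"): "NextHop-B",
    ipaddress.ip_network("0.0.0.0/0"): "Default-GW",
}

def lookup(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]
```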
Device Login Management
Telnet: Connect a PC to the device over the network. Then, log in to the device
through Telnet to perform local or remote configuration. The device performs user
authentication according to the configured login parameters. This login mode
enables remote management and maintenance of the device.
SSH: This login mode uses secure transmission channels to enhance security of data
exchange. It provides powerful authentication functions to ensure information
security and protect devices against attacks, such as IP spoofing attacks.
Web: Access the device through the web browser on a client to control and
manage the device.
Right-click My Computer, choose Properties from the shortcut menu, and click Device
Manager. Check parameters in the Device Manager window.
In the Serial window shown in the figure, set Serial line to connect to based on the port
used by the PC (or configuration device), specify PuTTY configuration parameters on the
left according to the parameter table on the right, and click Open.
The default user name and password for logging in to the USG configuration interface
are admin and Admin@123 respectively. The user name is case insensitive and the
password is case sensitive.
Configure a PC to obtain an IP address automatically. Connect the PC Ethernet interface
to the default management interface on the device directly or through a switch. Enter
https://192.168.0.1 in the PC's web browser to access the web login page.
The default user name and password is admin and Admin@123 respectively.
Enable the web management function:
[USG] aaa
[USG-aaa-manager-user-webuser] level 3
Configure web device management on the USG interface:
[USG] aaa
[USG-aaa-manager-user-vtyadmin] password
Enter Password
[USG-aaa-manager-user-vtyadmin] level 3
Configure Telnet device management on the USG interface:
The configuration file contains the configurations that the device will load when it is
started. You can save configuration files on the device, modify and remove existing
configuration files, and specify which configuration file the device will load upon
each startup. System files include the USG software version and signature database
files. Generally, management of system files is required during software upgrades.
Upgrading system software: The system software can be uploaded to the device
through TFTP or FTP. Upgrade the system software to configure the software
system for the next startup.
A license is provided by a vendor to authorize the usage scope and validity period
of product features. It dynamically controls whether certain features of a product
are available.
Save the configuration file: Enable the firewall to use the current configuration as the
startup configuration the next time it restarts.
Method 1 (command line): Run the save command in the user view.
Method 2 (web): Click Save in the upper right corner of the home page.
Erase the configuration file (restore to factory settings): After the configuration file is
erased, the firewall uses the default parameter settings the next time it restarts.
Method 1 (command line): Run the reset saved-configuration command in the user
view.
Method 3 (hardware reset button): If the device is not powered on, press and hold
the RESET button, and turn on the power switch. When the device indicator blinks
twice per second, release the RESET button. The device starts with the default
configuration.
Method 4 (hardware reset button): If the device is started normally, press and hold
the RESET button for more than 10 seconds. The device restarts and uses the
default configuration.
Configure the system software and configuration file for the next startup:
Command line: Run the startup system-software sysfile command in the user view.
Web: Choose System > Maintenance > System Update, and then select Next
Startup System Software.
Function: The firewall will be restarted and the restart will be recorded in logs.
Method 1 (command line): Run the reboot command in the user view.
If the device has insufficient storage space available, the device automatically
deletes the system software that is running.
Optional: Click the Export buttons in sequence to export the device's alarm, log, and
configuration information to the PC. You are advised to save the configuration
information on the terminal.
If the current network allows the device to restart immediately after upgrade, select
either Set as the next startup system software, and restart the system or Set as the
next startup system software, and do not restart the system according to
requirements.
The upgraded system software can be used only after the device restarts.
License files are stored as .dat files. The software file name cannot contain any Chinese
characters.
Configuration commands:
Run the license active license-file command to activate the specified license file.
A
Dyn is a DNS SaaS provider whose core service is providing managed DNS for its users.
The DDoS attacks severely affected DNS services, preventing Dyn users from accessing
their websites. Because Dyn serves many companies, the damage caused spread quickly,
causing serious harm. More than 100 websites became inaccessible due to these attacks
for as long as three hours. Amazon alone suffered a loss of tens of millions of dollars.
The "zombies" launching the attacks mainly consisted of network cameras, digital hard
disk recorders, and smart routers. The Mirai botnet infected millions of devices, of which
only one tenth were involved in this attack.
Currently, the Internet has many zombie hosts and botnets. Driven by the desire for
profit, DDoS attacks have become a major security threat to the Internet.
Look for zombies: By default, the remote login function is enabled on IoT devices to
facilitate remote management by administrators. An attacker can scan IP addresses
to discover live IoT devices, which are then scanned for open Telnet ports.
Build a botnet: Some IoT device users use the default password directly or set a simple
password (a simple combination of user name/digits, such as "admin/123456") for their
devices. These passwords are easily cracked by an attacker through brute force. After
successfully cracking the password to an IoT device and logging in to it through Telnet,
the attacker remotely implants the Mirai malware into the IoT device to obtain absolute
control over the device.
After obtaining absolute control over the infected devices, in addition to using the
devices to launch DDoS attacks, the malware can also cause serious damage to the
systems, services, and data of the devices. For example, the malware can tamper
with data, steal privacy, modify configurations, and delete files, and may further
attack core service systems.
Load the attack module: The attacker loads the DNS DDoS attack module on the IoT
device.
Launch an attack: The attacker launches a DDoS attack against DNS service from Dyn in
the United States through the botnet, bringing down hundreds of customer websites.
IP spoofing is launched by exploiting the normal trust relationships between hosts. Hosts
with IP address-based trust relationships use IP address-based authentication to
determine whether to allow or reject the access of another host. Between two hosts with
a trust relationship, users can log in to one host from another without password
verification.
Crash the network where a trusted host resides to launch the attack without
resistance.
Connect to a port of the target host to guess the sequence number and its
increment value.
Masquerade the source address as the address of a trusted host and send a data
segment with the SYN flag set to initiate a connection.
Wait for the target host to send a SYN-ACK packet to the crashed trusted host.
Send the target host an ACK packet, with the source address masqueraded as the
address of a trusted host and sequence number as that expected by the target host,
plus 1.
After the connection is established, send commands and requests to the target
host.
A Distributed Denial of Service (DDoS) attack is a typical kind of traffic attack.
In a DDoS attack, the attacker resorts to every possible means to control a large number
of online hosts. These controlled hosts are called zombie hosts, and the network
consisting of the attacker and zombie hosts is called a botnet. An attacker launches
DDoS attacks by controlling many zombie hosts to send a large number of elaborately
constructed attack packets to the attack target. As a result, links are congested, and
system resources are exhausted on the attacked network. This prevents the attack target
from providing services for legitimate users.
DDoS attacks are divided into different types based on the types of exploited attack
packets. Currently, popular DDoS attacks include SYN flood, UDP flood, ICMP flood,
HTTP flood, HTTPS flood, and DNS flood.
The most common means of launching an SQL injection attack is to construct elaborate
SQL statements and inject them into the content submitted on web pages. Popular
techniques include using comment characters, identical relations (such as 1 = 1), JOIN
query using UNION statements, and inserting or tampering with data using INSERT or
UPDATE statements.
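The "1 = 1" pattern mentioned above can be demonstrated offline with an in-memory SQLite table (table name and data invented). Parameterized queries are the standard defense: the driver passes the input as data rather than splicing it into the SQL text.

```python
import sqlite3

# Offline demonstration of the identical-relation (1 = 1) injection pattern
# and the parameterized-query defense (table and values invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Unsafe: string splicing lets OR '1'='1' match every row
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'").fetchall()

# Safe: the placeholder treats the whole input as a single literal value
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
```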
As a border device, the firewall can block unauthorized network access, web page and
email viruses, illegitimate applications, etc., to protect the intranet.
WAF is short for Web Application Firewall. It is a protection device that protects web
applications by executing a series of HTTP/HTTPS security policies.
Answers:
AB
ABD
According to the Survey Report on Cyber Security Awareness of Chinese Internet Users
released by Qihoo 360, 24.1% of users set a unique password for each account, and
61.4% use different passwords for some of their accounts. However, 13.8% use the same
password for all accounts, which carries high security risks.
Users face various security risks when using public Wi-Fi networks. Statistics show that
when connected to public Wi-Fi networks, most users browse simple web pages,
watch videos, or listen to music. Among them, 25.1% log in to personal mailboxes to
send emails and use social accounts for chatting, while 13.6% do online shopping and
banking. If a user accidentally connects to a phishing or hacked Wi-Fi network, his/her
operations may easily lead to account password theft or even loss of money in his/her
financial accounts.
Social engineering first appeared as a formal discipline in the 1960s.
Pay attention to news regarding cyber security scams so you are always aware of
potential security issues.
Gartner is the world's first information technology research and analysis company. It is
also the world's leading research and advisory company.
Cloud workload protection platforms (CWPPs)
The definition provided by Gartner is abstract and complex. In simple terms, a
CWPP is a platform that protects services running on public and private clouds. The
current practice is to deploy an agent on every operating system that runs the
service to communicate with the management console. In this way, a distributed-
monitoring, centrally-managed client/server (C/S) architecture is formed,
allowing O&M personnel to conveniently monitor the security status of multiple
hosts at the same time and deliver handling policies.
Remote browser
This technology isolates web browsing sessions from endpoints. For
example, the simplest approach is to log in to a remote host graphically and
browse web pages there. Because the graphical login isolates the web browser,
even if the browser is attacked, the users' endpoints are not harmed. After
browsing is complete, the host that performed the browsing task can be reset to its
safest state. This technology can be provided as a service and relies on
virtualization technology. Enterprises lease remote browsing servers, and providers
maintain these browsing servers.
Deception
A number of fake servers are deployed in an enterprise to lure attackers or mislead
them into misjudging the internal network topology of the enterprise, in turn
increasing attack costs and reducing attack effectiveness. If an attacker intrudes
into a fake server, an alarm is generated. A fake server can even be directly
embedded into a switch.
Endpoint Detection and Response (EDR)
The antivirus software deployed on endpoints is the simplest example of EDR.
However, EDR provides more functions, such as identifying suspicious processes and
network connections of a device through behavioral analysis. In addition, EDR can
use big data technologies to collectively analyze behavior of multiple devices for
potential threats. Currently, many mainstream cyber security vendors have launched
EDR solutions.
Network traffic analysis
This technology detects abnormal network data through all-round enterprise or
campus traffic monitoring and big data analysis technologies. For example, the
technology mirrors all traffic passing through the switch to the analysis device for
comprehensive decoding and statistical analysis. The technology then visualizes and
displays the data to allow administrators to intuitively view the security posture of the
entire network.
Managed detection and response
In simple terms, managed detection and response is a "baby-sitter" service for small
and medium-sized enterprises that do not have security protection capabilities. For
example, small and medium-sized enterprises access the network through a proxy of
security vendors. Security vendors can analyze enterprises' network traffic 24/7
through the proxy and clean the threats in a timely manner. In addition, security
vendors can push security event warnings to enterprises so that small and medium-
sized enterprises do not need to purchase or deploy any devices themselves.
Microsegmentation
The microsegmentation here does not refer to isolation between servers in traditional
equipment rooms, but to isolation between applications. In the cloud era, users usually
perceive the applications that provide services rather than the hosts behind them.
Isolation has therefore evolved from the server level to the application level.
Cloud Access Security Brokers (CASBs)
This technology is used to protect cloud security. It intercepts all access to cloud
services through a reverse proxy and conducts traffic security detection and
auditing to detect noncompliant and abnormal access, such as penetration and
leakage, in a timely manner. It is similar to managed detection and response,
except that a CASB protects service users while managed detection and response
protects service providers. This solution is generally provided as a service.
Software defined perimeters (SDPs)
This concept is designed to resolve the problem of flexible resource access
management in the cloud era. It emphasizes replacing traditional physical devices
with software, the same idea that underlies Software-Defined Networking (SDN).
Container security
Traditional security is oriented to hosts and provided for each host. In the cloud era,
applications are containerized, and the concept of a host is weakened. Container
security becomes very important.
The previous section described Gartner top security technologies. Now, we will
summarize other future development trends.
In the future, security protection solutions may not consist of any devices. Instead,
remote security protection and analysis will be provided. Users' network access traffic is
directed to the data centers of the security vendors by proxy for analysis, filtering, and
cleaning. All customers need to do is to configure an address of a security proxy server.
MDR and CASB are examples of this type of security services.
In an enterprise, the antivirus software for endpoints will evolve into the EDR with a
distributed monitoring and centralized analysis architecture. This allows the enterprise to
analyze process behavior and context of all its hosts in a unified manner to more
efficiently detect potential threats.
The security check capability of endpoints is increasingly being used by traditional cyber
security vendors. In the past, endpoint security and network security were two separate
domains. Endpoint antivirus vendors only inspected files in endpoints, and network
security vendors focused on network traffic. Currently, these two functions are being
integrated. Because endpoint security software and network defense devices interwork,
malicious traffic can be directly correlated with the processes and files on the
endpoint, and threats can be traced accurately. In the future, security software on
endpoints will cooperate even more closely with network defense devices.
With the growth of microsegmentation and container security, the concept of host is
weakened while the concept of service is strengthened in the cloud era. Therefore, traffic
management must be implemented at the application level and container level. Network
topologies viewed by O&M personnel are not host-host topologies but service-service
and service-client topologies. In addition, graph theory can be better applied to
security checks to detect abnormal communication paths in cloud data centers and
potential threats in time.
All Gartner top security technologies utilize the cloud. Cloud-based deployment of
security protection solutions is therefore imperative. The future of security will be based
on software (Software-defined Security). All check devices will evolve into software and
run in containers or virtual hosts. Software-based solutions mean O&M personnel can
conveniently change the check methods for different application data flows. For example,
some application data flows need to be checked by the WAF, and some need virus
scanning or IPS checks. Based on the analysis of traffic and process behavior, intelligent
change can be implemented.
Answers:
ACD
D
Process control: Process control refers to controlling and managing resources of running
processes.
Process scheduling: When multiple programs run concurrently, each program requires
processor resources. The OS dynamically allocates processor resources to a process for
process running. This is called process scheduling.
Memory allocation: Memory allocation refers to the process of allocating memory during
process execution.
Memory protection: Before memory is allocated, the OS must be protected from the
impact of the user processes, and the running user process must be protected from
being affected by other user processes.
Memory expansion: The execution of large or multiple programs may require more
memory than the system supports or has installed. Therefore, memory management
needs to support memory expansion.
Buffer management: To mitigate the difference in speed between the CPU and I/O device
and improve the concurrent running of the CPU and I/O device, in modern OSes, almost
all I/O devices use buffers when exchanging data with the processor. The main task of
buffer management is to organize these buffers and provide means to obtain and
release buffers.
Device allocation: When a process sends an I/O request to the system, the device
allocation program allocates the device to the process in accordance with specific policies.
Device virtualization: Device virtualization is the act of creating multiple logical versions
of a physical device for multiple user processes to use.
Modern Windows operating systems use GUIs and are more user-friendly than text-
based DOS operating systems, which required users to enter instructions for use. As
computer hardware and software have developed, Microsoft Windows has evolved from
16 bits to 32 bits and then to 64 bits. Different versions have also appeared, from
Windows 1.0 to Windows 95, Windows 98, Windows ME, Windows 2000, Windows XP,
Windows Server 2003, Windows Vista, Windows 7, Windows 8, Windows 8.1, Windows 10, and
Windows Server. Microsoft continues to develop and improve the Windows operating
systems.
The latest stable Windows operating system is Windows 10, which was released on July
29, 2015. The latest stable operating system of Windows Server is Windows Server 2016,
which was released on September 26, 2016.
The four types of processes in the user mode are:
Fixed system processes, such as the login process and session manager. They are not
Windows services (that is, they are not started through the service control manager
[SCM]).
Service processes, such as the Task Scheduler and print spooler service. Users can run
these processes after login. Many service applications, such as SQL Server and
Exchange Server, run as services.
Applications: Applications can be 32-bit or 64-bit Windows, 16-bit Windows 3.1, 16-bit
MS-DOS, or 32-bit or 64-bit POSIX applications. Note that 16-bit applications can run
only on 32-bit systems.
Environment subsystem server processes, which implement part of the operating
system environment presented to users and applications.
Note the "subsystem DLLs" under the service processes and applications. In Windows,
applications can invoke native Windows services only through the subsystem DLLs. The
role of a subsystem DLL is to translate a documented function into the required
undocumented (undisclosed) system service calls.
The kernel mode consists of the following components:
Device drivers, including hardware device drivers (translating user I/O to hardware
I/O) and software drivers (such as file and network drivers).
Windows and graphics systems, which implement GUI functions and process user
interfaces and rendering.
What is the difference between an operating system designed for servers and an
operating system designed for individuals?
The performance of an operating system designed for servers is more stable than
that of an operating system designed for individuals.
An operating system designed for servers supports special hardware, contains dedicated
functions and management tools for server operation, and has stricter requirements on
security and stability. Therefore, such an operating system may provide relatively lower
speed.
An operating system designed for individuals does not need, nor does it provide,
professional functions. However, it adds media management software and functions for
individual users.
Multics program
In the 1960s, computers were not very popular, and only a handful of people could
use them. Computer systems at that time only supported batch processing, that is,
users submitted a batch of tasks and then waited for the results. Users could not
interact with computers during task processing. A computer took a long time to
process a batch of tasks, during which period the computer could not be used by
others, resulting in a waste of computer resources.
To address this situation, in the mid-1960s, AT&T Bell Labs, the Massachusetts
Institute of Technology (MIT), and General Electric (GE) worked together to develop
an experimental time-sharing multitasking processing system called Multiplexed
Information and Computing Service (Multics), which was intended to allow multiple
users to access a mainframe simultaneously. Due to the project's complexity and
dissatisfaction with its progress, Bell Labs management ultimately withdrew.
UNIX
When the official Linux kernel 1.0 was published in 1994, Linus Torvalds was invited
to find a mascot for Linux. He chose a penguin as the mascot after remembering an
incident in an Australian zoo, where he was bitten by a penguin.
Another more widely accepted view is that the penguin represents the South Pole,
which is shared globally without any one country having ownership. That is, Linux
does not belong to any particular company. It is a technology shared by everyone.
Generally, the Linux system has four major parts: the kernel, shells, file systems, and
applications. Together, these parts allow users to manage files and use the system.
Linux kernel
The kernel is the core of the operating system and provides many basic functions. It
manages the processes, memory, device drivers, files, and network systems of the
operating system, and determines the system performance and stability.
The Linux kernel consists of components such as memory manager, process
manager, device drivers, file systems, and network manager.
Linux Shells
The shell is the user interface (UI) provided by the operating system for users to
interact with the kernel. It receives user-input commands and sends the commands
to the kernel for execution. A shell is a command interpreter. The shell
programming language shares many characteristics of common programming
languages. Shell programs written in this programming language have the same
functions as other applications.
Linux file systems
A file system controls how data is stored on storage devices such as disks. Linux
supports multiple popular file systems, such as EXT2, EXT3, FAT, FAT32, VFAT, and
ISO 9660.
Linux applications
Generally, a standard Linux system has a set of applications, including text editors,
programming languages, X Window, office suites, Internet tools, and databases.
Linux has two defining features:
Everything is a file.
The first feature means that the Linux kernel regards everything (including
commands, hardware and software devices, operating systems, and processes) as a file
of a specific type with its own features. Linux is considered Unix-based largely because
the two share this basic idea.
Free of charge
Linux is a free-of-charge operating system that can be downloaded from the
Internet or obtained in other ways. Its source code can be changed by users as
required. This is a unique advantage that attracts countless programmers
worldwide to modify and change Linux as they want, which in turn helps Linux
continue to develop.
Multi-user and multitasking
Linux allows multiple users to use the same computer without affecting one another,
because each user has their own rights over their own files and devices.
Multitasking is one of the most important features of modern computers: Linux allows
multiple applications to run concurrently without interfering with one another.
User-friendly interfaces
The Linux operating system supports both the command line interface (CLI) and
GUI. On a CLI, users can enter commands. The Linux system also provides an X-
Window system similar to the Windows GUI. Through the X-Window system, users
can use a mouse to perform operations.
Supporting multiple platforms
The Linux system can run on multiple hardware platforms, such as platforms with
x86, 680x0, SPARC, and Alpha processors. Linux can also run as an embedded operating
system on various devices, such as handheld computers, set-top boxes (STBs),
and game consoles. The Linux kernel 2.4, released in January 2001, fully supported
the 64-bit Intel chips. Linux also supports the multi-processor technology, which
allows multiple processors to work at the same time, greatly improving system
performance.
Answer:
1. D
Today, servers are in wide use. Online games, websites, and most software need to be
stored on servers. Some enterprises may deploy their own servers and store the most
important work-related documents on hard disks of these servers.
All servers are, to put it simply, just like the computers we use from day to day, but with
better stability, security, and data processing performance. Our home computers can
also be used as servers if a server system is installed. However, as mentioned already,
servers have high requirements on hardware stability and quality. Common computers
cannot stay powered on for a long time, and important data is generally stored on
servers. Therefore, common computers are not suitable for use as servers.
Availability: A server must be reliable because it provides services to the clients on the
entire network, not to the users who log in to the server. The server must not be
interrupted as long as there are users on the network. In some scenarios, a server cannot
be interrupted even if nobody is using the server. This is because the server must
continuously provide services for users. Servers in some large enterprises, such as
website servers and web servers used by public users, need to provide 24/7 services.
Usability: A server needs to provide services to multiple users and therefore requires high
connection and computing performance. When using a PC, we sometimes feel it is slow.
If a server has the same performance as a PC, can it be accessed by so many users at the
same time? The answer is obviously no. Therefore, the performance of a server must be
much higher than that of a PC. To achieve high-speed performance, a symmetric
multiprocessor is installed and a large number of high-speed memory modules are
inserted to improve the processing capability.
Scalability: With the continuous development of services and the increasing number of
users, servers should be scalable. To ensure high scalability, a server must provide
scalable space and redundant parts (such as disk array space, PCI-E slots, and memory
slots).
Work group server: If the application is not complex, for example, no large database
needs to be managed, a work group server is usually used.
Enterprise-level server: Enterprise-level servers are mainly used in large enterprises and
industries with important services (such as finance, transportation, and communications),
for which a large amount of data must be processed and there is a high requirement for
fast processing as well as high reliability.
x86 server: A CISC server, that is, a PC server. Such a server uses Intel or other processors
that are compatible with the x86 instruction set.
Non-x86 servers: include mainframe, midrange, and Unix servers. They use RISC or EPIC
processors.
General-purpose server: Not designed for a specific service and can provide various
service functions.
Function server: Specially designed for providing one or several functions and supports
plug-and-play, eliminating the need for trained personnel to configure software and
hardware.
What is U?
Common Huawei rack servers include RH1288H, RH2288H, RH5288, RH2488/2488H, and
RH5885H.
In C/S mode, a file server is a computer used for central storage and data file
management. It enables other computers on the same network to access these files. A
file server allows users to share information on the network without the need for floppy
disks or other external storage media to physically move files. Any computer can be set
up as a file server, of which the simplest form is a PC. It processes file requests and sends
them over the network. On a more complex network, a file server may also be a
dedicated network attached storage (NAS) device. It can also be used as a remote hard
disk drive of another computer, and allows users on the network to store files on the
server in the same way as on their own hard disks.
A database server is built with a database system as its foundation. Such a server has the
features of a database system as well as its own unique functions. These functions are:
Database concurrent operations: Because more than one user accesses a database
simultaneously, the database server must support concurrent operations so that
multiple events can be processed at the same time.
An email system consists of three components: user agent, mail server, and mail transfer
protocol.
User agent: application that handles the sending and receiving of emails
Mail server: used to receive emails from a user agent and send the emails to the
receiving agent.
Download: The file is transferred from the FTP server to the PC.
FTP works in client/server mode. The client and server are connected using TCP. An FTP
server mainly uses ports 21 and 20. Port 21 is used to send and receive FTP control
information and keep FTP sessions open. Port 20 is used to send and receive FTP data.
Computers use an IP address to find websites on the internet. However, IP addresses can
be difficult to remember for users. Therefore, an IP address has a corresponding web
address, called a domain name. Computers use a DNS server to convert a domain name
into its corresponding IP address, and find its location on the network.
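As an illustration, the lookup described above can be exercised with Python's standard socket library. The localhost name is used here only because it resolves deterministically to the loopback address; this is a minimal sketch, not a full DNS client:

```python
import socket

def resolve(domain):
    """Ask the system resolver (and ultimately a DNS server) to map a
    domain name to its corresponding IPv4 address."""
    return socket.gethostbyname(domain)

print(resolve("localhost"))  # the loopback name maps to 127.0.0.1
```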
Note:
Worm: A worm is a virus program that can replicate itself and send copies from
computer to computer across network connections.
Virus: A virus is an aggressive program that embeds a copy of itself in other files to infect
computer systems.
An attack that causes denial of service is called a DoS attack. A DoS attack is designed to
disrupt a computer or network service.
Most DoS attacks are based on flooding a network with requests in order to disrupt its
systems; however, it is difficult for individual hackers to overload high-bandwidth
resources. To overcome this disadvantage, DoS attackers develop distributed denial-of-
service (DDoS) attacks.
In a DDoS attack, Trojan horses are used by hackers to control other computers. More
and more computers are turned into zombies and are exploited by hackers to launch
attacks. Hackers utilize many zombies to initiate a large number of attack requests to the
same target, and overwhelm its system. Because the requests come from multiple
computers, they cannot be stopped by locating a single source.
Vulnerabilities are unknown and cannot be discovered in advance.
Vulnerabilities are security risks, which may expose computers to attacks by viruses or
hackers.
In a DoS attack, the attacker obtains the control rights of certain services in the system
to stop the services.
Data leakage is mainly caused by hackers' accessing protected data, such as reading
restricted files and publishing server information.
The existence of vulnerabilities is one of the necessary conditions for successful network
attacks. The key to successful invasions is the early detection and exploitation of
vulnerabilities in the target network system.
Attackers who exploit remote attack vulnerabilities attack remote hosts on networks.
Low-level vulnerabilities can be exploited to read unrestricted files and leak server
information.
Of course, there are more vulnerability categories. For example, the status of a
vulnerability can be known, unknown, or zero-day. Vulnerabilities can also be classified
by user group, such as Windows, Linux, Internet Explorer, and Oracle vulnerabilities.
Vulnerabilities are "inevitable" because of the complexity of systems.
Vulnerability scanning identifies security weaknesses in remote target networks or local
hosts. It can be used for attack simulations and security audits.
Vulnerability scanning is a proactive measure and can effectively prevent hacker attacks.
However, hackers can also use the vulnerability scanning technique to discover
vulnerabilities and launch attacks.
Port scanning detects open ports on a host. Generally, a port segment or port is scanned
for a specified IP address.
Vulnerability scanning detects whether vulnerabilities exist in the target host system.
Generally, scanning is performed for specified vulnerabilities on the target host.
Ping sweep determines the IP address of the target host. Port scanning identifies open
ports on the target host. Operating system detection is performed based on the port
scanning result, and then vulnerability scanning is conducted based on the obtained
information.
Full connection scanning: The scanning host establishes a complete connection with a
specified port on the target host through a TCP three-way handshake. If the port is in
the listening state, the connection succeeds; otherwise, the port is unavailable.
SYN scanning: The scanner sends a SYN packet to the target host. If an RST packet is
received in reply, the port is closed. If the response contains SYN and ACK, the port is
in the listening state. The scanner can then send an RST packet to the target host to
tear down the half-open connection.
Stealth scanning: The scanner sends a FIN packet to the target host. If the FIN packet
reaches a disabled port, the packet is discarded, and an RST packet is returned. If the
port is enabled, the FIN packet is simply discarded.
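The full connection (connect) scan described above can be sketched in Python. The demo listener and the loopback address are illustrative only; a SYN or FIN scan would instead require crafting raw packets, which the standard library does not do:

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Full connection scan: attempt a complete TCP three-way handshake
    with each port and report the ports that accept the connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 when the handshake succeeds,
            # i.e. the port is in the listening state.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Demo: open a listener on an ephemeral loopback port and scan it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
print(tcp_connect_scan("127.0.0.1", [port]))  # the listening port is reported open
listener.close()
```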
Active scanning: network-based. It probes the system by executing script files and
records the system's responses, allowing vulnerabilities to be detected.
A patch is a small piece of cloth used to mend or cover a hole in a garment or blanket. It
also refers to a small program that is released to solve issues (usually discovered by
hackers or virus designers) exposed during the use of a large software system (such as
Microsoft operating system). Bugs cannot be avoided in software. If any bugs are found,
a patch can be developed for the software and installed to fix them. Developers release
patches for download on their official websites.
WannaCry exploits a vulnerability in the Windows service exposed on port 445 to
propagate and self-replicate.
After a computer is penetrated by the ransomware, many types of files in the user host
system, such as photos, pictures, documents, audios, and videos, are encrypted. The file
name extension of the encrypted files is changed to .WNCRY, and a ransom dialog box is
displayed on the desktop, asking the victim to pay $300 (USD) worth of Bitcoin to the
attacker's Bitcoin wallet. The ransom increases over time.
Users must periodically scan their computers, upgrade software to the latest versions,
check software configurations, disable insecure options, and pay attention to the
recommendations of security companies. These are effective means to avoid
vulnerabilities.
Answers:
A
ABCD
The firewall technology is a specific security technology. The term “firewall” was
originally used to describe the wall built between buildings to prevent fire from
spreading.
Control policies:
Advanced settings: specifies detailed inbound & outbound rules and connection security
rules.
Allow another app: adds an app or feature to Allowed apps and features.
You can select apps and features from Allowed apps and features and apply them to a
home/work (dedicated) network or a public network.
When a Windows firewall is enabled, you can determine whether to send a notification
when the firewall blocks new apps.
Enable the firewall for a type of network for security protection, or disable the firewall so
that all apps can pass through.
The window for enabling or disabling change notification is the same as that for enabling
or disabling the Windows firewall.
If firewall rules are not set properly, malicious network attacks may not be blocked, and
users may fail to access the Internet. If such a situation occurs, click Restore defaults to
restore the Windows firewall to the default settings.
If the settings under Allow an app or feature through Windows Firewall cannot meet
your requirements, you can open the Windows Firewall with Advanced Security window
to set more detailed rules.
Settings in this window allow you to customize inbound rules, outbound rules, and
connection security rules, and monitor the firewall.
Program: specifies a rule that controls connections for specific local programs or all
programs when they use public (or home) networks.
Port: specifies a rule that controls connections for specific local ports or all ports when
they use public (or home) networks.
Custom: specifies a rule that controls connections for specific local programs when they
use public (or home) networks through predetermined source and destination ports and
IP addresses.
A Linux firewall consists of two components: netfilter and iptables. Iptables is an interface
between a firewall and users, while netfilter provides firewall functions.
netfilter is a framework in the Linux kernel. It provides a series of tables. Each table
consists of several chains, and each chain consists of several rules.
Iptables is a user-level tool which can add, delete, and insert rules. These rules tell the
netfilter component how to process data packets.
Iptables contains five rule chains:
PREROUTING
INPUT
FORWARD
OUTPUT
POSTROUTING
These are the five rule chains defined by netfilter. Any data packet passing through will
reach one of these chains.
Generally, three chains are allowed in a filter table: INPUT, FORWARD, and OUTPUT.
Generally, three chains are allowed in a nat table: PREROUTING, OUTPUT, and
POSTROUTING.
All the five chains are allowed in a mangle table: PREROUTING, INPUT, FORWARD,
OUTPUT, and POSTROUTING.
When a data packet enters a network adapter, it is first matched with the PREROUTING
chain. The system determines the subsequent processing according to the destination
address of the packet. Possible processing:
If the destination address of the packet is the local host, the system sends the
packet to the INPUT chain to match the packet with rules in this chain. If the packet
matches a rule, the system sends the packet to the corresponding local process. If
no match is found, the system discards the packet.
If the destination address of the packet is not the local host, the packet needs to
be forwarded. The system sends the packet to the FORWARD chain to match it
against the rules in this chain. If the packet matches a rule, the system forwards
the packet out through the POSTROUTING chain. If no match is found, the system
discards the packet.
If the packet is locally generated, the system sends it directly to the OUTPUT
chain to match it against the rules in this chain. If the packet matches a rule, the
system sends it out through the POSTROUTING chain. If no match is found, the
system discards the packet.
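The decision process above can be modeled as a short Python sketch. This is a simplified illustration of which chains a packet traverses, not the real kernel logic, and the addresses are hypothetical:

```python
def chains_traversed(dst, local_addresses, locally_generated=False):
    """Simplified model of the netfilter decision: return the rule
    chains a packet passes through, based on where it is headed."""
    if locally_generated:
        # Locally generated packets go straight to the OUTPUT chain.
        return ["OUTPUT", "POSTROUTING"]
    if dst in local_addresses:
        # Destined for this host: delivered to a local process via INPUT.
        return ["PREROUTING", "INPUT"]
    # Destined elsewhere: the packet is forwarded.
    return ["PREROUTING", "FORWARD", "POSTROUTING"]

local = {"192.0.2.10"}  # hypothetical address of this host
print(chains_traversed("192.0.2.10", local))    # ['PREROUTING', 'INPUT']
print(chains_traversed("198.51.100.7", local))  # ['PREROUTING', 'FORWARD', 'POSTROUTING']
```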
Scanners are the main part of antivirus software and are used to scan for viruses. The
antivirus effect of a product depends on how advanced its scanner's technology and
algorithms are. Therefore, most antivirus software has more than one scanner.
The virus signature database stores virus signatures, which are classified into memory
signatures and file signatures. Generally, file signatures exist in files that are not
executed. Memory signatures generally exist in a running application program.
If antivirus software has a strong unpacking capability, it unpacks the virus file, and then
scans and kills the virus. In this way, only one signature record is enough. This reduces
the occupation of system resources by the antivirus software, and greatly improves the
antivirus software's capability to scan and kill viruses.
Currently, a more advanced cloud antivirus technology can be used to access the virus
signature database on the cloud in real time. Users do not need to update their local
virus signature database frequently.
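At its simplest, a file-signature check of the kind described above amounts to searching a byte stream for known patterns. The sketch below uses made-up signature names and byte patterns purely for illustration; real scanners use far more sophisticated matching:

```python
# Hypothetical signature database mapping virus names to byte patterns.
SIGNATURE_DB = {
    "Demo.Worm.A": b"\xde\xad\xbe\xef",
    "Demo.Trojan.B": b"EVIL_MARKER",
}

def scan_bytes(data, db=SIGNATURE_DB):
    """Return the names of all signatures found in the byte stream."""
    return [name for name, sig in db.items() if sig in data]

print(scan_bytes(b"header EVIL_MARKER payload"))  # ['Demo.Trojan.B']
print(scan_bytes(b"clean file contents"))         # []
```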
Clear: Remove viruses from infected files to restore the files.
Delete: Delete virus files. These files are not infected files but contain viruses
themselves, so they cannot be cleaned.
Forbid access: Do not access virus files. After a virus file is detected, if you choose not to
process the file, the antivirus software may deny access to this file. When you attempt to
open such a file, an error message "not a valid win32 application" is displayed.
Isolate: After a virus file is deleted, the file is moved to the isolation area. You can
retrieve deleted files from the isolation area. Files in the isolation area cannot run.
No process: If you are not sure whether a file contains viruses, do not process it
temporarily.
Answers:
BC
ABC
Originally, a firewall referred to a wall built between houses to prevent fire from
spreading. The firewall technology is an important part of security technology. This
course discusses hardware firewalls. Hardware firewalls integrate security technologies to
protect private networks (computers). To do this, they use the dedicated hardware
structure, high-speed CPUs, and embedded OS, and support various high-speed
interfaces (LAN interfaces). Hardware firewalls are independent of OSs (such as HP-UX,
SunOS, AIX, and Windows NT) and computer platforms (such as the IBM 6000 and common PCs). Hardware
firewalls are used to solve network security problems in a centralized manner. They are
applicable to various scenarios and provide efficient filtering. In addition, they provide
security features such as access control, identity authentication, data encryption, VPN
technology, and address translation. Users can configure security policies based on their
network environments to prevent unauthorized access and protect their networks.
The modern firewall system should not be just an "entrance barrier". Firewalls should be
the access control points of networks. All data flows entering and leaving the network
should pass through the firewall that serves as a gateway for incoming and outgoing
information. Therefore, a firewall not only protects the security of an intranet on the
Internet, but also protects the security of hosts on the intranet. All computers in a
security zone configured on a firewall are considered "trustworthy" and communications
between them are not affected by the firewall. However, communications between
networks that are separated by a firewall must follow the policies configured on the
firewall.
Firewalls have been developed for three generations, and their classification methods are
various. For example, firewalls can be classified into hardware firewalls and software
firewalls by form or standalone firewalls and network firewalls by protection object. In
general, the most popular classification method is by access control mode.
Network firewalls can protect the entire network in a distributed manner. Their features
are as follows:
The packet filtering firewall is simple in design, easy to implement, and cost-effective.
Packet filtering does not check session status or analyze data, which makes it easy for attackers to evade. For example, an attacker can set the IP address of their host to an IP address permitted by a packet filter so that packets from the attacker's host easily pass through the packet filter.
Note: A multi-channel protocol example is FTP. Based on the negotiation of the control
channel, FTP generates the dynamic data channel port. Then, data exchange is
performed on the data channel.
The proxy applies to the application layer of the network. The proxy checks the services
directly transmitted between intranet and Internet users. After the request passes the
security policy check, the firewall establishes a connection with the real server on behalf
of the Internet user, forwards the Internet user's request, and sends the response packet
returned by the real server to the Internet user.
The proxy firewall can completely control the exchange of network information and the
session process, achieving high security. Its disadvantages are as follows:
Software implementation limits its processing speed. Therefore, the proxy firewall is
vulnerable to DoS attacks.
A stateful inspection firewall uses various session tables to trace activated TCP
sessions and pseudo UDP sessions. The ACL determines the sessions to be
established. A data packet is forwarded only when it is associated with a session. A
pseudo UDP session monitors the status of the UDP connection process. The
pseudo UDP session is a virtual connection established for the UDP data flow when
UDP packets are processed (UDP is a connectionless protocol).
The stateful inspection firewall intercepts data packets at the network layer, extracts the state information required by security policies from the application layer, and saves the information in the session table. The firewall then analyzes the session table together with subsequent connection requests related to the data packets to make proper decisions.
Stateful inspection firewalls have the following advantages:
High security: The connection status list is dynamically managed. After a session is
complete, the temporary return packet entry created on the firewall is closed,
ensuring real-time security of the internal network. In addition, the stateful
inspection firewall uses real-time connection status monitoring technology to
identify connection status factors such as response in the session table, which
enhances system security.
Mode 1: A firewall only forwards packets and does not perform routing. The two service
networks interconnected by the firewall must be in the same network segment. In this
mode, the upstream and downstream interfaces of the firewall work at Layer 2 and do
not have IP addresses.
This networking mode avoids issues caused by topology changes. Deploy a firewall like a
bridge on the network. There is no need to modify existing configurations. The firewall
filters IP packets and protects users on the intranet.
Mode 2: A firewall is located between the intranet and the Internet. The upstream and
downstream service interfaces on the firewall work at Layer 3 and must be configured
with IP addresses in different network segments. The firewall performs routing on the
intranet and Internet, like a router.
In this mode, the firewall supports more security features, such as NAT and UTM, but the
original network topology must be modified. For example, the intranet users must
modify their gateway configurations, or the route configuration on a router must be
modified. Therefore, the design personnel should consider factors such as network
reconstruction and service interruption when selecting a networking mode.
As a network security protection mechanism, packet filtering mainly controls the
forwarding of various traffic on the network.
As the packet-filtering firewall matches packets with packet-filtering rules one by one
and checks the packets, the forwarding efficiency is low. Currently, a firewall usually uses
the stateful inspection mechanism to check the first packet of each connection. If the
first packet passes the check (matching a packet-filtering rule), the firewall creates a
session and directly forwards subsequent packets according to the session.
The basic function of firewalls is to protect a network from attacks by any untrusted
network while permitting legitimate communication between two networks. Security
policies check passing data flows. Only the data flows that match the security policies are
allowed to pass through firewalls.
By using firewall security policies, you can control the access rights of the intranet to the
Internet and control the access rights of the subnets of different security levels on the
intranet. In addition, security policies can control the access to a firewall, for example, by
restricting the IP addresses that can be used to log in to the firewall through Telnet and
the web UI, and by controlling the communication between the NMS/NTP server and the
firewall.
Security policies define rules for processing data flows on a firewall. The firewall
processes data flows according to the rules. Therefore, the core functions of security
policies are as follows: Filter the traffic passing through the firewall according to the
defined rules, and determine the next operation performed on the filtered traffic based
on keywords.
In firewall applications, security policies are the basic means of access control for
the data flows passing through the firewall, and they determine whether subsequent
application data flows are processed. The firewall analyzes traffic and retrieves the
attributes, including the source security zone, destination security zone, source IP
address, source region, destination IP address, destination region, user, service (source
port, destination port, and protocol type), application, and schedule.
Early packet-filtering firewalls match packets one by one with packet filtering rules.
Firewalls check all received packets to decide whether to allow them to pass through.
This mechanism greatly affects the forwarding efficiency and creates forwarding
bottlenecks on networks.
Therefore, more and more firewalls use the stateful inspection mechanism for packet
filtering. The stateful inspection mechanism checks and forwards packets based on data
flows; a firewall checks the first packet of a data flow with packet-filtering rules, and
records the result as the status of the data flow. For subsequent packets of the data flow,
the firewall determines whether to forward (or perform content security detection) or
discard the packets according to the status. This "status" is presented as a session entry.
This mechanism improves the detection rate and forwarding efficiency of firewall
products and has become the mainstream packet filtering mechanism.
Generally, a firewall checks five elements (quintuple) in an IP packet. They are the source
IP address, destination IP address, source port number, destination port number, and
protocol type. By checking the quintuple of each IP packet, the firewall can determine
the IP packets in one data flow. In addition to the quintuple, an NGFW also checks users,
applications, and schedules of packets.
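The quintuple matching described above can be sketched in a few lines (a simplified illustration in Python; the class and field names are invented and do not correspond to any firewall API):

```python
# Minimal sketch of stateful-inspection forwarding keyed by the quintuple.
# All names are illustrative, not a real firewall interface.

class SessionTable:
    def __init__(self):
        self.sessions = set()

    @staticmethod
    def key(pkt):
        # The five elements that identify a data flow.
        return (pkt["src_ip"], pkt["dst_ip"],
                pkt["src_port"], pkt["dst_port"], pkt["protocol"])

    def handle(self, pkt, first_packet_allowed):
        k = self.key(pkt)
        if k in self.sessions:
            return "forward"            # fast path: matches an existing session
        if first_packet_allowed(pkt):   # slow path: check packet-filtering rules
            self.sessions.add(k)        # record the flow state as a session
            return "forward"
        return "discard"

table = SessionTable()
policy = lambda p: p["dst_port"] == 80  # e.g. permit HTTP only (hypothetical rule)
syn = {"src_ip": "10.1.1.2", "dst_ip": "8.8.8.8",
       "src_port": 1025, "dst_port": 80, "protocol": "TCP"}
print(table.handle(syn, policy))        # first packet is checked against the rules
print(table.handle(syn, policy))        # subsequent packet hits the session entry
```

This is why a stateful inspection firewall outperforms a pure packet filter: only the first packet of a flow pays the cost of rule matching.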
Generally, in the three-way handshake phase, fields in addition to the quintuple in TCP
data packets are calculated and checked. After the three-way handshake succeeds, the
firewall matches subsequent packets with the quintuple in the session table to determine
whether to allow the packets to pass through.
Inspection on the packets that match a session takes much shorter time than on the
packets that do not match any sessions. After the first packet of a connection is
inspected and considered legitimate, a session is created and most subsequent packets
are not inspected. This is where a stateful inspection firewall outperforms a packet
inspection firewall in inspection and forwarding efficiency.
For TCP packets:
If stateful inspection is enabled and the first packet is a SYN packet, a session is
created. If the first packet is not a SYN packet and does not match any session, the
packet is discarded, and no session is created.
UDP is a connectionless protocol. If a UDP packet does not match any session and
passes the inspections, a session is created.
If stateful inspection is enabled and an ICMP echo reply matches no session (no
corresponding echo request was recorded), the reply is discarded.
If stateful inspection is disabled, an ICMP echo reply that matches no session is
processed as the first packet.
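The first-packet rules above can be summarized as a small decision function (an illustrative sketch only; a real firewall handles more cases than this simplified model covers):

```python
# Sketch of the per-protocol first-packet rules (illustrative, not device code).
def first_packet_ok(pkt, stateful_enabled=True):
    proto = pkt["protocol"]
    if proto == "TCP":
        # With stateful inspection enabled, only a SYN may create a session.
        return (not stateful_enabled) or pkt.get("flags") == "SYN"
    if proto == "UDP":
        # UDP is connectionless: any packet that passes inspection
        # may create a session.
        return True
    if proto == "ICMP":
        # A reply that answers no recorded request is discarded when
        # stateful inspection is enabled, and treated as a first packet
        # when it is disabled.
        return (not stateful_enabled) or pkt.get("type") != "echo-reply"
    return False

print(first_packet_ok({"protocol": "TCP", "flags": "ACK"}))  # False
print(first_packet_ok({"protocol": "UDP"}))                  # True
```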
Sessions are the basis of a stateful inspection firewall. A session is created for each data
flow passing through the firewall. With the quintuple (source and destination IP
addresses, source and destination ports, and protocol number) used as the key value, a
dynamic session table is created to ensure the security of data flows forwarded between
zones. The NGFW extends the quintuple with two additional elements: user and application.
Source IP address
Source port
Destination IP address
Destination port
Protocol number
User
Application
Description of the display firewall session table command output
Distinguish between applications (such as web IM and web game) using the same
protocol (such as HTTP), achieving refined network management.
Inspect content security and block viruses and hacker intrusions to better protect
internal networks.
Procedure for creating a security zone:
Click Add.
If the action is permit, the firewall inspects the traffic content. If the traffic passes the
content security inspection, the traffic is allowed to pass through. If not, the traffic is
denied.
Click Add and set the parameters of the address (or address group).
Click Add to set the parameters of the region (or region group).
Procedure for configuring services and service groups on the web UI:
Click Add.
Click OK.
Procedure for configuring a security policy on the web UI:
Click Add.
Click OK.
Configuration roadmap:
Plan security policies: The network segment 192.168.5.0/24 is permitted, but several
IP addresses within the range are denied. In this case, configure two forwarding
policies. The first forwarding policy denies the specific IP addresses and the second
forwarding policy allows the entire network segment. If the configuration sequence
is reversed, the denied IP addresses will match the permit policy first and their
packets will pass through the firewall.
Determine security zones, connect interfaces, and assign the interfaces to the
security zones.
Use security policies to determine the permissions of user groups and then those of
privileged users. You must specify the source security zones and addresses of users,
destination security zones and addresses of users, services and applications that
the users can access, and schedules in which the policies take effect. To allow a
certain type of network access, set the action of the security policy to permit. To
disable network access, set the action of the security policy to deny.
Determine which types of traffic need content security inspection and which items
need to be inspected.
List the parameters in the security policies, sort the policies from the most specific
to the least specific, and configure security policies in this order.
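The ordering requirement in the roadmap above can be illustrated with a first-match evaluation sketch (the hosts .2 and .3 are hypothetical stand-ins for the denied addresses):

```python
# Sketch of first-match security-policy evaluation: the deny rules for
# specific hosts must precede the permit rule for the whole segment.
import ipaddress

policies = [
    ("deny",   ipaddress.ip_network("192.168.5.2/32")),   # hypothetical denied host
    ("deny",   ipaddress.ip_network("192.168.5.3/32")),   # hypothetical denied host
    ("permit", ipaddress.ip_network("192.168.5.0/24")),
]

def evaluate(src_ip):
    addr = ipaddress.ip_address(src_ip)
    for action, net in policies:
        if addr in net:
            return action           # the first matching rule wins
    return "deny"                   # implicit default deny

print(evaluate("192.168.5.2"))      # hits the specific deny rule
print(evaluate("192.168.5.100"))    # falls through to the segment permit
```

If the permit rule were listed first, the denied hosts would match it before their deny rules were ever reached, which is exactly the mistake the roadmap warns against.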
The configuration of the ip_deny address group is as follows:
FTP establishes a TCP control channel with predefined ports and a dynamically
negotiated TCP data channel. For a common packet filtering firewall, you cannot obtain
the port number of the data channel when configuring security policies, and therefore
cannot determine the ingress of the data channel. In this case, precise security policies
cannot be configured. ASPF resolves this problem. It detects application-layer
information and dynamically creates and deletes temporary rules based on packet
content to allow or deny packets.
According to the figure, the server map entry is generated during the dynamic detection
of the FTP control channel. When a packet passes the firewall, ASPF compares the packet
with the specified access rules. If the rules permit, the packet is checked; otherwise, the
packet is discarded. If the packet is used to establish a new control or data connection,
ASPF dynamically creates a server map entry. Return packets can pass through the
firewall only when they have a matching server map entry. When processing return
packets, the firewall updates the status table. When a connection is closed or times out,
the corresponding status table is deleted to block unauthorized packets. Therefore, the
ASPF technology can accurately protect the network even in complicated application
scenarios.
The server map is a mapping relationship. If a data connection matches a dynamic server
map entry, the firewall does not need to search for a packet filtering policy. This
mechanism ensures normal forwarding of some special applications. In another case, if a
data connection matches the server map table, the IP address and port number in the
packet are translated.
The server map is used only for checking the first packet. After a connection is
established, packets are forwarded based on the session table.
Currently, the firewall generates server map entries in the following situations: server
map entries generated when the firewall forwards the traffic of multi-channel protocols,
such as FTP and RTSP, after ASPF is configured; triplet server map entries generated
when the firewall forwards the traffic of the Simple Traversal of UDP Through NAT
(STUN) protocols, such as MSN and TFTP, after ASPF is configured; static server map
entries generated when NAT server mapping is configured; dynamic server map entries
generated when NAT No-PAT is configured; dynamic server map entries generated when
NAT full-cone is configured; dynamic server map entries generated when PCP is
configured; static server map entries generated when server load balancing (SLB) is
configured; dynamic server map entries generated when NAT Server is configured in DS-
Lite scenarios; static server map entries generated when static NAT64 is configured.
After NAT Server is configured, Internet users can initiate access requests to the intranet
server. The IP addresses and ports of the users are unknown, but the IP address of the
intranet server and the port are known. Therefore, after NAT Server is successfully
configured, the firewall automatically generates the server map entry to save the
mapping relationship between the public and private IP addresses. The firewall translates
the IP address of the packet and forwards the packet according to the mapping
relationship. A pair of forward and reverse static server map entries are generated for
each valid NAT Server configuration. For SLB, as multiple intranet servers use the same
public IP address, the firewall creates server map entries similar to those of NAT Server. If
the number of intranet servers is N, the firewall creates one server map entry for forward
traffic and N server map entries for reverse traffic.
If you configure NAT and specify the No-PAT parameter, the firewall translates only the
IP addresses, not the port numbers. The ports used by a private IP address remain
unchanged on the mapped public address, so Internet users can initiate connections to
any port used by an intranet user. After NAT No-PAT is configured, the
firewall creates server map entries for the data flows to maintain the mapping between
the private and public IP addresses. Then the firewall translates IP addresses and
forwards packets according to the mappings.
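The one-to-one address mapping that No-PAT maintains can be modeled as follows (an illustrative Python sketch, not device code; the pool addresses are invented):

```python
# Sketch of NAT No-PAT: one private IP maps to one public IP; ports pass
# through unchanged, so the table holds only address pairs.

class NoPat:
    def __init__(self, pool):
        self.pool = list(pool)      # free public addresses
        self.fwd = {}               # private -> public
        self.rev = {}               # public -> private (the server-map role)

    def translate(self, priv_ip):
        if priv_ip not in self.fwd:
            if not self.pool:
                # All public addresses allocated: no translation possible.
                raise RuntimeError("address pool exhausted")
            pub = self.pool.pop(0)
            self.fwd[priv_ip] = pub
            self.rev[pub] = priv_ip
        return self.fwd[priv_ip]

nat = NoPat(["202.202.1.10", "202.202.1.11"])   # hypothetical public pool
print(nat.translate("192.168.1.2"))             # allocates the first pool address
print(nat.translate("192.168.1.2"))             # the same mapping is reused
```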
Port identification, also called port mapping, is used by the firewall to identify
application-layer protocol packets that use non-standard ports. Port mapping supports
the following application-layer protocols: FTP, HTTP, RTSP, PPTP, MGCP, MMS, SMTP,
H323, SIP, and SQLNET.
Port mapping is implemented based on ACLs. Only the packets matching an ACL rule are
mapped. Port mapping uses basic ACLs (ACLs 2000 to 2999). In ACL-based packet
filtering, the firewall matches the destination IP addresses of packets with the source IP
addresses in basic ACLs.
An ACL is a collection of sequential rules used by a device to filter network traffic. Each
rule contains a filter element that is based on criteria such as the source IP address,
destination IP address, and port number of a packet. An ACL classifies packets based on
rules. After the rules are applied to a router, the router determines whether a packet is
permitted or denied in accordance with these rules.
ACLs are classified into the following types:
Basic ACLs (2000 to 2999): A basic ACL matches traffic only by source IP address
and schedule. It applies to simple matching scenarios.
Advanced ACLs (3000 to 3999): Traffic is matched by source IP address, destination
IP address, ToS, schedule, protocol type, priority, ICMP packet type, and ICMP
packet code. In most functions, advanced ACLs can be used for accurate traffic
matching.
MAC address-based ACLs (4000 to 4999): Traffic is matched by source MAC
address, destination MAC address, CoS, and protocol code.
Hardware packet filtering ACLs (9000 to 9499): After a hardware packet filtering ACL is
delivered to an interface card, the interface card filters packets using hardware, which is
faster than common software-based packet filtering and consumes less system resources.
Hardware packet filtering ACLs can match traffic based on the source IP address,
destination IP address, source MAC address, destination MAC address, CoS, and protocol
type.
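The ACL number ranges listed above can be captured in a small helper (a sketch; the range boundaries are taken directly from the list):

```python
# Classify an ACL by its number range, following the categories above.
def acl_type(number):
    if 2000 <= number <= 2999:
        return "basic"
    if 3000 <= number <= 3999:
        return "advanced"
    if 4000 <= number <= 4999:
        return "mac-based"
    if 9000 <= number <= 9499:
        return "hardware packet filtering"
    raise ValueError("number outside the ranges covered here")

print(acl_type(2001))   # basic
print(acl_type(3100))   # advanced
```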
Port mapping applies only to the data within an interzone; therefore, when configuring
port mapping, you must configure security zones and the interzone.
The firewall supports the persistent connection function only for TCP packets.
When stateful inspection is disabled, the firewall also creates session entries for non-first
packets. In this case, you do not need to enable the persistent connection function.
A
In the early 1990s, relevant Request For Comments (RFC) documents began raising the
possibility of IP address exhaustion. As more and more IPv4 addresses are requested,
driven in part by the Internet's rapid growth due to TCP/IP-based web applications,
sustainable development of the Internet is becoming a major challenge. To address this
challenge, IPv6 was developed as the successor to IPv4. In contrast to IPv4, which defines
an IP address as a 32-bit value, IPv6 addresses are 128 bits long. For network
applications, IPv6 has a significantly larger address space compared to IPv4. However,
IPv6 has a long way to go before it can completely replace IPv4, due to the immature
technologies and huge update costs associated with IPv6.
Because IPv6 will not completely replace IPv4 immediately, certain workarounds are
required to extend the use of IPv4. For example, classless inter-domain routing (CIDR),
variable length subnet mask (VLSM), and NAT can be used.
Private addresses are used to implement address reuse and increase the utilization of IP
address resources. Defined in RFC 1918, the following private addresses are reserved for
private networks:
Class A: 10.0.0.0 to 10.255.255.255 (10.0.0.0/8)
Class B: 172.16.0.0 to 172.31.255.255 (172.16.0.0/12)
Class C: 192.168.0.0 to 192.168.255.255 (192.168.0.0/16)
Private addresses are used on private networks, whereas public addresses are used on
public networks (for example, the Internet). To allow communication between private
and public addresses, NAT must be used to translate the addresses.
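The three RFC 1918 ranges above can be checked with Python's standard ipaddress module (a minimal sketch):

```python
import ipaddress

# The three RFC 1918 private ranges listed above.
PRIVATE_NETS = [ipaddress.ip_network(n) for n in
                ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PRIVATE_NETS)

print(is_private("172.20.1.1"))   # True: inside 172.16.0.0/12
print(is_private("172.32.0.1"))   # False: just outside the /12
```

Note that the class B range covers only 172.16 through 172.31, which the /12 prefix expresses exactly.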
In addition to address reuse, NAT continues to evolve and provides other advantages.
The main advantages and disadvantages of NAT are as follows:
Advantages:
Numerous hosts on a local area network (LAN) can use a few public addresses to
access external resources, and internal World Wide Web (WWW), File Transfer
Protocol (FTP), and Telnet services can be used by external networks.
Internal and external network users are unaware of the IP address translation
process.
Privacy protection is provided for internal network users. External network users
cannot directly obtain the IP addresses and service information of internal network
users.
Multiple internal servers can be configured for load balancing, reducing the
pressure of each server in case of heavy traffic.
Disadvantages:
NAT cannot be performed if the packet is encrypted. For example, on an encrypted FTP
connection, the IP address carried in the PORT command cannot be translated.
Network supervision becomes more difficult. For example, tracing a hacker who
attacks a server on the public network from a private network is difficult because
the IP address of the hacker has been translated by NAT.
NAT translates the IP addresses in IP packet headers to other IP addresses so that users
on internal networks can access external networks. Generally, every NAT device maintains
an address translation table. The IP addresses of packets that pass through the NAT
device and require address translation will be translated against this table. The NAT
mechanism involves the following processes:
Translate the IP addresses and port numbers of internal hosts into the external
addresses and port numbers of the NAT device.
Translate the external addresses and port numbers into the IP addresses and port
numbers of internal hosts.
That is, NAT implements translation between private address+port number and public
address+port number.
NAT devices are located between internal and external networks. The packets exchanged
between internal hosts and external servers all pass through the NAT devices. Common
NAT devices include routers and firewalls.
NAT is divided into three categories based on application scenarios.
Source NAT: enables multiple private network users to access the Internet at the
same time.
Address pool mode: The public addresses in the address pool are used to
translate users' private addresses. This mode applies when many private
network users access the Internet.
Outbound interface address mode (easy IP): The IP addresses of internal hosts
are translated into the public address of an outbound interface on the public
network. This mode applies when the public address is dynamically allocated.
Static mapping (NAT server): maps one private address to one public address.
This mode applies when public network users access servers on private
networks.
Source NAT translates the source IP address (the internal host's address) in IP packet
header into a public address. This enables numerous internal hosts to access external
resources through limited public addresses and effectively hides the host IP addresses on
the LAN.
The address pool mode without port translation is implemented using a NAT address
pool that contains multiple public addresses. Only IP addresses are translated, and only
one private address is mapped to a public address. If all addresses in the address pool
are allocated, NAT cannot be performed for the remaining internal hosts until the
address pool has available addresses.
The address pool mode with port translation is implemented using a NAT address pool
that contains one or more public addresses. Addresses and port numbers are both
translated so that private addresses share one or more public addresses.
Because addresses and port numbers are both translated, multiple users on a private
network can share one public address to access the Internet. The firewall can distinguish
users based on port numbers, so numerous users can access the Internet at the same
time. This technology uses Layer 4 information to extend Layer 3 addresses. Theoretically,
65,535 private addresses can be translated into the same public address because 65,535
ports are available for each address. The firewall can map data packets from different
private addresses to different port numbers of one public address. Compared with one-
to-one or multi-to-multi address translation, this mode greatly improves IP address
utilization. Therefore, the address pool mode with port translation is most commonly
used.
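The port-translation mechanism described above can be modeled as a small mapping table (an illustrative sketch; the public address and starting port are invented):

```python
# Sketch of NAPT (address pool with port translation): many private
# address/port pairs share one public address, distinguished by port.
import itertools

class Napt:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.ports = itertools.count(10000)   # next free public port (illustrative)
        self.out = {}    # (priv_ip, priv_port) -> (pub_ip, pub_port)
        self.back = {}   # reverse table for return traffic

    def translate_out(self, priv_ip, priv_port):
        key = (priv_ip, priv_port)
        if key not in self.out:
            mapped = (self.public_ip, next(self.ports))
            self.out[key] = mapped
            self.back[mapped] = key
        return self.out[key]

    def translate_in(self, pub_ip, pub_port):
        # Return traffic is mapped back to the original private endpoint.
        return self.back.get((pub_ip, pub_port))

napt = Napt("202.202.1.1")                       # hypothetical public address
print(napt.translate_out("192.168.1.2", 1084))   # first flow gets a public port
print(napt.translate_out("192.168.1.3", 1084))   # same public IP, different port
```

Two private hosts using the same source port still get distinct public ports, which is how the firewall tells their return traffic apart.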
Easy IP translates private addresses into the public address of the outbound interface,
without the need of configuring a NAT address pool. Addresses and port numbers are
both translated so that private addresses share public addresses of outbound interfaces.
In an Ethernet data frame, the IP header contains a 32-bit source address and a 32-bit
destination address, and the TCP header contains a 16-bit source port number and a 16-
bit destination port number.
Many protocols use the data payload of IP packets to negotiate new ports and IP
addresses. After the negotiation is complete, communications parties establish new
connections for transmitting subsequent packets. The negotiated ports and IP addresses
are often random. Therefore, an administrator cannot proactively configure NAT rules,
because errors may occur with these protocols during NAT.
Common NAT can translate the IP addresses and port numbers in UDP or TCP packet
headers, but not fields in application layer data payloads. In many application layer
protocols, such as multimedia protocols (H.323 and SIP), FTP, and SQLNET, the TCP/UDP
payload contains address or port information that common NAT cannot translate. NAT ALG can parse
the application layer packet information of the multi-channel protocol and translate
required IP addresses and port numbers or specific fields in payloads to ensure proper
communication at the application layer.
For example, the FTP application requires both data connection and control connection.
The data connection is dynamically established according to the payload field in the
control connection. Therefore, the ALG needs to translate the payload field information
to ensure the proper establishment of the data connection.
The ASPF function is proposed to implement the forwarding policy of the application
layer protocol. ASPF analyzes application layer packets and applies corresponding packet
filtering rules, whereas NAT ALG applies corresponding NAT rules to application layer
packets. Generally, ASPF interworks with NAT ALG. Therefore, you can run only one
command to enable both functions at the same time.
As shown in this figure, the host on the private network needs to access the FTP server
on the public network. The mapping between the private address 192.168.1.2 and the
public address 8.8.8.11 is configured on the NAT device. If the ALG does not process the
packet payload, the server cannot perform addressing based on the private address after
receiving the PORT packet from the host. As a result, a data connection cannot be
established. The communication process consists of four stages:
The host and FTP server establish a control connection through the TCP three-way
handshake.
The host sends a Port packet carrying the specified destination address and port
number to the FTP server to establish a data connection.
The ALG-enabled NAT device translates the private address and port number
carried in the packet payload into the public address and port number. That is, the
NAT device translates the private address 192.168.1.2 in the payload of the received
PORT packet into the public address 8.8.8.11, and the port number from 1084 to 12487.
The FTP server parses the PORT packet and initiates a data connection to the host,
with the destination address of 8.8.8.11 and destination port number of 12487.
Generally, the source port number of the packet is 20. However, the source port
numbers of data connections initiated by some servers are larger than 1024
because the FTP protocol does not have strict requirements. In this example, the
WFTPD server is used and the source port number is 3004. Since the destination
address is a public address, the data connection can be established and the host
can access the FTP server.
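The payload rewriting in stage 3 can be sketched for the FTP PORT command, whose payload encodes the address as comma-separated octets and the port as two bytes (a simplified illustration using the addresses from this example; a real ALG also adjusts TCP sequence numbers when the payload length changes):

```python
# Sketch of what an FTP ALG does to a PORT command: the private address
# and port in the payload are rewritten to their NATed values.
def rewrite_port_command(payload, new_ip, new_port):
    if not payload.startswith("PORT "):
        return payload                      # only PORT commands are rewritten
    ip_part = new_ip.replace(".", ",")
    # The port is encoded as two bytes: high byte, low byte.
    return "PORT %s,%d,%d\r\n" % (ip_part, new_port // 256, new_port % 256)

# Before NAT the host advertises 192.168.1.2:1084 (1084 = 4*256 + 60).
original = "PORT 192,168,1,2,4,60\r\n"
translated = rewrite_port_command(original, "8.8.8.11", 12487)
print(translated)   # PORT 8,8,8,11,48,199  (12487 = 48*256 + 199)
```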
In the NAT server function, NAT hides the topology of an internal network. That is, NAT
masks the internal hosts. In practice, external users may need to access the internal
hosts, for example, a WWW server. External hosts, however, do not have routes destined
for the internal hosts. In this case, the NAT server function can be applied.
NAT allows you to add internal servers flexibly. For example, a public address such as
202.202.1.1 or an IP address and port number such as 202.202.1.1:8080 can be used as
the external address of the web server.
When external users access internal servers, the following operations are involved:
The firewall translates destination addresses of external users' request packets into
private addresses of internal servers.
The firewall supports security zone-based internal servers. For example, if the firewall
provides access to external users on multiple network segments, you can configure
multiple public addresses for an internal server based on security zone configurations. By
mapping different levels of the firewall's security zones to different external network
segments and configuring different public addresses for the same internal server based
on security zones, you can enable external users on different network segments to
access the same internal server.
Generally, if strict packet filtering is configured, the device permits only internal users to
proactively access external networks. In practice, however, this may prevent successful
file transfers in FTP. For example, when FTP in port mode is used, the client needs to
proactively initiate a control connection to the server, and the server needs to proactively
initiate a data connection to the client. If packet filtering configured on the device allows
packets through in only one direction, FTP file transfer will fail.
To resolve such issues, the USG device introduces the server map table. The server map
is based on triplets and is used to record data connection mappings negotiated using
control data or address mappings configured for NAT to allow external users to access
internal networks.
If a data connection matches an entry in the server map table, the device will forward the
associated packet without looking up the session table.
After the NAT server is configured, the device automatically generates server map entries
that record the mappings between public and private addresses.
If no-reverse is not specified, a pair of forward and return static server map entries is
generated for each valid NAT server. If no-reverse is specified, the valid NAT server
generates only the forward static server map entry. If a NAT server is deleted, the
associated server map entries are deleted at the same time.
After No PAT is configured, the device creates a server map table for data flows
generated by the configured multi-channel protocol.
When an internal server proactively accesses an external network, the device cannot
translate the private address of the internal server into a public address. Therefore, the
internal server cannot initiate a connection to the external network. In this case, you can
specify the no-reverse parameter to prevent internal servers from proactively accessing
external networks.
If an internal server advertises multiple public addresses for external networks through
the NAT server function with no-reverse specified, the internal server cannot access
external networks proactively. To enable an internal server to access an external network,
configure a source NAT policy. This policy is configured between the security zone of the
internal server and the security zone of the external network to translate the private
address of the internal server to a public address. The source NAT policy can reference
the global address or another public address.
The source security zone is usually the zone where the pre-NAT private address resides.
In this example, it is the trust zone. The destination security zone is usually the zone
where the post-NAT public address resides. In this example, it is the untrust zone.
During NAT server configuration, the external address is the public IP address that the
internal server provides for external users.
[USG6600]security-policy
[USG6600-policy-security]rule name natpolicy
[USG6600-policy-security-rule-natpolicy]source-address 192.168.0.0 24
[USG6600-policy-security-rule-natpolicy]action permit
Source NAT is configured to implement NAT for internal users attempting to access the
external network. The data flows from a high-level security zone to a low-level security
zone. Therefore, the source address should be a network segment that belongs to the
internal network, and the address pool allocated to internal users should be an external
network segment for access to Internet resources.
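A minimal source NAT policy along these lines (the zone names, address pool natpool, and addresses are illustrative, and the exact syntax varies by software version) would translate internal users' addresses before their traffic reaches the Internet:
[USG6600]nat address-group natpool
[USG6600-address-group-natpool]section 202.169.10.100 202.169.10.110
[USG6600]nat-policy
[USG6600-policy-nat]rule name trust_to_untrust
[USG6600-policy-nat-rule-trust_to_untrust]source-zone trust
[USG6600-policy-nat-rule-trust_to_untrust]destination-zone untrust
[USG6600-policy-nat-rule-trust_to_untrust]source-address 192.168.0.0 24
[USG6600-policy-nat-rule-trust_to_untrust]action source-nat address-group natpool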
If multiple internal servers use the same public address, you can run the nat server
command multiple times to configure them, and distinguish them using protocols.
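For example (addresses and names are illustrative), two internal servers can share one public address when they are distinguished by protocol or port:
[USG6600]nat server web protocol tcp global 202.169.10.10 www inside 192.168.1.2 www
[USG6600]nat server ftp protocol tcp global 202.169.10.10 ftp inside 192.168.1.3 ftp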
When either communication party accesses the other party in a twice NAT scenario, the
destination address is not a real address but a NATed address. Generally, internal
networks belong to high-priority zones, and external networks belong to low-priority
zones. When an external user in a low-priority zone accesses the public address of an
internal server, the destination address of the packet is translated into the private
address of the internal server. The route destined for the public address must be
configured on the internal server.
To avoid configuring a route destined for the public address, you can configure NAT
from a low-priority zone to a high-priority zone. If NAT is required for access within the
same security zone, configure intrazone NAT.
In NAT server configuration, the internal server can send the response packet only after
the route destined for the public address is configured. To simplify configuration without
configuring the route destined for the public address, you can configure the firewall to
translate the source address of the external user. The source address after NAT must be
on the same network segment as the private address of the internal server. In this way,
the internal server sends the response packet to the device by default, and the device
then forwards the response packet.
If both parties that require address translation are in the same security zone, intrazone
NAT is involved. For example, when both the user and the FTP server are in the Trust zone
and the user accesses the public address of the FTP server, all the packets exchanged
between the user and the FTP server pass through the firewall. In this case, both the NAT
server function and intrazone NAT must be configured.
Intrazone NAT is used when the internal user and server are in the same security zone,
but the internal user is required to access only the public address of the server. In
intrazone NAT, the destination address of the packet sent to the internal server must be
translated from a public address into a private address, and the source address must be
translated from a private address into a public address.
As a security device, a USG is usually located at a service connection point, between a to-
be-protected network and the unprotected network. If only one USG is deployed at a
service connection point, network services may be interrupted due to a single point of
failure no matter how reliable the USG is. To prevent network service interruptions due
to a single point of failure, we can deploy two firewalls to form a dual-system hot
standby.
A common solution to a single point of failure in standard router networking is to set up
a protection mechanism based on the failover between links running a dynamic routing
protocol. However, this protection mechanism has limitations. If the dynamic routing
protocol is unavailable, services may be interrupted due to a link fault. To address this
problem, another protection mechanism, Virtual Router Redundancy Protocol (VRRP), is
introduced. VRRP is a basic fault-tolerant protocol. Compared with failover that relies on
dynamic routing protocol convergence, VRRP delivers a shorter failover duration, and it
provides link protection even when no dynamic routing protocol is available.
VRRP group: A group of routers in the same broadcast domain form a virtual router. All
the routers in the group provide a virtual IP address as the gateway address for the
intranet.
Master router: Among the routers in the same VRRP group, only one router is active, the
master router. Only the master router can forward packets with the virtual IP address as
the next hop.
Backup router: Except for the master router, all other routers in a VRRP group are on
standby.
The master router periodically sends a Hello packet to the backup routers in
multicast mode, and the backup routers determine the status of the master router
based on the Hello packet. Because VRRP Hello packets are multicast packets, the
routers in the VRRP group must be interconnected through Layer 2 devices. When
VRRP is enabled, the upstream and downstream devices must have the Layer 2
switching function. Otherwise, the backup routers cannot receive the Hello packets
sent by the master router. If the networking requirement is not met, we should not
use VRRP.
If multiple zones on firewalls require the hot standby function, you must configure
multiple VRRP groups on each firewall.
As USG firewalls are stateful firewalls, they require the forward and reverse packets to
pass through the same firewall. To meet this requirement, the status of all VRRP groups
on a firewall must be the same. That is, all VRRP groups on the master firewall must be in
the master state, so that all packets can pass through the firewall, and the other firewall
functions as the backup firewall.
As shown in the figure, assume that the VRRP status of USG A is the same as that of USG
B. Therefore, all interfaces of USG A are in the master state, and all interfaces of USG B
are in the backup state.
PC1 in the Trust zone accesses PC2 in the Untrust zone. The packet forwarding path
is (1)-(2)-(3)-(4). USG A forwards the access packet and dynamically generates a
session entry. When the reverse packet from PC2 reaches USG A through (4)-(3), it
matches the session entry and therefore reaches PC1 through (2)-(1). Similarly, PC2
and the server in DMZ can communicate with each other.
Assume that the VRRP status of USG A is different from that of USG B. For example,
if the interface connecting USG B to the Trust zone is in the backup state but the
interface in the Untrust zone is in the master state, PC1 sends a packet to PC2
through USG A, and USG A dynamically generates a session entry. The reverse
packet from PC2 returns through the path of (4)-(9). However, USG B does not have
any session entry for this data flow. If there is no packet filtering rule on USG B to
allow the packet to pass, USG B will discard the packet. As a result, the session is
interrupted.
Cause of the problem: The packet forwarding mechanisms are different:
Router: Each packet is forwarded based on the routing table. After a link switchover,
subsequent packets can still be forwarded.
Stateful firewall: If the first packet is allowed through, the firewall creates a
quintuple session connection accordingly. Subsequent packets (including reverse
packets) matching this session entry can pass through the firewall. If a link
switchover occurs, subsequent packets cannot match the session entry, resulting in
service interruption.
Note that if NAT is configured on a router, similar problems occur because a new entry is
created after NAT.
The requirements for the application of VRRP on firewalls are as follows:
Multiple VRRP groups on a firewall can be added to a VGMP group, which manages the
VRRP groups in a unified manner. VGMP controls the status switchover of VRRP groups
in a unified manner, ensuring the consistent status of the VRRP groups.
You can specify the VGMP group status to determine the active or standby firewall.
If the VGMP group on a firewall is in the active state, all VRRP groups in the VGMP group
are in the active state. This firewall is the active firewall, and all packets pass through this
firewall. In this case, the VGMP group on the other firewall is in the standby state, and
the firewall is the standby firewall.
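On the USG6000 series, for instance, a VRRP group is bound to the VGMP group by specifying active or standby when the group is created in the interface view (the interfaces, vrids, and virtual IP addresses below are illustrative):
[USG_A]interface GigabitEthernet 1/0/1
[USG_A-GigabitEthernet1/0/1]vrrp vrid 1 virtual-ip 10.1.1.1 active
[USG_A]interface GigabitEthernet 1/0/3
[USG_A-GigabitEthernet1/0/3]vrrp vrid 2 virtual-ip 202.169.10.1 active
Because both groups declare active, their status is managed together by the active VGMP group, which keeps all VRRP groups on one firewall in a consistent state.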
Each firewall has an initial VGMP group priority. If an interface or a board of the firewall
is faulty, the VGMP group priority of the firewall decreases.
The initial VGMP group priority of the USG6000 and NGFW Module is 45000. The initial
VGMP group priority of the USG9500 depends on the number of cards on the line
processing unit (LPU) and the number of CPUs on the service processing unit (SPU).
Similar to VRRP, the VGMP active firewall regularly sends Hello packets to the VGMP
standby firewall to inform the latter of its running status, including the priority and the
status of member VRRP groups. The member status is dynamically adjusted, so that the
two firewalls can perform active/standby switchovers.
Different from VRRP, after the VGMP standby firewall receives a Hello packet, it replies
with an ACK message, carrying its own priority and status of member VRRP groups.
By default, VGMP Hello packets are sent every second. If the standby firewall does not
receive any Hello packets from the active firewall within three Hello packet periods, the
standby firewall considers the peer faulty and switches to the active state.
Status consistency management
Whenever the status of a VRRP group changes, the VGMP group must be notified
of the change. The VGMP group determines whether to allow the master/backup
switchover of the VRRP group. If the status switchover is necessary, the VGMP
group instructs all its VRRP groups to perform the switchover. Therefore, after a
VRRP group is added to a VGMP group, its status cannot be switched separately
from the group.
Preemption management
VRRP groups support preemption. When the faulty master firewall recovers, its
priority is restored, so the firewall can preempt to become the master firewall
again.
After a VRRP group is added to a VGMP group, the preemption function of the
VRRP group becomes invalid. The VGMP group determines whether to preempt.
The preemption function of VGMP groups is similar to that of VRRP groups. If the
faulty VRRP group in a VGMP group recovers, the priority of the VGMP group
restores to the original value. In this case, the VGMP group determines whether to
preempt to be the active firewall.
In the hot standby networking, if a fault occurs on the active firewall, all packets are
switched to the standby firewall. As USG firewalls are stateful firewalls, if the standby
firewall does not have the connection status data (session table) of the original active
firewall, traffic switched to the standby firewall cannot pass through the firewall. As a
result, the existing connection is interrupted. To restore services, the user must re-initiate
the connection.
The HRP module provides the basic data backup mechanism and transmission function.
Each application module collects the data that needs to be backed up by the module and
submits the data to the HRP module. The HRP module sends the data to the
corresponding module of the peer firewall. The application module parses the data
submitted by the HRP module, and adds it to the dynamic running data pool of the
firewall.
Backup data: TCP/UDP session table, server-map entries, dynamic blacklist, NO-PAT
entries, and ARP entries.
Backup direction: The firewall with the active VGMP group backs up the required data to
the peer.
Backup channel: Generally, the ports that directly interconnect the two firewalls are used
as the backup channel, which is also called the heartbeat link (VGMP uses this channel
for communication).
Usually, backup data accounts for 20% to 30% of service traffic. You can determine the
number of member Eth-Trunk interfaces based on the amount of backup data.
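As a sketch of a heartbeat link over an Eth-Trunk (the interface numbers and addresses are illustrative, and the exact syntax may vary by software version):
[USG_A]interface Eth-Trunk 1
[USG_A-Eth-Trunk1]ip address 10.10.10.1 24
[USG_A-Eth-Trunk1]trunkport GigabitEthernet 1/0/6
[USG_A-Eth-Trunk1]trunkport GigabitEthernet 1/0/7
[USG_A]hrp interface Eth-Trunk 1 remote 10.10.10.2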
Invalid: The physical status is Up and protocol status is Down. The local heartbeat
interface is incorrectly configured. For example, the heartbeat interface is a Layer 2
interface, or no IP address is configured for the heartbeat interface.
Down: The physical and protocol statuses of the local heartbeat interface are both Down.
Peerdown: The physical and protocol statuses are both Up. The local heartbeat interface
cannot receive heartbeat link detection reply packets from the peer heartbeat interface.
In this case, the firewall sets the status of the local heartbeat interface to peerdown. Even
so, the local heartbeat interface continues sending heartbeat link detection packets and
expects to resume the heartbeat link when the peer heartbeat interface is brought Up.
Ready: The physical and protocol statuses are both Up. The local heartbeat interface
receives heartbeat link detection reply packets from the peer heartbeat interface. In this
case, the firewall sets the status of the local heartbeat interface to ready, indicating that
it is ready to send and receive heartbeat packets. In addition, it continues sending
heartbeat link detection packets to keep the heartbeat link status.
Running: When multiple local heartbeat interfaces are in the ready state, the firewall sets
the status of the first configured one to running. If only one interface is in the ready state,
the firewall sets its status to running. The running interface is used to send HRP
heartbeat packets, HRP data packets, HRP link detection packets, VGMP packets, and
consistency check packets.
Other local heartbeat interfaces in the ready state serve as backups and take up services
in sequence (based on the order of configuration) when the running heartbeat interface
or the heartbeat link fails.
To conclude, heartbeat link detection packets are used to detect whether the peer
heartbeat interface can receive packets and determine whether the heartbeat link is
available. The local heartbeat interface sends heartbeat link detection packets as long as
its physical and protocol statuses are both Up.
As described in previous sections, HRP heartbeat packets are used to detect whether the
peer device (peer VGMP group) is working properly. These packets can be sent only by
the running heartbeat interface in the VGMP group on the active device.
Automatic backup
After automatic backup is enabled, every time you execute a command that can be
backed up on a firewall, the command is immediately backed up to the other
firewall.
After automatic backup is enabled, the active device periodically backs up status
information that can be backed up to the standby device. Therefore, the status
information of the active device is not immediately backed up after its creation.
Instead, the information is backed up to the standby device around 10 seconds
after its creation.
Automatic backup does not back up the following sessions:
Sessions created by traffic destined for the firewall itself, for example, sessions
created for administrator logins to the firewall
Sessions created by UDP first packets that are not matched by subsequent packets
(these can be backed up by quick session backup)
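Automatic backup is typically controlled by a single command of roughly the following shape (keywords should be verified against your software version):
[USG_A]hrp auto-sync [ config | connection-status ]
Without a keyword, both configuration commands and status information are backed up automatically.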
Manual batch backup
Manual batch backup needs to be triggered by the configuration of the manual batch
backup command. This backup starts immediately and applies to scenarios where
manual backup is required when the configurations of two devices are asynchronous.
After the manual batch backup command is executed, the designated active device
immediately synchronizes its configuration commands to the designated standby
device.
After the manual batch backup command is executed, the designated active device
immediately synchronizes its status information to the designated standby device
with no need to wait for an automatic backup period.
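A manual batch backup is triggered on the active device with a command along these lines (keywords may vary by version):
[USG_A]hrp sync config
[USG_A]hrp sync connection-status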
Quick session backup
Quick session backup applies when the forward and reverse paths are inconsistent
on load balancing networks. Inconsistent forward and reverse paths may occur on
load balancing networks because both devices are active and able to forward
packets. If status information is not synchronized in a timely manner, reverse
packets may be discarded if they do not match any sessions, causing service
interruption. Therefore, quick session backup is required by the firewalls to back up
status information in real time.
For timely synchronization, this function synchronizes status information but not
configuration. The synchronization of configuration commands can be undertaken
by using automatic backup.
After quick session backup is enabled, the active firewall can synchronize all status
information, including those not supported by automatic session backup, to the
standby firewall. Therefore, sessions can be synchronized to the standby firewall
immediately when they are set up on the active firewall.
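On USG firewalls, quick session backup is typically enabled with a single command of this shape (verify against your software version):
[USG_A]hrp mirror session enable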
Automatic synchronization of active/standby firewall configurations after restart
In the hot standby networking, if one firewall is restarted, the other firewall
processes all services during the restart. In this period, the firewall that processes
the services may have configurations added, deleted, or modified. To ensure that
the active and standby firewalls have the same configurations, after the firewall is
restarted, configurations are automatically synchronized from the firewall that
processes services.
Only configurations that can be backed up can be synchronized, such as security
policies and NAT policies. Configurations that cannot be backed up, such as OSPF
and BGP, remain unchanged.
Configuration synchronization can take up to one hour, subject to the amount of
configuration. During the synchronization, you are not allowed to execute
configuration commands that can be backed up between firewalls.
In dual-system hot standby networking, the firewalls usually work in routing mode, and
the downstream switches separately connect to the firewalls through two links. In normal
cases, USG A functions as the active firewall. If the uplink or downlink of USG A goes
Down, USG B automatically becomes the active firewall, and traffic from the switches is
transmitted through USG B.
By default, the master VRRP group sends VRRP packets every second. You can adjust the
interval for sending VRRP packets in the interface view. Run the following command to
change the interval for sending VRRP packets:
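The command itself is not reproduced in this text; on Huawei devices it generally takes the following shape in the interface view (the vrid and the 5-second interval are illustrative):
[USG_A-GigabitEthernet1/0/1]vrrp vrid 1 timer advertise 5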
VRRP can work with IP-link. If the uplink is disconnected, a master/backup VRRP
switchover is triggered. Run the following command to configure an IP-link in the
interface view:
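The command is not reproduced in this text. As a rough sketch (the probe destination, link number, and exact keywords are illustrative and should be verified against the product documentation), an IP-link that probes the upstream gateway and is tracked for the VRRP switchover might look like:
[USG_A]ip-link check enable
[USG_A]ip-link 1 destination 202.169.10.2 interface GigabitEthernet 1/0/3 mode icmp
[USG_A]interface GigabitEthernet 1/0/1
[USG_A-GigabitEthernet1/0/1]vrrp vrid 1 track ip-link 1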
The preemption function of VGMP groups is enabled by default, and the default
preemption delay is 60 seconds. Run the following command to set a preemption delay
for a VGMP group:
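As an illustrative sketch (the 120-second delay is an example value):
[USG_A]hrp preempt delay 120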
After HRP backup is enabled on both USGs, the two USGs negotiate an active device
(with HRP_A displayed) and a standby device (with HRP_S displayed). After the
negotiation is complete, the active device begins to synchronize configuration
commands and status information to the standby device.
If the standby device can be configured, all information that can be backed up can be
directly configured on the standby device, and the configuration on the standby device
can be synchronized to the active device. If conflicting settings are configured on the
active and standby devices, the most recent setting overrides the previous one.
When USGs work on a load-balancing network, the forward and reverse paths of
packets may be inconsistent. To prevent service interruptions, you must enable
quick session backup to ensure that session information on a USG can be
synchronized to the other USG.
Configuration of VRRP group 2 on USG_A:
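The figure carrying this configuration is not reproduced in this text; on the USG6000 series it would take roughly the following shape (the interface and virtual IP address are illustrative):
[USG_A]interface GigabitEthernet 1/0/3
[USG_A-GigabitEthernet1/0/3]vrrp vrid 2 virtual-ip 202.169.10.1 active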
[USG_B]hrp enable
[USG_B]hrp interface GigabitEthernet 1/0/6 //Configure this interface as the heartbeat interface.
View the status of the standby firewall:
Click Check to check the consistency of configurations on the active and standby
firewalls.
Questions and answers:
Single-Choice: A
The growing number of application-layer attacks not only brings additional threats to
network security but also places further strain on network access control. Enterprises
want the capability to precisely identify users, ensure legitimate applications operate
normally, and block applications that may bring security risks. However, IP addresses and
ports are no longer sufficient to distinguish users and applications. Traditional access
control policies based on quintuples cannot adapt to changes in the current network
environment.
Example:
When accessing the Internet, a user needs to enter a user name and a password for
authentication.
After authentication, the firewall starts to authorize the user and grant permissions
for access to different resources, such as baidu.com or google.com.
During user access, accounting is performed to record the user's operations and
online duration.
Authentication mode:
What I know: includes the information that a user knows (password and PIN)
What I have: includes the information that a user has (token cards, smart cards, and
various bank cards)
What I am: includes unique biometric features that a user has (fingerprint, voice, iris,
and DNA)
Authorizes users to access certain services, including public services and sensitive
services.
Authorizes users to use certain commands for device management. For example,
authorizes users to use only display commands but not delete or copy commands.
The accounting function covers the following aspects:
Local authentication:
Configures user information, including the user name, password, and attributes of
local users, on a Network Access Server (NAS). Local authentication offers fast
processing and low operation cost. However, the capacity to store user information
is limited by the hardware.
Server authentication:
Configures user information, including the user name, password, and attributes, on
a third-party authentication server. AAA can remotely authenticate users through
the Remote Authentication Dial In User Service (RADIUS) protocol or the Huawei
Terminal Access Controller Access Control System (HWTACACS) protocol.
RADIUS is one of the most common protocols used to implement AAA. It is widely
applied to the NAS system and defines how user authentication and accounting
information and results are transferred between the NAS and RADIUS server. The NAS
transfers user authentication and accounting information to the RADIUS server. The
RADIUS server receives connection requests from users, authenticates the users, and
then returns authentication results to the NAS.
RADIUS uses User Datagram Protocol (UDP) at the transport layer to provide excellent
real-time performance. In addition, RADIUS supports a retransmission mechanism and
backup server mechanism to ensure high availability.
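On a USG acting as the RADIUS client, the server is typically referenced through a template of roughly the following shape (the server address, port, and shared key are illustrative, and exact keywords should be verified against your software version):
[USG]radius-server template rd1
[USG-radius-rd1]radius-server authentication 10.1.1.1 1812
[USG-radius-rd1]radius-server shared-key cipher Key@123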
The process of transmitting RADIUS messages between the server and the client is as
follows:
When logging in to a network device, such as the USG or an access server, the user
sends the user name and password to the network device.
After the RADIUS client (a NAS) on this network device receives the user name and
password, it sends an authentication request to the RADIUS server.
If the request is valid, the server completes authentication and sends the required
authorization information to the client. If the request is invalid, the server sends the
authorization failure information to the client.
A RADIUS message contains the following fields:
Code: refers to the message type, such as an access request or access permit.
Access-Request
Access-Accept
Accounting-Request (start)
Accounting (start)-Response
Accounting-Request (stop)
Accounting (stop)-Response
Access ends.
Code: indicates the packet type, which occupies 1 byte. The definitions are as follows:
Information about an enterprise employee, for example, a name, email address, and
mobile number
Physical information of a device, for example, the IP address, location, vendor, and
purchase time
A user enters the user name and password to initiate a login request. The firewall
establishes a TCP connection with the LDAP server.
The firewall sends a binding request message carrying the administrator's DN and
password to the LDAP server. This message is used to obtain the query permission.
After the binding succeeds, the LDAP server returns a binding reply message to the
firewall.
The firewall uses the user name entered by the user to send a user DN search
request message to the LDAP server.
The LDAP server searches for the user based on the user DN. If the search succeeds,
the LDAP server sends a search reply message.
The firewall sends a user DN binding request message to the LDAP server. This
message contains the obtained user DN and the entered password. The LDAP
server checks whether the password is correct.
After the binding succeeds, the LDAP server returns a binding reply message to the
firewall.
After the authorization succeeds, the firewall notifies the user that the login
succeeds.
Local authentication
A user sends the user name and password that identify the user to the firewall
through the portal authentication page. The firewall stores the password and
performs authentication. This method is called local authentication.
Server authentication
A user sends the user name and password that identify the user to the firewall
through the portal authentication page. The firewall does not store the password.
Instead, the firewall sends the user name and password to a third-party
authentication server for it to perform authentication. This method is called server
authentication.
SSO
A user sends the user name and password that identify the user to a third-party
authentication server. After the user passes the authentication, the third-party
authentication server sends the user's identity information to the firewall. The
firewall only records the user's identity information but does not perform
authentication. This process is called SSO.
SMS authentication
A user accesses the portal authentication page and requests an SMS verification
code. Authentication succeeds after the user enters the correct verification code on
the portal authentication page. This process is called SMS authentication.
In user management, users are allocated to different user groups. They are authenticated,
labelled, and assigned with different permissions and applications for the purpose of
security.
Example:
Employees (users) are added to user groups. For users and user groups, network
behavior control and audit can be performed, and policies can be customized on a
GUI. In addition, reports are provided to present user information, and Internet
access behavior analysis is performed for tracing and auditing user behavior
(instead of IP addresses). This facilitates policy-based application behavior control
in scenarios where users' IP addresses frequently change.
Similar horizontal groups can be used for enterprises that use third-party authentication
servers to store organizational structures. For policies based on cross-department
security groups, the security groups created on the firewall must be consistent with the
organizational structures on the authentication servers.
Authentication domain
The firewall identifies the authentication domains contained in user names and assigns
users that require authentication to the corresponding authentication domains. The
firewall then authenticates users based on the authentication domain configurations.
The planning and maintenance of the organizational structure is critical in ensuring that
differentiated network access permissions can be properly assigned to users or
departments. The firewall provides an organizational structure tree that resembles a
common administrative structure, which facilitates planning and management.
A user or user group can be referenced by security policies or traffic limiting policies, so
that user-specific access and bandwidth control can be implemented.
Console
Web
Telnet
FTP
SSH
An Internet access user is the identity entity for network access and also a basic
unit for network permission management. The device authenticates the user
accessing the Internet and performs the control action specified in the policy
applied to the user.
A remote access user is mainly used to access intranet resources after accessing the
device through:
SSL VPN
L2TP VPN
IPSec VPN
PPPoE
For device management, an administrator can log in through:
Console
The console port provides the CLI mode for device management. It is usually used
when the device is configured for the first time or if the configuration file of a
device is lost.
If the device fails to start normally, you can diagnose the fault or enter the
BootROM system through the console port to upgrade the system.
Web
The web UI enables you to log in to the device remotely through HTTP/HTTPS for
device management.
Telnet
FTP
The FTP administrator uploads and downloads files in the device's storage space.
SSH
Console:
Telnet:
[USG] aaa
[USG-aaa-manager-user-client001]level 3
[USG] aaa
After the preceding configurations are complete, run the client software supporting
SSH to establish an SSH connection.
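The fragments above omit most of the steps. A hedged sketch of a complete SSH administrator configuration (the user name and password are illustrative, and exact keywords vary by software version) is:
[USG]rsa local-key-pair create
[USG]stelnet server enable
[USG]user-interface vty 0 4
[USG-ui-vty0-4]authentication-mode aaa
[USG-ui-vty0-4]protocol inbound ssh
[USG]aaa
[USG-aaa]manager-user client001
[USG-aaa-manager-user-client001]password cipher Admin@123
[USG-aaa-manager-user-client001]service-type ssh
[USG-aaa-manager-user-client001]level 3
[USG]ssh user client001 authentication-type password
[USG]ssh user client001 service-type stelnet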
Enable the web management function.
[USG] aaa
[USG-aaa]manager-user webuser
[USG-aaa-manager-user-webuser]service-type web
[USG-aaa-manager-user-webuser]level 3
SSO of Internet access users: Users authenticated by other authentication systems do not
need to be authenticated again by the firewall. The firewall can obtain information
linking the authenticated users to their IP addresses to implement user-specific policy
management.
This method applies to scenarios where an authentication system has been deployed
before user authentication is deployed on the firewall.
TSM SSO: A user is authenticated by Huawei TSM (Policy Center or Agile Controller).
RADIUS SSO: A user accesses the NAS which forwards the user's authentication
request to the RADIUS server for authentication.
Built-in portal authentication for Internet access users: The firewall provides a built-in
portal authentication page (https://Interface IP address:8887 by default) to authenticate
users. The firewall forwards the authentication request to the local user database or
authentication server. This method applies to scenarios where the firewall authenticates
users.
Redirected authentication: When a user accesses the HTTP service, the firewall
pushes the authentication page to the user to trigger user authentication.
When a user accesses the HTTP service, the firewall pushes the user-defined portal
authentication page to the user to trigger user authentication.
Authentication exemption for Internet access users: Users can be authenticated and
access network resources without entering user names and passwords. Authentication
exemption does not mean that users are not authenticated. In authentication exemption,
users do not need to enter user names or passwords, and the firewall can obtain
information for identifying a user via their IP address to implement user-specific policy
management.
User names are bidirectionally bound with IP/MAC addresses. The firewall identifies
the bindings to automatically authenticate users. This method applies to top
executives.
SMS authentication: The firewall authenticates users based on verification codes. A user
obtains an SMS verification code on the SMS authentication portal page provided by the
firewall and enters the verification code for authentication. After passing authentication,
the user logs in to the firewall as a temporary user.
Redirected authentication: When a user accesses the HTTP service, the firewall
pushes the authentication page to the user to trigger user authentication.
Remote access user authentication: The firewall authenticates VPN access users during
the connection. To authenticate the VPN access users before they access network
resources, you can configure secondary authentication.
The user logs in to the AD domain. Then the AD server returns a login success
message and delivers a login script to the user.
The user's PC executes the login script and sends the user login information to the
AD monitor.
The AD monitor connects to the AD server to query information about the user. If
the user's information is displayed, the user login information is forwarded to the
firewall.
The firewall extracts the user-IP address mapping from the user login information
and adds the mapping to the online user list.
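The extraction step above can be pictured as maintaining an online user table keyed by IP address. The message format and field names below are hypothetical; a real firewall parses the AD monitor's proprietary login messages.

```python
# Sketch of the firewall's online user list for AD SSO (hypothetical
# 'user=...;ip=...' message format; real devices use proprietary messages).

def parse_login_message(msg):
    """Extract the user-to-IP mapping from a login message such as
    'user=alice;ip=10.1.1.5'."""
    fields = dict(part.split("=", 1) for part in msg.split(";"))
    return fields["user"], fields["ip"]

online_users = {}  # IP address -> user name

def handle_login(msg):
    user, ip = parse_login_message(msg)
    online_users[ip] = user  # add the mapping to the online user list

handle_login("user=alice;ip=10.1.1.5")
print(online_users["10.1.1.5"])  # alice
```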
If the packets exchanged between the user and the AD server, between the user and the
AD monitor, and between the AD monitor and the AD server need to pass the firewall,
ensure that the authentication policies do not authenticate the packets and the security
policies allow the packets through.
The detailed login process is as follows:
A user logs in to the AD domain. The AD server records user login information into
a security log.
The AD monitor regularly queries security logs generated by the AD server from
the time when the AD SSO is enabled.
The AD monitor forwards the user login message to the firewall. The user goes
online through the firewall.
When the firewall is deployed between users and the AD server, the firewall can obtain
authentication packets. If the authentication packets do not pass through the firewall,
the messages carrying authentication results from the AD server must be mirrored to the
firewall.
The firewall cannot obtain user logout messages. Users go offline only when their
connections time out.
Authentication packets may be maliciously tampered with, and user identities may
be forged. Therefore, exercise caution when using this mode.
In addition to AD SSO, the firewall also provides TSM SSO and RADIUS SSO.
After receiving HTTP packets whose destination port is 80 from an Internet access user, a
firewall redirects the user to an authentication web page and triggers identity
authentication. The user can access network resources after being authenticated.
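The redirect itself is an ordinary HTTP 302 response pointing the browser at the portal. The sketch below builds such a response; the portal URL is a placeholder, not a fixed firewall address.

```python
# Minimal sketch of redirected (captive-portal) authentication: answer an
# intercepted HTTP request with a 302 pointing at the authentication page.
# The portal URL is a placeholder; the firewall's built-in portal listens at
# https://<interface-IP>:8887 by default.

def build_redirect(portal_url):
    """Return the raw HTTP response that redirects the browser to the portal."""
    return (
        "HTTP/1.1 302 Found\r\n"
        f"Location: {portal_url}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )

response = build_redirect("https://192.168.0.1:8887/portal")
print(response.splitlines()[0])  # HTTP/1.1 302 Found
```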
The firewall supports user-defined Portal authentication. There are currently two types of
user-defined Portal authentication.
SSL VPN
Users log in to the authentication page provided by the SSL VPN module to trigger
authentication. After authentication is successful, the users can access the
headquarters' network resources.
L2TP VPN
In automatic LAC dial-up mode: At the access phase, the LAC at the branch office
triggers authentication through dial-up and establishes an L2TP VPN tunnel with
the LNS. At the resource access phase, users in branch offices can trigger user-
initiated or redirected authentication. After authentication is successful, the users
can access the headquarters' network resources.
IPSec VPN
After a branch office establishes an IPSec VPN tunnel with headquarters, users in
the branch office can trigger user-initiated or redirected authentication. After the
authentication succeeds, the users can access headquarters' network resources.
The Secure Sockets Layer (SSL) VPN, as a VPN technology based on Hypertext Transfer
Protocol Secure (HTTPS), works between the transport layer and the application layer to
provide confidentiality. It provides web proxy, network extension, file sharing, and port
forwarding services.
The handshake procedure for SSL communications is as follows:
The SSL client initiates a connection to the SSL server and requests that the server
authenticate itself.
The server authenticates itself by sending its digital certificate.
The server sends a request for authentication of the client's certificate.
After the authentication succeeds, the hash function used for the integrity check
and the message encryption algorithm are negotiated. Generally, the client
provides the list of all encryption algorithms it supports, and the server selects
the strongest one.
The client and server generate a session key as follows:
The client generates a random number, encrypts it using the public key of the
server (obtained from the server certificate), and sends it to the server.
The server replies with random data (encrypted with the client's key if one is
available; otherwise, sent in plaintext).
The hash function is used to generate a key from random data.
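The client side of this negotiation can be sketched with Python's standard ssl module. Creating the context prepares the offered cipher list and enforces the server-authentication step; no network connection is made here.

```python
import ssl

# Client-side setup for the handshake described above. The context holds
# the cipher suites the client will offer; the server picks one during the
# handshake. Certificate verification enforces server authentication.

context = ssl.create_default_context()    # prepares the client's offer
context.check_hostname = True             # verify the server's identity
context.verify_mode = ssl.CERT_REQUIRED   # require the server certificate

# The cipher suites the client would offer in its ClientHello:
offered = [c["name"] for c in context.get_ciphers()]
print(len(offered) > 0)  # True
```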
As shown in the figure, an enterprise has deployed a firewall as the VPN access gateway
that connects the intranet to the Internet. After remote employees access the firewall
through an SSL VPN, they can use the network extension service to access network
resources.
Redirected authentication: When a user accesses HTTP service and the access data flow
matches an authentication policy, the firewall pushes an authentication page to the user.
Authentication exemption: A user can access network resources without entering a user
name or password if specified in the authentication exemption policy. The firewall
identifies these users based on their IP/MAC address bindings.
SSO: The login of SSO users is not under the control of authentication policies, but user-
specific policy control can only be implemented when user service traffic matches an
authentication policy.
The following types of traffic do not trigger authentication even if they match the
specified authentication policy:
The DNS packet from an HTTP service data flow that triggers authentication. This
immunity only lasts until the user is authenticated and logs in.
Portal authentication
Authentication exemption
No authentication is implemented on data flows that meet the specified conditions.
This action applies to the following scenarios:
Data flows that do not need to be authenticated by the firewall, such as data
flows between intranets.
The firewall has a default authentication policy with all matching conditions set to any
and the action set to No authentication.
Configure a user or user group: Before implementing user- or user group-based
management, you must create a user or user group first. A user or user group can be
manually configured, imported locally, or imported from a server.
Manually configure a user or user group:
The firewall has a default authentication domain. You can create users or user
groups as subordinates of the authentication domain. If other authentication
domains are required, configure them first.
This step is mandatory when you need to create user groups based on the
enterprise's organizational structure and to manage network permission allocation
based on user groups.
To perform local password authentication, you must create a local user and
configure the local password.
Import locally
Local import supports the import of user information in CSV files and database
DBM files to the local device.
Import from server
Third-party authentication servers are used in many scenarios. Many enterprise
networks have authentication servers, which store information about all users and
user groups. Importing users from the authentication server in batches refers to
importing user or user group information from the authentication server to the
device through a server import policy.
Configuring authentication options involves the configuration of global parameters, SSO,
and customized authentication pages.
Global parameter configuration mainly applies to local authentication and server
authentication. The configuration includes:
Set the password strength, specify password change upon first login, and configure
password expiration.
Define the maximum number of failed login attempts, lock duration after the
maximum number of failed login attempts is reached, and online user timeout
period.
SSO includes AD SSO, TSM SSO, and RADIUS SSO. This course details only AD SSO.
You can customize the logo, background image, welcome message, or help message of
the authentication page as required.
When an Internet access user or a remote access user who has accessed the firewall uses
the redirected authentication mode to trigger authentication, the authentication policy
must be used.
The firewall has a default authentication policy with all the matching conditions set to
any and the action set to not authenticate.
This section uses the RADIUS server and AD server as examples to describe how to
configure servers.
When a RADIUS server is used to authenticate the user, the firewall acts as the proxy
client for the RADIUS server and sends the user name and password to the server for
authentication. Parameters set on the firewall must be consistent with those set on the
RADIUS server.
During AD server configuration, the system time and time zone of the firewall must be
the same as those on the AD server.
Group/User
Before the device can perform user-specific and user group-specific management,
users and user groups must exist on the device. You can manually create a
user or user group at the Group/User node.
The root group is a default group and cannot be deleted. You cannot rename the
root group but can assign it with a description for identification.
All the other user groups have the same ultimate owning group, the root group.
Select an authentication domain for which the user group is created. By default,
only the default authentication domain is available.
Creating a user
Creating a user applies to the circumstance under which users are created one by
one instead of in batches. Besides all the configuration items involved in Creating
Multiple Users, the operation of creating a user also includes the setting of the
display name and the bidirectional IP/MAC address binding.
If you select this option, the login name of a user can be used by multiple users to
log in concurrently. That is, this login name can be used concurrently on multiple
PCs.
If you deselect this option, the login name can be used only on one PC at a time.
IP/MAC binding
Indicates the method of binding the user and the IP/MAC address.
If you select Unidirectional binding, the user must use the specified IP/MAC
address for authentication. However, other users can also use the same
IP/MAC address for authentication.
If you select Bidirectional binding, the user must use the specified IP/MAC
address for authentication, and other users cannot use the same IP/MAC
address for authentication. If an IP/MAC address and a user are bidirectionally
bound, the users that are unidirectionally bound to the IP/MAC address will
fail to log in.
IP/MAC address
Indicates the IP address, MAC address, or IP/MAC address pair bound to the user.
Portal authentication requires a portal server to complete the authentication. The portal
server needs to provide and push an authentication page to users. Currently, the firewall
can interconnect to Huawei Agile Controller or Policy Center.
Destination zone: indicates the security zone where the AD server resides.
1. D
2. ABC
Malware is the most common security threat and includes viruses, worms, botnets,
rootkits, Trojan horses, backdoor programs, vulnerability exploit programs, and other
malicious programs. Besides malware, the impact of greyware is increasing, and
crime-related security threats have become critical to network security.
Instead of facing only virus attacks, users now have to fend off combinations of network
threats, including viruses, hacker intrusions, Trojan horses, botnets, and spyware. Current
defense mechanisms struggle to mitigate such attacks.
Vulnerabilities lead to severe security risks:
A global black industry chain based on DDoS attacks has formed with the aim of
financial gain, and huge numbers of botnets exist on networks.
DDoS attacks will occupy bandwidth to bring the network down and exhaust server
resources to prevent the server from responding to user requests or to crash the
system, ultimately causing services to fail.
A virus is a type of malicious code that infects or attaches to application programs or
files and spreads through protocols, such as email or file sharing protocols, threatening
the security of user hosts and networks.
Viruses perform various types of harmful activities on infected hosts, such as exhausting
host resources, occupying network bandwidth, controlling host permissions, stealing
user data, and even corrupting host hardware.
Viruses, Trojan horses, and spyware invade an intranet mainly through web
browsing and mail transmission.
Viruses can crash computer systems and tamper with or destroy service data.
It's difficult for desktop antivirus software to globally prevent the outbreak of
viruses.
Typical intrusions:
Real-time blocking: The IPS detects and blocks network attacks in real time,
whereas the IDS can only detect attacks. Therefore, the IPS improves system
security to the maximum extent.
Self-learning and self-adaptation: The IPS minimizes the rate of false negatives and
false positives through self-learning and self-adaptation to reduce the impact on
services.
User-defined rules: The IPS supports the customization of intrusion prevention rules
to give the best possible response to the latest threats.
Service awareness: The IPS can detect exceptions or attacks at the application layer.
Threat name: IPS signatures describe attack behavior features. The firewall
compares the features of packets with the signatures to detect and defend against
attacks.
Event count: The field is used for merging logs. Whether logs are merged is
determined by the merge frequency and conditions. The value is 1 if logs are not
merged.
Intrusion target: Indicates the attack target of a packet detected based on the
signature, which can be:
Information
Low
Medium
High
Operating system: Indicates the operating system attacked by the packet detected
based on the signature, which can be:
Android
iOS
Unix-like
Windows
Signature category: Indicates the threat category to which the packet attack
detected based on the signature belongs.
Alert
Block
Ways of categorizing computer viruses:
A worm is a variant of the virus. It is an independent entity that does not need a
host to parasitize. It can replicate itself and spread by exploiting system
vulnerabilities, impacting the performance of the entire network and the computer
system even more severely.
When we talk about using an antivirus, we are referring to the mitigation of malicious
code.
Single-device antivirus can be implemented by installing antivirus software or
professional antivirus tools. Virus detection tools detect malicious code, such as viruses,
Trojan horses, and worms. Some detection tools can also provide the recovery function.
Norton Antivirus from Symantec is a common antivirus software program, and the
Process Explorer (see figure) is a professional antivirus tool.
Intranet users can access the Internet and need to frequently download files from
the Internet.
As shown in the figure, the NIP serves as a gateway device that isolates the intranet from
the Internet. There are user PCs and a server on the intranet. Intranet users can
download files from the Internet, and Internet users can upload files to the intranet
server. To secure the files to be uploaded or downloaded, the antivirus function should
be configured on the NIP.
After the antivirus function is configured, the NIP only permits secure files to be
transferred into the intranet. If a virus is detected in a file, the NIP applies the action,
such as block or alert, to the file.
Currently, security devices (such as UTM devices and antivirus gateways) provide two
antivirus scanning modes: proxy scanning and flow scanning.
A flow-based antivirus gateway has high performance and low system overhead but a
lower detection rate, and cannot cope with packed or compressed files.
The intelligent awareness engine (IAE) carries out in-depth analysis into network traffic to
identify the protocol type and file transfer direction.
Checks whether antivirus supports this protocol type and file transfer direction.
The firewall performs virus detection for files transferred using the following
protocols:
File Transfer Protocol (FTP)
Hypertext Transfer Protocol (HTTP)
Post Office Protocol - Version 3 (POP3)
Simple Mail Transfer Protocol (SMTP)
Internet Message Access Protocol (IMAP)
Network File System (NFS)
Server Message Block (SMB)
The firewall supports virus detection for files that are:
Uploaded: Indicates files sent from the client to the server.
Downloaded: Indicates files sent from the server to the client.
Checks whether the whitelist is matched. The NIP does not perform virus detection on
whitelisted files.
A whitelist comprises whitelist rules. You can configure whitelist rules for trusted
domain names, URLs, IP addresses, and IP address ranges so that trusted traffic is
not scanned, improving antivirus detection efficiency. A whitelist rule applies only
to the corresponding antivirus profile.
Virus detection:
The IAE extracts signatures of a file for which antivirus is available and matches the
extracted signatures against virus signatures in the virus signature database. If a
match is found, the file is identified as a virus and processed according to the
response action specified in the profile. If no match is found, the file is permitted.
When the detection interworking function is enabled, files that do not match the
virus signature database are sent to the sandbox for in-depth inspection. If the
sandbox detects a malicious file, it sends the file signature to the NIP. The NIP
saves the malicious file signature to the local interworking detection cache. If the
NIP detects the malicious file again, it will take the action defined in the profile.
Huawei analyzes and summarizes common virus signatures to construct the virus
signature database. This database defines common virus signatures, each of which
is assigned a unique virus ID. After loading this database, the device can identify
viruses defined in the database.
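A much-simplified picture of signature matching follows, assuming hash-based signatures and made-up virus IDs; real engines match byte patterns and heuristics from the vendor's database, not whole-file hashes.

```python
import hashlib

# Simplified virus-detection sketch: each "signature" here is just a
# SHA-256 digest of known malicious content, with an invented virus ID.

virus_signature_db = {
    hashlib.sha256(b"malicious-test-payload").hexdigest(): "Virus.Test.001",
}

def scan_file(data):
    """Return the virus ID if the file matches a signature, else None."""
    digest = hashlib.sha256(data).hexdigest()
    return virus_signature_db.get(digest)

print(scan_file(b"malicious-test-payload"))  # Virus.Test.001
print(scan_file(b"harmless file"))           # None
```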
The following describes the firewall’s response after identifying a transferred file as a
virus:
The firewall checks whether this virus-infected file matches a virus exception. If so,
the file is permitted.
Virus exceptions refer to whitelisted viruses. To prevent file transfer failures
resulting from false positives, whitelist virus IDs that users identify as false positives
are added to exceptions to disable the virus rules.
If the virus does not match any virus exception, the firewall checks whether it
matches an application exception. If so, it is processed according to the response
action (allow, alert, or block) for the application exception.
Response actions for application exceptions can be different from those for
protocols. Various types of application traffic can be transmitted over the same
protocol.
Because of the preceding relationship between applications and protocols,
response actions for protocols and applications are configured as follows:
If only the response action for a protocol is configured, all applications with
traffic transmitted over this protocol inherit the response action of the
protocol.
If response actions are configured for a protocol and the applications with
traffic transmitted over the protocol, the response actions for the applications
take effect.
If the file matches neither virus exceptions nor application exceptions, the response
action corresponding to its protocol and transfer direction specified in the profile is
employed.
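The lookup order above (virus exception, then application exception, then the protocol/direction action) can be sketched as a small decision function. The profile structure and names are illustrative, not the firewall's actual configuration model.

```python
# Sketch of the response-action lookup order for a detected virus.
# Field names and action strings are illustrative only.

def choose_action(virus_id, app, protocol, direction, profile):
    if virus_id in profile["virus_exceptions"]:
        return "allow"                            # whitelisted virus ID
    if app in profile["app_exceptions"]:
        return profile["app_exceptions"][app]     # per-application action
    # Otherwise the application inherits the protocol/direction action
    return profile["protocol_actions"][(protocol, direction)]

profile = {
    "virus_exceptions": {1001},
    "app_exceptions": {"WebMail": "alert"},
    "protocol_actions": {("HTTP", "download"): "block"},
}
print(choose_action(1001, "Web", "HTTP", "download", profile))      # allow
print(choose_action(2002, "WebMail", "HTTP", "download", profile))  # alert
print(choose_action(2002, "Web", "HTTP", "download", profile))      # block
```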
Actions taken by the firewall when a virus is detected:
Alert: The device permits the virus-infected file and generates a virus log.
Block: The device blocks the virus-infected file and generates a virus log.
Declare: For a virus-infected email message, the device permits it but adds
information to the email body to announce the detection of viruses and generates
a virus log. This action applies only to SMTP and POP3.
Delete Attachment: The device deletes malicious attachments in the infected email
message, permits the message, generates a log, and adds information to the email
body to announce the detection of viruses and deletion of attachments. This action
applies only to SMTP and POP3.
After a virus is detected by the firewall, you can view the detailed antivirus results in the
service log.
After antivirus for HTTP and email protocols is configured, you can view relevant
information on the accessed page or in the email body.
Answers:
CD
D
Encryption is the process of making information readable only to intended receivers and
incomprehensible to other users. It achieves this by enabling the original content to be
shown only after the correct key is used to decrypt the information. Encryption protects
data from being obtained and read by unauthorized users. It prevents interception and
theft of private information over networks. Encryption guarantees the confidentiality,
integrity, authenticity, and non-repudiation of information.
Asymmetric encryption, also called public key encryption, is a form of encryption using a
public key and a private key that are mathematically related. The public key can be
transferred openly between the two parties or released in the public database. The
private key, however, is confidential. The data encrypted with the public key can be
decrypted only by the private key, and the data encrypted with the private key can be
decrypted only by the public key.
Users A and B negotiate a symmetric key in advance. The encryption and decryption
process is as follows:
User A uses the symmetric key to encrypt data and sends the encrypted data to
user B.
After receiving the encrypted data, user B decrypts the data using the symmetric
key and obtains the original plaintext.
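The symmetric flow above can be illustrated with a toy XOR keystream standing in for a real cipher such as AES; this is for illustration only and is not secure.

```python
import itertools

# Toy symmetric cipher: the same keystream both encrypts and decrypts.
# A XOR keystream stands in for a real cipher such as AES.

def xor_cipher(data, key):
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

shared_key = b"negotiated-key"                             # agreed in advance
ciphertext = xor_cipher(b"hello from user A", shared_key)  # user A encrypts
plaintext = xor_cipher(ciphertext, shared_key)             # user B decrypts
print(plaintext)  # b'hello from user A'
```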
User A obtains the public key of user B in advance. The encryption and decryption
process is as follows:
User A uses user B's public key to encrypt data and sends the encrypted data to
user B.
After receiving the encrypted data, user B decrypts the data using their private key
and obtains the original plaintext.
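The asymmetric flow can be illustrated with textbook RSA on tiny primes; real RSA uses 2048-bit or larger moduli with padding, so this is purely illustrative.

```python
# Textbook-RSA toy showing that data encrypted with the public key is
# recovered only with the matching private key. Illustration only.
p, q = 61, 53
n = p * q                           # modulus, part of both keys
e = 17                              # user B's public key: (e, n)
d = pow(e, -1, (p - 1) * (q - 1))   # user B's private exponent (kept secret)

message = 42
ciphertext = pow(message, e, n)     # user A encrypts with B's public key
recovered = pow(ciphertext, d, n)   # user B decrypts with the private key
print(recovered)  # 42
```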
Symmetric key cryptography features high efficiency, simple algorithms, and low system
overhead. It is suitable for encrypting a large volume of data. However, it is difficult to
implement because the two parties must exchange their keys securely before
communication. In addition, it is difficult to expand because each pair of communicating
parties needs to negotiate keys, and n users need to negotiate n*(n-1)/2 different keys.
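The n*(n-1)/2 scaling can be checked directly: pairwise symmetric keys grow quadratically with the number of users, which is the expansion problem described above.

```python
# Number of distinct symmetric keys needed for n users to communicate
# pairwise: one key per pair, i.e. n*(n-1)/2.

def pairwise_keys(n):
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_keys(n))
# 10 45
# 100 4950
# 1000 499500
```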
Attackers cannot use one key in a key pair to figure out the other key. The data
encrypted by a public key can only be decrypted by the private key of the same user.
However, public key cryptography takes a long time to encrypt a large amount of
data, and the encrypted packets are large, consuming much bandwidth.
Public key cryptography is suitable for encrypting sensitive information such as keys and
identities to provide higher security.
A digital envelope contains the symmetric key encrypted using the peer's public key.
When receiving a digital envelope, the receiver uses its own private key to decrypt the
digital envelope and obtains the symmetric key.
Assume that user A has the public key of user B. The encryption and decryption process
is as follows:
User A uses a symmetric key to encrypt data.
User A uses the public key of user B to encrypt the symmetric key and generate a
digital envelope.
User A sends the digital envelope and encrypted data to user B.
User B uses its own private key to decrypt the digital envelope and obtains the
symmetric key.
User B uses the symmetric key to decrypt the data and obtains the original data.
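The five steps above can be sketched by combining a toy XOR cipher for the bulk data with textbook RSA on tiny primes for the envelope; both primitives are illustrations only, not secure.

```python
import itertools, secrets

# Digital-envelope sketch: a fresh symmetric key encrypts the bulk data,
# and only that key is sealed with the receiver's public key.

def xor_cipher(data, key):
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

# User B's toy RSA key pair
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

# User A: encrypt data with a random symmetric key, then seal the key
sym_key = secrets.randbelow(n - 2) + 2                 # toy key in [2, n-1]
data_ct = xor_cipher(b"bulk data", sym_key.to_bytes(2, "big"))
envelope = pow(sym_key, e, n)                          # the digital envelope

# User B: open the envelope with the private key, then decrypt the data
key_b = pow(envelope, d, n)
print(xor_cipher(data_ct, key_b.to_bytes(2, "big")))   # b'bulk data'
```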
The digital envelope has the advantages of both symmetric key cryptography and public
key cryptography. That is, it speeds up key distribution and encryption while improving
key security, extensibility, and efficiency.
However, the digital envelope still has a vulnerability. The attacker may obtain
information from user A, use its own symmetric key to encrypt the forged information,
use the public key of user B to encrypt its own symmetric key, and send the information
to user B. After receiving the information, user B decrypts it and considers the
information to be sent from user A. To address this problem, the digital signature is used,
ensuring that the received information was sent from the correct sender.
A digital signature is generated by the sender by encrypting the digital fingerprint using its
own private key. The receiver uses the sender's public key to decrypt the digital signature
and obtain the digital fingerprint.
A digital fingerprint, also called a message digest, is generated by the sender using a
hash algorithm on plaintext information. The sender sends both digital fingerprint and
plaintext to the receiver, who uses the same hash algorithm to calculate the digital
fingerprint on the plaintext. If the two fingerprints are the same, the receiver knows that
the information has not been tampered with.
Assume that user A has the public key of user B. The encryption and decryption process
is as follows:
User A uses the public key of user B to encrypt data.
User A performs hash on the plaintext and generates a digital fingerprint.
User A uses its own private key to encrypt the digital fingerprint, generating the
digital signature.
User A sends both the ciphertext and digital signature to user B.
User B uses the public key of user A to decrypt the digital signature, obtaining the
digital fingerprint.
After receiving the ciphertext from user A, user B uses its own private key to
decrypt the information, obtaining the plaintext information.
User B performs hash on the plaintext and generates a digital fingerprint.
User B compares the generated fingerprint with the received one. If the two
fingerprints are the same, user B accepts the plaintext; otherwise, user B discards it.
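The sign-and-verify steps can be sketched with hashlib for the fingerprint and textbook RSA on tiny primes for the signature; the modulo-n reduction of the hash is a concession to the toy key size, not part of real RSA signing.

```python
import hashlib

# Sign-and-verify sketch: hash the plaintext, "encrypt" the fingerprint
# with the sender's private key, verify with the sender's public key.
p, q = 61, 53
n, e = p * q, 17                     # user A's public key
d = pow(e, -1, (p - 1) * (q - 1))    # user A's private key

def fingerprint(plaintext):
    # Reduce the hash modulo n so the toy RSA can handle it
    return int.from_bytes(hashlib.sha256(plaintext).digest(), "big") % n

plaintext = b"message from user A"
signature = pow(fingerprint(plaintext), d, n)  # user A signs

# User B: decrypt the signature with A's public key, recompute, compare
print(pow(signature, e, n) == fingerprint(plaintext))  # True
```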
The digital signature proves that information is not tampered with and verifies the
sender's identity. The digital signature and digital envelope can be used together.
However, the digital signature still has a vulnerability. If the attacker modifies the public
key of user B and user A obtains the modified key, the attacker can intercept information
sent from user B to user A, sign the forged information using its own private key, and
send the forged information encrypted using user A's public key to user A. After
receiving the encrypted information, user A decrypts the information and verifies that
the information has not been tampered with. In addition, user A considers the
information to be sent by user B. The digital certificate can fix this vulnerability.
Based on how they process the plaintext, there are two main types of symmetric
cryptography algorithms:
Stream algorithms
The stream algorithm continuously inputs elements and generates one output
element at a time. A typical stream algorithm encrypts one byte of plaintext at a
time, and the key is input into a pseudo random byte generator to generate an
apparently random byte stream, which is called a key stream. A stream algorithm is
generally used for data communication channels, browsers, or network links.
Common stream algorithms: RC4 is a stream algorithm designed by Ron Rivest for
RSA Security in 1987. It uses a variable-length key, and its byte-oriented
operations encrypt information in real time. It works around 10 times faster
than DES.
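RC4 is short enough to write out from its published description (the key-scheduling algorithm followed by the pseudo-random generation algorithm). RC4 is now considered broken and is shown only to illustrate how a stream cipher turns a short key into a keystream.

```python
# RC4 keystream cipher, from the published KSA/PRGA description.
# Broken for real use; shown for illustration only.

def rc4(key, data):
    # Key-scheduling algorithm (KSA): permute S under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"Key", b"Plaintext")
print(rc4(b"Key", ct))  # b'Plaintext'  (the same function decrypts)
```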
Block algorithm
Plaintext blocks and the key are input in the encryption algorithm. The plaintext is
divided into two parts, which are combined into ciphertext blocks after n rounds of
processing, and the input of each round is the output of the preceding round. The
subkey is also generated by the key. The typical size of a block is 64 bits.
Block algorithms are classified into the following types:
Data Encryption Standard (DES): DES was developed by the National Institute of
Standards and Technology (NIST). DES is the first widely used cryptographic
algorithm to use the same key for encryption and decryption. DES is a block
algorithm, in which a 64-bit plaintext block and a 56-bit key are input to generate a
64-bit ciphertext block (data is encrypted in 64-bit blocks). The key space is only
56 bits, delivering insufficient security. In response, the 3DES algorithm was proposed.
Triple DES (3DES): The 3DES algorithm applies DES three times. Data is first encrypted
using a 56-bit key, then decrypted using a second 56-bit key, and finally encrypted
again using the first 56-bit key, giving an effective key length of 112 bits. The
greatest advantage of 3DES is that existing software and hardware can be reused,
and it can be easily implemented based on DES.
Advanced Encryption Standard (AES): The AES algorithm uses a 128-bit block and
supports 128-bit, 192-bit, and 256-bit keys. In addition, it can be used on different
platforms. A 128-bit key can provide sufficient security and takes less time for processing
than longer keys. To date, the AES does not have any serious weakness. DES can still be
used due to the production of a large number of fast DES chips. However, AES will
gradually replace the DES and 3DES to enhance security and efficiency.
International Data Encryption Algorithm (IDEA): The IDEA is a symmetric block cipher
algorithm, with a 64-bit plaintext and a 128-bit key input to generate a 64-bit ciphertext.
The IDEA is widely used. For example, SSL includes the IDEA in its cryptographic
algorithm library.
RC2, designed by Ron Rivest for RSA Security, is a cryptographic algorithm with a
variable-length key. It is a block cipher, which means that data is encrypted in 64-
bit blocks. It can use keys of different sizes, and the encryption speed depends on
the key size.
RC5 is a newer block cipher algorithm designed by Rivest for RSA Security in 1994.
Like RC2, RC5 is a block cipher, but it uses different block and key sizes and runs
a different number of rounds. It is suggested to use RC5 with a 128-bit key and
12 to 16 rounds. It is a cipher algorithm with variable block sizes, key sizes, and
numbers of rounds.
RC6, unlike other new cryptographic algorithms, covers the whole algorithm family. RC6
was introduced in 1998 following RC5, which was found to have a theoretical
vulnerability in encryption for a special round. RC6 was designed to tackle this
vulnerability.
State-approved algorithms are commercial block algorithms compiled by China's
National Password Administration. The block length and key length are both 128 bits.
SM1 and SM4 can meet high security requirements.
Of these, DES, 3DES, and AES are the most commonly used.
The algorithms commonly used in public key cryptography include Diffie-Hellman (DH),
Rivest-Shamir-Adleman (RSA), and the Digital Signature Algorithm (DSA).
The DH algorithm is usually used by the two parties involved to negotiate a symmetric
encryption key (same key used for encryption and decryption). In essence, the two
parties share some parameters and generate their respective keys, which are the same
key according to mathematical principles. This key is not transmitted over links, but the
parameters exchanged may be transmitted over links.
The RSA algorithm is named after Ron Rivest, Adi Shamir, and Leonard Adleman, who
jointly developed it at the Massachusetts Institute of Technology (MIT) in 1977. RSA is
currently the most influential public key cryptography algorithm. It can resist all known
password attacks and has been recommended by ISO as the public key data encryption
standard. In addition, it is the first algorithm that can be used for both encryption and
digital signature.
DSA is a variant of the Schnorr and ElGamal signature algorithms and is used by NIST in the Digital Signature Standard (DSS). It plays an important role in ensuring data integrity, privacy, and non-repudiation. DSA is based on the discrete logarithm problem in finite fields and delivers a level of security comparable to RSA. In DSA digital signature and authentication, the sender signs the file or message with his/her own private key. After receiving the message, the receiver uses the sender's public key to verify the authenticity of the signature. DSA is only a signature algorithm. In contrast to RSA, DSA cannot be used for encryption, decryption, or key exchange; it is used only for signing, and it is much faster than RSA in this regard.
The Message Digest Algorithm 5 (MD5) is a hash function used in a variety of security applications to check message integrity. It maps data of arbitrary length to a fixed-length value. It can "compress" large volumes of information into a fixed-length digest before the information is signed by digital signature software with the private key. In addition to digital signatures, it can also be used for secure access authentication.
The Secure Hash Algorithm (SHA) is applicable to the digital signature algorithm defined in the Digital Signature Standard.
SHA-1: SHA was developed by NIST. SHA-1, published in 1995, is a revision of SHA. SHA-1 generates 160-bit message digests and is used, for example, by the HMAC-SHA-1-96 transform defined in RFC 2404. SHA-1 is slower but more secure than MD5: its longer digest makes brute-force and key-recovery attacks harder.
SHA-2: SHA-2 is a more advanced version of SHA-1. SHA-2 produces longer digests than SHA-1 and is therefore more secure. SHA-2 includes SHA-256, SHA-384, and SHA-512, which generate 256-bit, 384-bit, and 512-bit digests respectively.
SM3 is a commercial hash algorithm published by the State Cryptography Administration of China. It is used to verify digital signatures, generate and verify message authentication codes, and generate random numbers. It can meet the security requirements of multiple cryptographic applications.
These algorithms each have their own strengths and weaknesses. MD5 is faster than SHA-1 but less secure. SHA-2 and SM3 produce longer digests than SHA-1, making them more difficult to crack and therefore more secure.
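The digest-length differences can be seen with Python's standard hashlib (the input string is arbitrary):

```python
import hashlib

msg = b"HCIA-Security"

# Digest sizes: MD5 = 128 bits, SHA-1 = 160 bits, SHA-256 = 256 bits, SHA-512 = 512 bits.
for name in ("md5", "sha1", "sha256", "sha512"):
    h = hashlib.new(name, msg)
    print(f"{name:7s} {h.digest_size * 8:4d} bits  {h.hexdigest()[:16]}...")

# A tiny change in the input yields a completely different digest.
a = hashlib.sha256(b"message").hexdigest()
b = hashlib.sha256(b"massage").hexdigest()
assert a != b
```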
Answers:
CD
AB
The digital certificate is similar to a passport or identity card. People are requested to show their passports when entering foreign countries. Likewise, the digital certificate shows the identity of a device or user that requests access to a network.
It ensures that one public key is possessed by only one owner.
Certificate types:
Self-signed certificate: A self-signed certificate, which is also called a root certificate,
is issued by an entity to itself. In this certificate, the issuer name and subject name
are the same. If an applicant fails to apply for a local certificate from the CA, it can
generate a self-signed certificate. The self-signed certificate issuing process is
simple. Huawei devices do not support lifecycle management (such as certificate
renewal and revocation) for self-signed certificates.
CA certificate: CA's own certificate. If a PKI system does not have a hierarchical CA
structure, the CA certificate is the self-signed certificate. If a PKI system has a
hierarchical CA structure, the top CA is the root CA, which owns a self-signed
certificate. An applicant trusts a CA by verifying its digital signature. Any applicant
can obtain the CA's certificate (including the public key) to verify the local
certificate issued by the CA.
Local certificate: A certificate issued by a CA to the applicant.
Local device certificate: A certificate issued by a device to itself according to the
certificate issued by the CA. The issuer name in the certificate is the CA server's
name. If an applicant fails to apply for a local certificate from the CA, it can
generate a local device certificate. The local device certificate issuing process is
simple.
An X.509 v3 digital certificate contains mandatory information such as public key, name,
and digital signature of the CA, and optional information such as validity period of the
key, issuer (CA) name, and serial number.
Meaning of each field in the digital certificate:
Version: version of X.509. Generally, v3 (0x2) is used.
Serial Number: a positive and unique integer assigned by the issuer to the
certificate. Each certificate is uniquely identified by the issuer name and the serial
number.
Signature Algorithm: signature algorithm used by the issuer to sign the certificate.
Issuer: name of the CA that issued the certificate. Generally, the issuer name is the CA server's name. In a self-signed certificate, the issuer name is the same as the subject name.
Validity: time period during which a digital certificate is valid, including the start
and end dates. Expired certificates are invalid.
Subject: name of the entity that possesses a digital certificate. In a self-signed
certificate, the issuer name is the same as the subject name.
Subject Public Key Info: public key and the algorithm with which the key is
generated.
Extensions: a sequence of optional fields such as key usage and CRL distributing
address.
Signature: signature signed on a digital certificate by the issuer using the private
key.
As network and information technology develops, e-commerce is increasingly used and
accepted. However, e-commerce has the following problems:
To address these problems, PKI uses public keys to implement identity verification,
confidentiality, data integrity, and non-repudiation of transactions. Therefore, PKI is
widely used in network communication and transactions, especially by e-government
and e-commerce.
The core of PKI is digital certificate lifecycle management, including applying for, issuing, and using digital certificates. During the lifecycle, PKI uses symmetric key cryptography, public key cryptography, digital envelopes, and digital signatures.
End entity: An end entity, or PKI entity, is the end user of PKI products or services. It can be an individual, an organization, a device (for example, a router or firewall), or a process running on a computer.
Certificate Authority (CA): The CA is the trusted entity that issues and manages digital
certificates. The CA is an authoritative, trustworthy, and fair third-party organization.
Generally, a CA is a server, for example, a server running Windows Server 2008.
The CA on the top of the hierarchy is the root CA and the others are subordinate
CAs.
The root CA is the first CA (trustpoint) in the PKI system. It issues certificates
to subordinate CAs, computers, users, and services. In most certificate-based
applications, the root CA can be traced through the certificate chain. The root
CA holds a self-signed certificate.
A subordinate CA can only obtain a certificate from its upper-level CA. The
upper-level CA can be the root CA or another subordinate CA authorized by
the root CA to issue certificates. The upper-level CA is responsible for issuing
and managing certificates of lower-level CAs, and the CAs at the bottom issue
certificates to end entities. For example, CA 2 and CA 3 are subordinate CAs,
holding the certificates issued by CA 1. CA 4, CA 5 and CA 6 are also
subordinate CAs, holding the certificates issued by CA 2.
Certificate application: Certificate application is certificate enrollment. It is a process in
which an entity registers with a CA and obtains a certificate from the CA.
Certificate issue: If an RA is available, the RA verifies the PKI entity's identity information
when the PKI entity applies for a local certificate from CA. After the PKI entity passes
verification, the RA sends the request to the CA. The CA generates a local certificate
based on the public key and identity information of the PKI entity, and then returns the
local certificate information to the RA. If no RA is available, the CA verifies the PKI entity.
Certificate storage: After the CA generates a local certificate, the CA/RA distributes the
certificate to the certificate/CRL database. Users can download or browse a directory of
the certificates in the database.
Certificate download: A PKI entity can download a local certificate, a CA/RA certificate, or
a local certificate of another PKI entity from the CA server using SCEP, CMPv2, LDAP,
HTTP, or out-of-band mode.
Offline: The PKI entity produces the local certificate enrollment request in PKCS#10
format and saves it as a file. Then the user transfers the file to the CA server in out-of-
band mode (such as web, disk, or email).
On a PKI network, a PKI entity applies for a local certificate from the CA and the
applicant device authenticates the certificate.
The PKI entity applies for a CA certificate (CA server's certificate) from the CA.
When receiving the application request, the CA sends its own certificate to the PKI entity.
If the PKI entity uses SCEP for certificate application, it computes a digital
fingerprint by using the hash algorithm on the received CA certificate, and
compares the computed fingerprint with the fingerprint pre-defined for the CA
server. If the fingerprints are the same, the PKI entity accepts the CA certificate; otherwise,
it discards the CA certificate.
The PKI entity sends a certificate enrollment message (including the public key carried in
the configured key pair and PKI entity information) to the CA.
If the PKI entity uses SCEP for certificate application, it encrypts the enrollment
message using the CA certificate's public key and signs the message using its own
private key. If the CA server requires a challenge password, the enrollment message
must contain a challenge password, which must be the same as the CA's challenge
password.
Administrators can use HTTPS to securely log in to the WebUI of the HTTPS server for
device management.
To improve security of SSL connections, specify local certificates issued by a CA trusted by the web browser for the HTTPS clients on the devices. The web browser can then verify the local certificates, preventing malicious attacks and ensuring secure login.
The devices function as egress gateways of network A and network B. Intranet users of
the two networks communicate through the Internet.
To ensure data security over the Internet, the devices set up IPsec tunnels with the peer
ends. Generally, IPsec uses the pre-shared key (PSK) to negotiate IPsec tunnels. However,
using a PSK on a large network is not secure in PSK exchange and causes heavy
configuration workloads. To address this problem, the devices can use PKI certificates to
authenticate each other in IPsec tunnel setup.
SSL VPN enables travelling employees to access intranets.
They can enter usernames and passwords to access the intranets, but this method has
low security. If the username and password of an employee are leaked, attackers may
access the intranets, causing information leakage. To improve network access security,
devices can authenticate users using PKI certificates.
Answer:
D
D
VPN: To ensure data confidentiality, many VPN technologies need to use encryption and
decryption technologies, such as IPsec VPN and SSL VPN.
IPv6: To prevent the device from being spoofed, secure neighbor discovery (SEND)
router authorization can be configured on the device. The digital certificate technology
can be used for selecting legitimate gateway devices.
HTTPS login: The administrator can use HTTPS to securely log in to the web UI of the
HTTPS server and manage network devices. To improve security of SSL connections, the
CA trusted by the web browser is configured to issue local certificates for the HTTPS
client. Then the web browser can verify local certificates, avoiding malicious attacks and
ensuring secure login.
System login authorization: A digest algorithm processes the user password to generate a digest, which is stored and compared with the digest of the password supplied the next time the user logs in.
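A minimal sketch of digest-based login, using a salted PBKDF2 digest rather than the plain digest the text describes (the salt, iteration count, and function names are illustrative hardening choices, not something the material specifies):

```python
import hashlib
import hmac
import os

# Store only a salted, slow digest of the password, never the password itself.
def store_password(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# At the next login, recompute the digest and compare it in constant time.
def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored)

salt, stored = store_password("S3cret!")
assert verify_password("S3cret!", salt, stored)
assert not verify_password("guess", salt, stored)
```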
This course introduces several encrypted VPNs and some common VPN technologies.
Traditional VPN networking mainly uses the private line VPN and client device-based
encrypted VPN. A private line VPN is a Layer 2 VPN constructed by renting digital data
network (DDN) circuits, ATM permanent virtual circuits (PVCs), and frame relay (FR) PVCs.
The backbone network is maintained by telecom carriers, and the customer is
responsible for managing its own sites and routes. On a client device-based encrypted
VPN, all VPN functions are implemented by the client device, and the VPN members are
interconnected over the Internet (untrusted). The private line VPNs are costly and
provide poor scalability, while client device-based encrypted VPNs pose high
requirements on the user's device and skills.
According to the IETF draft, an IP-based VPN "is an emulation of a private Wide Area
Network (WAN) facility using IP facilities." That is, it is a point-to-point private line
emulated on the Internet using tunneling technologies. "Virtual" means that users use
the toll lines of the Internet to set up their own private networks, without requiring
dedicated physical toll lines. "Private network" means that users can customize a network
best suited to their needs.
An L3VPN works at the network layer of the protocol stack. There are two major types of L3VPN:
In an IPsec VPN, the IPsec header and IP header work at the same layer; packets
are encapsulated in IP-in-IP mode, or the IPsec header and IP header encapsulate
the payload at the same time.
GRE VPN is another major type of L3VPN technology. GRE VPN emerged earlier
and its implementation mechanism is simpler. A GRE VPN allows the packets of one
protocol to be encapsulated in those of any other protocol. GRE VPN is less secure
than IPsec VPN due to having limited, simple security mechanisms.
L2VPN
An L2VPN works at the data link layer of the protocol stack. Protocols used by L2VPN include the Point-to-Point Tunneling Protocol (PPTP), Layer 2 Forwarding (L2F), and the Layer 2 Tunneling Protocol (L2TP).
In this class, I will describe the most commonly used client-initiated VPN scenarios.
L2TP
A tunneling protocol for transparently transmitting PPP packets between a user and an enterprise server. It supports tunneled transmission of packets at the PPP link layer.
Main Usage
Employees at enterprise branch offices and employees on the move can remotely
access the headquarters through virtual tunnels over the Internet.
In a Client-Initiated VPN, a tunnel is established between each access user and the LNS.
Each tunnel carries only one L2TP session and PPP connection.
When a user initiates a connection to the LNS, the establishment of an L2TP tunnel
between the LNS and the user is triggered.
An L2TP session is created for the user in the tunnel established in step 1.
The user can access intranet resources through the PPP connection to the LNS.
When PC_A communicates with PC_B over the GRE tunnel, FW_A and FW_B forward
packets as follows:
After receiving the original packet sent by PC_A to PC_B, FW_A searches its routing
table for a matching route.
According to the search results, FW_A sends the packet to the tunnel interface for
GRE encapsulation. The tunnel interface adds a GRE header and then a new outer IP
header.
FW_A searches its routing table again for a route to the destination address (2.2.2.2)
in the new IP header of the GRE packet.
After receiving the GRE packet, FW_B determines whether or not the packet is a
GRE packet. The new IP header in the GRE packet has the Protocol field. If the
Protocol field value is 47, the packet is a GRE packet, in which case FW_B forwards
the packet to the tunnel interface for decapsulation. The tunnel interface removes
the outer IP header and GRE header to restore the original packet. If the packet is
not a GRE packet, FW_B forwards the packet as a common packet.
FW_B searches its routing table for a route to the destination address of the
original packet and then forwards the packet over the route.
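The encapsulation and decapsulation steps above can be sketched at the byte level (a minimal illustration assuming a basic 4-byte RFC 2784 GRE header; real devices also add and remove the outer IP header, whose Protocol field carries the value 47):

```python
import struct

GRE_IP_PROTOCOL = 47  # value of the outer IP header's Protocol field for GRE

def gre_encapsulate(original_packet: bytes, inner_proto: int = 0x0800) -> bytes:
    # Minimal 4-byte GRE header: flags/version = 0,
    # protocol type = 0x0800 for an encapsulated IPv4 payload.
    gre_header = struct.pack("!HH", 0, inner_proto)
    return gre_header + original_packet

def gre_decapsulate(packet: bytes) -> bytes:
    # Strip the GRE header to restore the original packet.
    flags_ver, proto = struct.unpack("!HH", packet[:4])
    return packet[4:]

payload = b"original IP packet"
assert gre_decapsulate(gre_encapsulate(payload)) == payload
```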
Both the L2TP VPN and GRE VPN transmit data in plaintext, failing to ensure security for
users or enterprises.
ESP is mainly used to encrypt data, authenticate the origin of data, verify data integrity,
and prevent packet replay.
Security functions provided by AH and ESP depend on the authentication and encryption
algorithms used by IPsec.
The keys used for IPsec encryption and authentication can be manually configured or
dynamically negotiated using the Internet Key Exchange (IKE) protocol. In this class, I will
describe how to establish an IPsec tunnel manually.
An SA defines a set of parameters for data transmission between two IPsec peers,
including the security protocol, characteristics of data flows to be protected, data
encapsulation mode, encryption algorithm, authentication algorithm, key exchange
method, IKE, and SA lifetime.
Tunnel mode is more secure than the transport mode. It can completely
authenticate and encrypt original IP packets, hiding the IP addresses, protocol
types, and port numbers in original IP packets.
The authentication mechanism allows the data receiver to identify the data sender in IP communications and to determine whether data has been tampered with during transmission. IPsec uses the keyed-hash message authentication code (HMAC) function for authentication. The HMAC function verifies the integrity and authenticity of data packets by comparing the message authentication codes computed by the sender and receiver.
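The sender/receiver comparison can be sketched with Python's standard hmac module (the key and payload are illustrative; real IPsec transforms often truncate the HMAC output, e.g. HMAC-SHA1-96):

```python
import hashlib
import hmac

key = b"negotiated-ipsec-key"    # shared authentication key (illustrative)
packet = b"ESP payload bytes"

# Sender computes an HMAC over the packet and attaches it as the
# integrity check value (ICV).
icv = hmac.new(key, packet, hashlib.sha256).digest()

# Receiver recomputes the HMAC and compares in constant time.
def authentic(key: bytes, data: bytes, received_icv: bytes) -> bool:
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_icv)

assert authentic(key, packet, icv)              # untouched packet passes
assert not authentic(key, b"tampered", icv)     # any modification fails
```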
IPsec implements encryption for secure transmission. However, IPsec encryption and authentication have problems in some scenarios, for example, NAT traversal. Because of its unique attributes, the SSL VPN takes effect only at the application layer and does not require a dedicated VPN client. Therefore, its application scope is wider, and it is more convenient.
SSL provides secure connections for HTTP and is widely used in various fields, such as e-
commerce and online banking, to ensure secure data transmission.
User authentication: The virtual gateway authenticates the client identity.
Web proxy: implements clientless web access, which highlights how easy the SSL VPN is to use and distinguishes it from other VPNs. The web proxy forwards page requests (HTTPS) from a remote browser to the web server, and then sends the server's responses back to the end user. In this way, it implements URL permission control (controlling access to specific pages). Web proxy implementations include web rewriting and web link.
File sharing: enables users to access the shared resources on different server systems
(such as Windows systems that support SMB and Linux systems that support NFS)
through web pages. It supports the SMB (Windows) and NFS (Linux) protocols.
Port forwarding: used in scenarios (such as in the C/S architecture) where access using
web technologies is not supported.
After L2TP is configured, configure a security policy to allow users to communicate with
intranet servers and allow L2TP packets to pass through.
The VPN client settings must be the same as those on the firewall.
Network 1 and Network 2 are required to communicate through a GRE tunnel.
Configure a security policy to allow Network 1 and Network 2 to communicate and allow
GRE packets to pass through.
B
ABCD
Basic conditions for security operations:
Disaster recovery plan: When a disaster interrupts services, the disaster recovery
plan should be able to work and support recovery operations.
Investigation and forensics: When the threat and damage caused by some
information security incidents are serious enough to require the access of law
enforcement agencies, investigators must investigate carefully to ensure that the
correct steps are performed.
BCP team setup: According to the preceding business and organization analysis, the
business continuity is closely related to operations departments, service departments,
and senior management of enterprises. Therefore, the members of these departments
must participate in the BCP development and maintenance team. This team must include
the following personnel:
Requirements of laws and regulations: Laws and regulations are different for business
continuity in different countries and regions. These laws and regulations ensure the
vitality of the national economy while requiring enterprises to comply with the standards
of business continuity.
Priority determination: It is important to determine the priority of a business when a
disaster occurs. The business priority can be quantitatively analyzed using the Maximum
Tolerable Downtime (MTD).
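As a sketch of MTD-based prioritization, the business functions and MTD values below are hypothetical, invented for illustration; the shorter the MTD, the higher the recovery priority:

```python
# Hypothetical business functions ranked by Maximum Tolerable Downtime.
functions = [
    {"name": "e-commerce checkout", "mtd_hours": 1},
    {"name": "internal wiki",       "mtd_hours": 72},
    {"name": "payroll",             "mtd_hours": 24},
]

# Sort ascending by MTD: the function that tolerates the least downtime
# must be recovered first.
by_priority = sorted(functions, key=lambda f: f["mtd_hours"])
for rank, f in enumerate(by_priority, 1):
    print(rank, f["name"], f"MTD={f['mtd_hours']}h")
```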
Risk identification: The organization identifies possible risks, including natural and man-
made risks. In this phase, only a qualitative analysis is required to lay a foundation for
subsequent assessment.
Possibility assessment: The possibility of risks that threaten the organization occurring is
evaluated.
Resource priority: Prioritize the business continuity planning resources based on different
risks.
Policy development: Determine the mitigation measures for each risk based on the
business impact assessment result.
Plan implementation: Use specific resources to develop plans based on policies as much
as possible to reach the preset goals.
Preparation and handling: Provide necessary resources and protection measures for the
formulation, maintenance, and implementation of the business continuity planning.
These resources include people, buildings/equipment, and infrastructure.
Training and education: Provide training on the business continuity planning for all
related personnel involved in BCP so that they can understand the tasks and respond to
emergencies in an orderly manner.
Planning approval: After the business continuity planning is designed, obtain approval
from the senior management of the organization.
Detect: Personnel monitor and analyze data to detect security incidents, such as
collecting logs. For details, see Data Monitoring and Data Analysis in the following
chapter.
Respond: After an incident is detected and verified, activate the response program. The computer security incident response team needs to assist in investigating, assessing damage, collecting evidence, reporting incidents, recovering programs, restoring systems, learning lessons, and conducting root cause analysis. Respond to the security incident as soon as possible to reduce the damage. For details, see Emergency Response.
Mitigate: Mitigation is also a way of responding to emergencies. It is used to limit the impact of incidents, for example, by disconnecting an infected host from the enterprise network to isolate the issue.
Report: When an incident occurs, it needs to be reported to the organization and
sometimes needs to be reported to the outside world. Minor security incidents may not
need to be reported to the senior management of the organization, but senior
administrative personnel must be notified of critical incidents in order to adjust the
response policy and contain the impact.
Recover: Restore the system to the normal state. However, evidence collection should be
performed before system restoration.
Remediate: In this phase, root cause analysis is performed to fix system vulnerabilities
and prevent similar incidents from happening again.
Lessons learned: Summarize the incident, learn lessons, and apply the output of this
phase to the detection and maintenance phases of the subsequent business continuity
planning.
Disasters include:
Natural disasters: Earthquakes, floods, fires.
Man-made disasters: Terrorist acts, power interruptions.
Other public facility and infrastructure faults: Software/hardware faults, demonstrations, and intentional damage.
Recovery policy: Back up important data and facilities to improve the system recovery
capability and fault tolerance capability, thereby ensuring high service availability and
improving service quality.
Backup storage policy
Site recovery policy
Mutual assistance agreement
Execute the disaster recovery plan: For details, see Emergency Response.
Test the disaster recovery plan: A disaster recovery plan must be tested periodically to
ensure it works, especially if there have been organizational changes. The test types are
as follows:
Read-through tests
Structured tests
Simulation tests
Parallel tests
Full-interruption tests
The investigation method must comply with laws and regulations.
Civil investigation: Civil investigations usually do not involve law enforcement; internal employees and legal teams carry out the work.
Evidence type:
To combat network security risks, there must be accessible solutions for customers to
improve their information security architecture based on security assessments. The aim is
to help customers strengthen security but still maintain a high level of performance.
Criteria:
Manual audit: Manually inspect target systems, including the host system, service
system, database, network device, and security device.
-O: uses TCP/IP fingerprinting to determine the OS type of the host.
-D: performs a decoy scan; all specified decoy addresses are written into the remote host's connection records.
nmap -sn [IP range]: performs a fast ping scan on a network segment.
nmap -A [IP]: performs an aggressive scan that includes OS detection of the host.
Sparta is an easy-to-use GUI tool. It integrates port scanning and brute-force cracking
functions.
Configure Burp Suite and set the browser proxy before using Burp Suite. Additionally,
ensure that the domains and URLs to be scanned are present on the site map of Burp
Target so that full or partial scan can be performed.
You can right-click a vulnerability and choose Set severity from the shortcut menu, then choose a vulnerability level. You can also choose Set confidence to mark vulnerabilities as confirmed or as false positives.
During security assessment and scan, carry out a penetration test authorized by
customers on key IP addresses. Simulate the attack and vulnerability discovery
technologies that may be used by hackers to perform an in-depth test on the security of
target systems and find out the most vulnerable areas. Try to carry out a thorough and
accurate test on these key IP addresses. If a major or critical vulnerability is found, fix it in
a timely manner.
Observing port: connected to a monitoring device and used to send packets from the
mirrored port to the monitoring device.
Logs are stored in hard disks or SD cards. If no hard disk or SD card is available, logs
cannot be viewed or exported. Different device models support different logs and
reports. For details, see Huawei product documentation.
Log type:
System logs: The administrator can obtain operational logs and hardware logs to
locate and analyze faults.
Service logs: The administrator can obtain relevant network information to locate
and analyze faults.
Alarms: Alarm information, including the alarm severity, source, and description,
can be displayed on the WebUI.
Traffic logs: The administrator can obtain traffic characteristics, used bandwidth,
and validity of security policies and traffic policies.
Threat logs: The administrator can obtain detection and defense details about
network threats, such as viruses, intrusion, DDoS, Trojan horses, botnets, worms,
and APT. Threat logs help understand historical and new threats, and adjust the
security policies to improve defense.
URL logs: The administrator can obtain the URL accessing status (permitting,
alerting, or blocking) and relevant causes.
Content logs: The administrator can check the alarm and block events generated
when users transfer files or data, send and receive email, and access websites to
obtain behavior security risks and relevant causes.
Operational logs: The administrator can view operation information, such as login,
logout, and device configuration, to learn the device management history.
User activity logs: The administrator can obtain the online records of a user, for
example, login time, online duration or freezing duration, and IP address used for
login. The administrator can also study user activities on the current network,
identify abnormal user login or network access behaviors, and take the
corresponding countermeasures.
Policy matching logs: The administrator can obtain the security policies matched by
the traffic to determine whether the security policies are configured correctly and
meet the requirements. Policy matching logs can be used to locate faults.
Sandbox detection logs: The administrator can view sandbox detection information,
such as the file name, file type, source security zone, and destination security zone.
Based on the sandbox detection information, the administrator can handle
exceptions in a timely manner.
Mail filtering logs: The administrator can check the mail sending and receiving
protocols, number and size of mail attachments, and causes of mail blocking, and
then take measures.
Audit logs: The administrator can learn FTP behavior, HTTP behavior, and mail
sending/receiving behavior, QQ online/offline behavior, keyword searching, and
validity of audit policies. (QQ is an instant messaging software service developed by
a Chinese company.)
The firewall outputs system logs through the information center. The information center
is the information hub for system software modules on the firewall. System information
can be filtered to find specific information.
Information is graded into eight levels based on severity. The more critical the information, the lower its level number.
Emergency (0): A fault causes the device to malfunction. The system can recover
only after the device is restarted. For example, the device is restarted because of
program exceptions or memory usage errors.
Alert (1): A fault needs to be rectified immediately. For example, the memory usage
of the system reaches the upper limit.
Critical (2): A fault needs to be analyzed and handled. For example, the memory
usage exceeds the lower limit, the temperature exceeds the lower limit, BFD finds
that a device is unreachable, or an error message is detected (the message is
sourced from the device).
Notice (5): Key operations that keep the device functioning properly, such as shutdown command execution, neighbor discovery, or protocol status changes.
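Severity-based filtering can be sketched as follows; the names for levels 3, 4, 6, and 7 follow the standard syslog convention, which is an assumption here since the text lists only levels 0, 1, 2, and 5:

```python
# Severity levels used by the information center:
# the lower the numeric value, the more critical the information.
SEVERITIES = {
    0: "Emergency", 1: "Alert", 2: "Critical", 3: "Error",
    4: "Warning", 5: "Notice", 6: "Informational", 7: "Debug",
}

logs = [
    (0, "device restarted due to program exception"),
    (5, "shutdown command executed on an interface"),
    (1, "memory usage reached the upper limit"),
]

# Keep only messages at Critical level or higher (numeric level <= 2).
urgent = [(lvl, msg) for lvl, msg in logs if lvl <= 2]
for lvl, msg in urgent:
    print(SEVERITIES[lvl], msg)
```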
Service logs on the firewall include threat logs, content logs, policy matching logs, mail
filtering logs, URL filtering logs, and audit logs.
The firewall can output service logs on the WebUI, log server, or information center. The
administrator can view the service logs to obtain the service running status and network
status.
Windows event log files are in essence databases that include system, security, and
application records. The recorded event contains nine elements: date/time, event type,
user, computer, event ID, source, category, description, and data.
The header field includes the source, time, event ID, task type, and event result (success
or failure) in fixed formats.
The description field varies according to events. This field consists of fixed description
information and varying information.
As discussed, proactive analysis uses security assessment methods, such as security scan,
manual audit, penetration test, questionnaire, and interview survey, to obtain valuable
information and work out a security assessment report.
Log information is analyzed during passive collection. The log records the key events
that occur. To analyze the events, check Who, When, Where, What, and How.
Key Log Analysis Points
When: time.
For User Datagram Protocol (UDP) attacks, fingerprint learning technology is used to
analyze and obtain the characteristics of attack packets and provide a basis for defense.
Sessions can also be created to permit the UDP packets from the real source and discard
the UDP packets from the counterfeited source.
The figure shows how to filter security logs. For Event Level, select Critical or
Warning; in the Event sources field, enter Application Error; and in the Keywords
field, enter Audit Failure.
For Windows event logs, we can quickly obtain required information based on Event ID.
Each event ID indicates a unique meaning.
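The filtering described above (event level, source, and keyword) can be sketched over a list of event records. This is a hypothetical illustration, not the Event Viewer API; the sample events are invented:

```python
# Minimal sketch of the filter described above, applied to a list of event dicts.
# The sample events are hypothetical; real records come from Event Viewer.
events = [
    {"level": "Critical", "source": "Application Error",
     "message": "Audit Failure in service X"},
    {"level": "Information", "source": "Service Control Manager",
     "message": "service started"},
    {"level": "Warning", "source": "Application Error",
     "message": "Audit Failure in app Y"},
]

def filter_events(events, levels, source, keyword):
    """Keep events matching the level set, event source, and keyword."""
    return [e for e in events
            if e["level"] in levels
            and e["source"] == source
            and keyword in e["message"]]

matches = filter_events(events, {"Critical", "Warning"},
                        "Application Error", "Audit Failure")
print(len(matches))  # → 2
```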
These events use Windows 2008 R2 as an example.
Event 1 records the old system time, new system time, and the name of the user who
changed the system time.
Event 4616 records the old system time, new system time, the name of the user who
changed the system time, and the process used to change the time.
These events use Windows 2008 R2 as an example.
Event 20001 records driver installation for plug-and-play devices (such as USB flash
drives and hard disks). The recorded information includes the device brand, model, and SN.
The event can be used to identify the USB storage media inserted by users.
Answers:
D
D
Cybercrimes have the following characteristics: criminal subjects are professional,
criminal behavior is technically sophisticated, criminal objects are complicated, criminal
targets are diverse, and consequences are covert. These characteristics distinguish
cybercrimes from traditional crimes.
Cybercrimes have increased year on year over the past decade or so. They bring huge
economic loss and other severe consequences, and can severely threaten a nation's
security and social order.
Other forms of cybercrimes:
Network sniffing
Spoofing
Connection hijacking
Malicious damage
Buffer overflow
DoS/DDoS
Social engineering
Digital evidence can be presented in various forms, such as text, graphs, images,
animations, audio, and videos. Multimedia forms of computer evidence cover almost all
traditional types of evidence.
Digital evidence may be obtained from a variety of sources, such as:
Computer forensics includes two phases: physical evidence collection and information
discovery.
Physical evidence collection is the search for and retention of related computer
hardware at the scene of the cybercrime or intrusion.
Information discovery is the extraction of evidence (that is, digital evidence) from
original data (including files and logs) for proof or refutation.
ISO
The IT Security techniques subcommittee ISO/IEC JTC 1/SC 27 released the
Guidelines for identification, collection, acquisition and preservation of digital
evidence (ISO/IEC 27037:2012) in October 2012. The Guidelines stipulate the
definition, handling requirements, handling procedure, and key components
(including the continuity of evidence, evidence chain, security of the scene, and
roles and responsibilities in evidence collection) of digital evidence.
National Institute of Standards and Technology (NIST)
2004: SP 800-72 Guidelines on PDA Forensics and PDA Forensic Tools: an Overview
and Analysis; 2005: Cell Phone Forensic Tools: an Overview and Analysis (updated in
2007); 2006: SP 800-86 Guide to Integrating Forensic Techniques into Incident
Response; 2007: SP 800-101 Guidelines on Cell Phone Forensics, updated to
SP 800-101 Rev. 1 Guidelines on Mobile Device Forensics in 2014; 2009: Mobile Forensic
Reference Materials: a Methodology and Reification; 2014: NIST Cloud Computing
Forensic Science Challenges.
British Standard Institute (BSI)
Since 2003, the BSI has released a series of national standards, such as BIP
0008:2003 Evidential Weight and Legal Admissibility of Information Stored
Electronically, BIP 0008-2:2005 Evidential Weight and Legal Admissibility of
Information Communicated Electronically, BS 10008:2008 Evidential Weight and
Legal Admissibility of Electronic Information (updated in 2014), and BIP 0009:2008
Evidential Weight and Legal Admissibility of Electronic Information - Compliance
Workbook For Use With BS 10008.
Comprehensiveness
Search all files in the target system. Display the content of hidden, temporary, and
swap files used by the operating system or applications, and analyze data in special
areas of disks.
CD-ROM tool: CD-R Diagnostics can display data that cannot be viewed in normal cases.
Text search tool: dtSearch is used for text search, especially in Outlook .pst files.
Disk erasing tool: This type of tool is mainly used to erase residual data from the disks of
analysis machines before they are used in forensic analysis. Simply formatting such
drives is insufficient. For example, NTI's DiskScrub software can be used to completely
wipe data on a disk.
Drive image programs: Drive imaging software, such as SafeBack, SnapBack, Ghost, and
dd, can create a bit-for-bit image of an entire drive for forensic analysis.
Chip forensics: When a communications device cannot be used due to either intentional
or unintentional damage, chip forensics can be performed to extract information from
the device.
Cloud forensics: When data is deleted, cloud forensics can be used to locate the cloud
service provider to restore the data.
IoT forensics: When a networked device is compromised, IoT forensics can obtain related
data using sniffing and forensic technologies such as IoT black-box and distributed IDS.
SCA forensics: Side-channel analysis (SCA) is an attack against encryption devices. It
exploits the leakage of side-channel information, such as timing information, power
consumption, or electromagnetic radiation during device operation.
Symmetric encryption: In a symmetric encryption algorithm, only one key is used. Both
parties use this key to encrypt and decrypt data. Therefore, the decryption party must
know the encryption key in advance.
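The defining property above — one shared key used for both encryption and decryption — can be illustrated with a deliberately simplified toy cipher. This sketch is NOT a secure algorithm (real systems use AES or similar); it only demonstrates that the same pre-shared key reverses the operation:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the shared key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Encryption and decryption are the same XOR operation under one shared key."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

shared_key = b"pre-shared secret"          # known to both parties in advance
ciphertext = xor_crypt(shared_key, b"hello")
plaintext = xor_crypt(shared_key, ciphertext)  # the same key decrypts
print(plaintext)  # → b'hello'
```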
In digital signature technologies, digest information is encrypted using the private key of
the sender and then sent to the receiver together with the original text. The receiver can
decrypt the encrypted digest only by using the public key of the sender. The receiver
uses a hash function to generate a digest of the original text and then compares this
digest with the decrypted digest. If the two digests are the same, the received
information has not been tampered with during transmission. In this way, digital
signatures can verify information integrity.
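The digest-comparison step described above can be sketched as follows. The asymmetric part (encrypting the digest with the sender's private key and decrypting it with the public key) is omitted because the Python standard library has no RSA; the sketch shows only the hash-and-compare check that detects tampering:

```python
import hashlib

def digest(message: bytes) -> bytes:
    """Hash the original text, as both sender and receiver do."""
    return hashlib.sha256(message).digest()

# Sender side: compute the digest (in a real digital signature, this digest
# would then be encrypted with the sender's private key).
message = b"transfer 100 to account 42"
sent_digest = digest(message)

# Receiver side: recompute the digest of the received text and compare it
# with the (decrypted) digest that arrived with the message.
received = b"transfer 100 to account 42"
print(digest(received) == sent_digest)   # → True: not tampered with

tampered = b"transfer 900 to account 42"
print(digest(tampered) == sent_digest)   # → False: modified in transit
```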
A digital certificate is a file that contains information about the owner of a public key and
the public key, and is digitally signed by a CA. An important feature of a digital certificate
is that it is valid only within a specific period of time. Digital certificates can be used for
sending secure mail, accessing secure sites, and online electronic transaction and trading,
such as online securities transactions, online bidding and procurement, online office
work, online insurance, online taxing, online signing, and online banking.
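The validity-period feature noted above amounts to a simple window check: a certificate is usable only between its "not before" and "not after" dates. A minimal sketch, with hypothetical dates:

```python
from datetime import datetime, timedelta

def cert_is_valid(not_before: datetime, not_after: datetime,
                  now: datetime) -> bool:
    """A certificate is usable only within its [not_before, not_after] window."""
    return not_before <= now <= not_after

# Hypothetical one-year validity period.
not_before = datetime(2018, 1, 1)
not_after = not_before + timedelta(days=365)

print(cert_is_valid(not_before, not_after, datetime(2018, 6, 1)))  # → True
print(cert_is_valid(not_before, not_after, datetime(2019, 6, 1)))  # → False (expired)
```

Real certificate validation also checks the CA signature and revocation status; this sketch covers only the time window.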
Relevance
Relevance is the association of evidence to case facts. Digital evidence that may
have a substantial impact on the facts of a case shall be judged by the court as
relevant.
Objectivity
Objectivity can also be called authenticity. Digital evidence must remain unchanged
during the whole process, from initial collection to submission.
Legitimacy
Legitimacy of status: requires that electronic data have multiple backups, be
kept away from strong magnetic fields, high temperatures, dust, pressure, and
moisture, and be kept consistent with the original status of the target system or
have minimal changes.
Infer the possible author based on the obtained documents, words, syntax, and
writing (coding) style.
Discover the relationship between different pieces of evidence obtained from the
same event.
Attack sources can be devices, software, and IP addresses.
Link test: Link tests (also called segment-by-segment tracing) determine the source of
attacks by testing network links between routers, usually starting with the router closest
to the victim host. A tester performs hop-by-hop tests, testing whether a router's uplink
carries attack data. If a spoofing packet is detected, the tester will log in to the uplink
router to continue monitoring packets. This process continues until the attack source is
reached.
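The hop-by-hop procedure described above can be sketched as a loop over a topology map. Everything here is hypothetical: the router names, the upstream map, and the check for whether a link carries attack traffic (in practice, a tester monitors each uplink):

```python
# Sketch of the link test (segment-by-segment tracing) described above.
# The topology and the attack-traffic check are hypothetical stand-ins.
upstream = {                 # router -> its upstream neighbors
    "victim_gw": ["r1", "r2"],
    "r1": ["r3"],
    "r2": [],
    "r3": [],
}
attack_path = {"victim_gw", "r1", "r3"}   # links observed carrying attack data

def carries_attack_traffic(router: str) -> bool:
    return router in attack_path

def trace_attack_source(start: str) -> str:
    """Hop upstream from the router closest to the victim until no
    upstream neighbor still carries the attack traffic."""
    current = start
    while True:
        next_hops = [r for r in upstream.get(current, [])
                     if carries_attack_traffic(r)]
        if not next_hops:
            return current   # closest identifiable point to the source
        current = next_hops[0]

print(trace_attack_source("victim_gw"))  # → r3
```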
Packet recording: Packets are recorded on the key router of the Internet, and then data
mining technologies are used to extract information about the attack source. This
technique can produce valuable results and accurately analyze attack services (even after
the attack stops). However, it places high requirements on record processing and
storage capabilities. In addition, legal and confidentiality requirements must be carefully
considered when storing and sharing the information with ISPs.
Packet marking: Packets can be marked on each router through which they traverse. The
simplest method to mark packets is to use the record routing option (specified in RFC
791) to store the router address in the option field of the IP header. However, this
method increases the length of packets at each router and may lead to packet
fragmentation. In addition, attackers may pad fields reserved for routing with fake data
to avoid tracing.
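The simplest marking scheme above — each router appending its own address to the packet's option field, as with the RFC 791 record route option — can be sketched as follows. The packet structure and addresses are hypothetical:

```python
# Sketch of the simplest packet-marking scheme described above: every router
# the packet traverses records its address in the option field, so the packet
# grows at each hop (which is why fragmentation becomes a risk).

def forward(packet: dict, router_addr: str) -> dict:
    """Each traversed router appends its address to the packet's options."""
    packet["options"].append(router_addr)
    return packet

packet = {"src": "198.51.100.7", "dst": "203.0.113.9", "options": []}
for router in ["10.0.0.1", "10.0.1.1", "10.0.2.1"]:
    packet = forward(packet, router)

# The recorded path lets the victim reconstruct the route toward the source.
print(packet["options"])  # → ['10.0.0.1', '10.0.1.1', '10.0.2.1']
```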
Spam tracing: Shallow mail behavior parsing can check and analyze the server
connection count, sender's IP address, sending time, sending frequency, and number of
recipients. It can also check the shallow mail subject and detect sending behavior. In
addition, the SMTP MTA host can perform transparent parsing on the source of the mail
to identify illicit behavior, such as anonymity, forgery, and abuse. In this way, the host
can reject the mail or throttle it through delayed sending.
Answers:
D
C
The Morris Worm Incident was a wake-up call to the public about computer network
vulnerabilities. This incident caused a panic in the United States, and convinced people
that the more computers are used, the higher the possibility of computer network attacks.
These days, with computers more tightly connected than ever before and networks open
to more people, Morris-like worm programs are inevitable. If such a program is exploited,
the damage can be large. The establishment of CERT marked the transformation of
information security from traditional static protection to sound dynamic protection.
FIRST is the premier organization and recognized global leader in incident response, and
brings together a variety of computer security incident response teams. FIRST members
work together to handle computer security incidents and promote incident prevention
plans.
FIRST members develop and share technical information, tools, methods, processes,
and best practices.
FIRST members use their comprehensive knowledge, skills, and experience to foster
a safer global electronic environment.
China has created additional professional emergency response organizations, such as
National Computer Network Intrusion Prevention Center, National 863 Program
Computer Intrusion Prevention, and Antivirus Research Center. Many companies also
offer paid cyber security response services.
As a national emergency center, CNCERT/CC:
Carries out prevention, discovery, warning, and coordination of Internet cyber security
incidents according to the principle of "proactive prevention, timely discovery, quick
response, and recovery".
Test and assessment: As a professional organization for cyber security test and
assessment, CNCERT/CC provides security test and assessment services for
governments and enterprises in accordance with relevant standards by adopting
scientific methods, standard procedures, and fair and independent judgment.
CNCERT/CC also organizes efforts to formulate standards for communications
network security, and telecommunication network and Internet security protection.
It also technically monitors and analyzes national Internet financial risks.
The Cybersecurity Law of the People's Republic of China is hereinafter referred to as
"Cybersecurity Law".
The following laws and regulations are complements to the Cybersecurity Law:
Some other laws and regulations are being planned, and will contribute to a more
comprehensive cyber security law system.
Cyber security incidents are as follows:
Malicious program: Computer virus, worm, Trojan horse, botnet, hybrid program
attack, or malicious code embedded in web page
Cyber attack: DoS attack, backdoor attack, vulnerability attack, network scanning
and eavesdropping, phishing, or interference
The relevant cyber security incident emergency command center executes the
corresponding emergency plan, organizes response work, and performs risk
assessment/control and emergency preparations.
The national technical support team for cyber security emergency must be
always available and check that emergency vehicles, devices, and software
tools are in good condition.
Level-III response to yellow signal warning and level-IV response to blue signal
warning:
Intrusion detection
The protection, detection, response, and recovery in the PDRR model constitute an
information security process.
Protection: takes measures (such as patching, access control, and data encryption)
to defend against all known security vulnerabilities.
Detection: detects attempts to bypass the defense system and identifies the
intruder, including the attack source, attack status, and system loss.
Recovery: restores the system after an intrusion incident occurs. The defense
system must be updated to prevent the same type of intrusion incident from recurring.
In remote emergency response, emergency response teams obtain temporary host or
device accounts from the customer network personnel, and log in to the hosts/devices
for detection and service support. After the incidents are resolved, the emergency
response teams provide detailed emergency response reports.
If remote login fails or the incidents cannot be resolved, confirm local emergency
response with customers.
The emergency response process varies according to situations. The emergency
response service personnel need to flexibly handle security incidents but must record all
process changes.
Reference files:
Proactive discovery: Incidents are found by the intrusion detection device and
global warning system
Determine the person responsible for handling the incident, and provide necessary
resource support.
Estimate the impact and severity of the incident to determine a proper emergency
response plan.
Check the following: affected hosts and networks, network intrusion extent,
permissions obtained by the attacker, security risks, attack means, and spread
scope of the exploited vulnerabilities.
Modify the filtering rules of all firewalls and routers to deny the traffic from
suspicious hosts.
D
B