
• The analog video security system consists of main components such as the analog camera, VCR, analog switching matrix, and cathode ray tube (CRT) display. The analog camera transmits video images to the analog switching matrix through coaxial cables or analog optical transceivers for browsing, and transmits the video images to the VCR for storage. The analog control keyboard can be used to switch camera images on the monitor and control the PU.
• CCD manufacturing is more complex than CMOS manufacturing because CCDs are built on a dedicated silicon semiconductor process, while CMOS sensors use the standard metal-oxide semiconductor process. The imaging quality of CMOS is poorer than that of CCD because each CMOS pixel requires its own A/D conversion circuitry and digital signal amplifier. A CMOS sensor has low power consumption because the electric charge is converted directly on the CMOS chip, whereas a CCD outputs analog signals that must be processed externally.
• The Neural Network Processing Unit (NPU) uses circuits to simulate the human neuron
structure. It is suitable for deep learning algorithm training.

• A Graphics Processing Unit (GPU) has a large number of computing units and ultra-
long pipelines. It is suitable for processing a large amount of same-type data.

• A Field-Programmable Gate Array (FPGA) features static reprogramming and dynamic system reconstruction. It enables hardware functions to be modified through programming like software. In addition, it has powerful parallel processing capability, which can greatly improve the speed of complex computing.

• A Digital Signal Processor (DSP) processes signals through numerical calculation.

• An Application-Specific Integrated Circuit (ASIC) is a chip customized for a particular use, for example, dedicated audio and video processors.
• When observing vehicles on roads at night, cameras supporting highlight compensation can capture vehicle license plates clearly. These cameras are suitable for scenarios where license plates must be clearly seen in backlight and high image resolution is required, such as toll stations and entrances and exits of parking lots.
• Inside the camera there are three CCD couplers that sense blue, green, and red light. By default, the electronic amplification ratios of the three photosensitive circuits are the same, that is, 1:1:1. White balance adjustment changes this proportional relationship according to the scene. For example, if the blue, green, and red proportions of a white reference object measure 2:1:1, the white balance gains are set to 1:2:2: the amplification of the blue circuit is weakened and that of the green and red circuits is increased, so that after the signal passes through the white balance adjustment circuit, the blue, green, and red proportions in the shot image are equal. In other words, if a supposedly white object looks blue, white balance changes the normal 1:1:1 relationship so that the image is still white.
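The gain adjustment described above can be sketched numerically. This is a minimal illustration of the principle, not camera firmware: each channel gain is simply the normalized inverse of the measured channel proportion.

```python
def white_balance_gains(blue, green, red):
    """Per-channel amplifier gains that equalize the blue:green:red
    proportions of a white reference. Each gain is the inverse of the
    measured proportion, normalized so the smallest gain is 1."""
    inverse = [1.0 / blue, 1.0 / green, 1.0 / red]
    smallest = min(inverse)
    return tuple(g / smallest for g in inverse)

# A white reference measured as blue:green:red = 2:1:1 (too blue)
# yields gains of 1:2:2, matching the example above.
print(white_balance_gains(2, 1, 1))  # → (1.0, 2.0, 2.0)
```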
• Defogging technologies
▫ Optical defogging: Near-infrared rays can penetrate a certain amount of fog and dust. Optical defogging uses the principle that near-infrared rays diffract around small particles to achieve accurate and fast focusing. The key to the technology lies in the lens and the filter. Image definition is improved by using the optical imaging principle.

▫ Algorithm-assisted defogging: also called video image antireflective technology. It makes images blurred by fog, moisture, and dust clear again. It emphasizes certain features in the images and suppresses uninteresting ones, improving image quality and increasing the amount of usable information.

▫ Photoelectric defogging: Combines the preceding two technologies. An embedded chip and ISP/DSP are used, together with suitable lenses and sensors, to output color images.
• Compression algorithms of encoders are similar to automobile engines. A high-
performance engine has a complex structure and high cost, but saves fuel. If you
mount a high-performance engine on an automobile, the one-time purchase cost is
slightly higher, but the lifelong use cost of the automobile is reduced because of fuel
saving. The H.264 and H.265 encoding algorithms are complex and have high chip cost.
However, with the image quality remaining the same, these algorithms require much
lower bandwidth and storage cost.

• Compared with H.264, H.265 can reduce the video size by about 39-44% while
ensuring the same video quality.
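A back-of-the-envelope illustration of what that reduction means for storage. The 4 Mbit/s bitrate and the 40% midpoint are assumed example values, not figures from the slides:

```python
def daily_storage_gb(bitrate_mbps, hours=24):
    """Storage consumed by one continuously recorded channel, in GB."""
    return bitrate_mbps * hours * 3600 / 8 / 1000  # Mbit -> MB -> GB

h264 = daily_storage_gb(4)   # a 4 Mbit/s H.264 stream: 43.2 GB per day
h265 = h264 * (1 - 0.40)     # assume ~40% reduction, midpoint of 39-44%
print(round(h264, 1), round(h265, 1))  # → 43.2 25.9
```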
• An HDD consists of mechanical components and electronic components as follows:

▫ Magnetic head: It reads and writes data.

▫ Actuator: It drives the arm to move the magnetic head to a specific position.

▫ Platter group: It is the data carrier.

▫ Spindle: It keeps the platters rotating at a high speed.

▫ Control circuit: It implements system control, speed adjustment, and drive functions.

• An SSD has a simple structure. It consists of a control chip, a cache chip, and flash
memory chips used to store data. It has high read and write speed and low power
consumption.

▫ Main control chip: is used to properly allocate data load on flash memory chips
and connect flash memory chips to external interfaces.

▫ Cache chip: assists the main control chip in data processing.

▫ Flash memory chip: is used to store data.


• XOR: Exclusive OR
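The XOR note above presumably refers to RAID parity: the parity block is the XOR of all data blocks, and any one lost block can be rebuilt by XOR-ing the survivors. A minimal sketch (the two-block example data is made up):

```python
def xor_parity(blocks):
    """XOR equal-length byte blocks together, e.g. to form a RAID parity block."""
    parity = bytes(len(blocks[0]))
    for block in blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return parity

d0, d1 = b"\x0f\xf0", b"\x33\x33"
parity = xor_parity([d0, d1])
# A lost block is recovered by XOR-ing the surviving blocks with the parity:
assert xor_parity([d1, parity]) == d0
```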
• DAS is the earliest of these three storage architectures. In DAS, storage devices are directly connected to a computer through SCSI or Fibre Channel. DAS is installed on PCs and servers. Because servers are independent of each other, storage devices are separated and cannot be shared, leading to heavy server load.
• SAN is a network-centric storage architecture that consists of multiple types of devices.
All storage devices can be centrally shared and managed. In addition, SAN provides the
redundancy backup function.
• NAS is similar to a dedicated file server. Different from a traditional general-purpose server, NAS provides only file system functions and cross-platform file sharing services. Data is transmitted between clients and storage devices without being forwarded by servers; the server only controls and manages the data. Therefore, responses are faster and data bandwidth is higher.

• File system interfaces: NFS and CIFS


• The columns of a matrix switcher indicate its inputs (cameras), and the number of rows indicates the number of monitors. The point where each row and column meet in the matrix represents an I/O state of the system. Therefore, the images of any channel can be displayed on any monitor, and all monitors can display the images of any channel without affecting each other.
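The any-input-to-any-output behavior can be modeled as a simple routing table. This is a toy model for illustration, not an actual switcher protocol: each monitor (row) independently selects one camera (column), and changing one route does not affect the others.

```python
class MatrixSwitcher:
    """Toy model of an N-input (cameras) x M-output (monitors) matrix switcher."""
    def __init__(self, cameras, monitors):
        self.cameras = cameras
        self.monitors = monitors
        self.route = {}  # monitor index -> camera index

    def switch(self, monitor, camera):
        if not (0 <= monitor < self.monitors and 0 <= camera < self.cameras):
            raise ValueError("no such crosspoint")
        self.route[monitor] = camera  # one active crosspoint per row

m = MatrixSwitcher(cameras=16, monitors=4)
m.switch(0, 7)   # monitor 0 shows camera 7
m.switch(1, 7)   # monitor 1 can show the same camera independently
```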
• An LCD consists of an LCD panel and backlight devices. The LCD panel is composed of two parallel pieces of glass between which liquid crystals are placed to refract light and produce images.

• LED and LCD differ in the LCD panel and backlight type. LED has larger pixels, higher
screen saturation, and higher brightness.

• DLP technology is used to process signals digitally before projection. This technology
can achieve high fidelity and vivid color of the image.
• Pan/Tilt/Zoom (PTZ): In security management application, the PTZ represents full-
sphere movements of a PTZ camera (up and down/left and right) and lens zooming
and focusing control.
• The video security center devices and their functions are as follows: The decoder decodes video, and the matrix distributes video streams to the corresponding screens.

• The data center is responsible for storing data, and the device is a server cabinet. The
environment must meet the requirements for cooling and fire prevention.

• This is a picture of the data center. The application scenario of the data center is totally different from that of the video security center.
• ONVIF, started in May 2008 by Axis Communications, Bosch, and Sony, is a global and open industry forum with the goal of facilitating the development and use of a global open standard for the interface of physical IP-based security products. By March 2011, 279 companies had joined ONVIF.

• In November 2008, the forum officially released the first version of ONVIF
specifications.

• In November 2010, the forum released the second version of ONVIF specifications. The
specifications involve device discovery, real-time audio and video, PTZ camera control,
recording control, and video analysis.
• Status Quo

• >10,000 equipment manufacturers (such as camera vendors).

• >1000 video integrators of public safety industry.

• >100 video security platform providers.

• Different vendors use different communication modes and protocols.

• Result

• Devices from different vendors cannot communicate with each other.


• NVT: Core + Streaming + Media + Device I/O + Imaging + PTZ + Video Analytics

• NVD: Core + Streaming + Receiver + Display + Device I/O

• NVS: Core + Streaming + Recording Search + Replay Control + Device I/O + Receiver +
Recording Control

• NVA: Core + Streaming + Receiver + Video Analytics + Video Analytics Device + Device
I/O

• NVC: All
• Profile S: A collection of configurations for streaming media, such as recording, transmission, and real-time browsing on PUs, plus additional functions such as PTZ control and relay output.

• Profile G: A collection of configurations for storage, such as how to store, play back,
and search for videos.

• Profile C: A collection of configurations for access control.


• Up to 16.5 million Model T vehicles have been sold around the world.
• Relationship between data, information, knowledge, and intelligence.

▫ Data is a fact and is the raw record of everything in the world.

▫ Information is arranged data that has meanings and can be used.

▫ Knowledge is a systematic conclusion drawn from comprehensive analysis of much information, which can guide decision-making.

▫ Intelligence is the principles and rules reasoned from a great deal of knowledge, which can be used for innovation.
 In the 1990s, Walmart successfully introduced the Apriori algorithm (an association analysis algorithm) to analyze its POS data. During the data processing, Walmart obtained an important piece of information: beer and diapers often appeared in the same shopping basket. Walmart then observed customers' consumption behavior from the perspective of their psychological factors and determined that there was a real relationship between beer and diapers. At this point, the relationship between beer and diapers became knowledge. Accordingly, Walmart adjusted the placement of beer and diapers and achieved a favorable sales volume.
• The core value of artificial intelligence is the automation of knowledge work.
• Image classification: Maps images to different categories, which can be used for image
search and archiving.

• Object detection & recognition: Detects, locates, and identifies different objects in
images, including detection of digits, characters, and pedestrians. This function can be
used for OCR, unmanned driving, and intelligent image tailoring.

• Semantic segmentation: Predicts the label of each pixel in the image by segmentation
and recognition. It can be used for unmanned driving, augmented reality, and situation
awareness.
• The coaxial cable is a transmission medium used in the early stage. There are two
types of coaxial cable standards: 10Base2 and 10Base5. Both of the two standards
support a 10 Mbit/s transmission rate, and the maximum transmission distance is 185
m and 500 m, respectively. Generally, the 10Base2 coaxial cable uses the BNC
connector, and the 10Base5 coaxial cable uses the type-N connector. The diameter of
the coaxial cable used by the 10Base5 standard is 9.5 mm, and that of the coaxial
cable used by the 10Base2 standard is 5 mm. Therefore, the former is also called
thicknet, and the latter is called thinnet. Currently, a 10 Mbit/s transmission rate
cannot meet enterprise network requirements. Therefore, coaxial cables are seldom
used on enterprise networks.

• The transmission rate supported by optical fibers can be 10 Mbit/s, 100 Mbit/s, 1 Gbit/s,
10 Gbit/s, or even higher. Optical fibers are classified into single-mode optical fibers
and multi-mode optical fibers according to the modes of transmitting optical signals.
The single-mode optical fiber allows only one mode of light to propagate. Without
intermodal dispersion, it applies to long-distance high-speed transmission. The multi-
mode optical fiber allows multiple modes of light to propagate over the same fiber.
Due to the serious signal pulse broadening caused by large intermodal dispersion, it is
mainly used for short-distance transmission in LANs. There are many types of optical
fiber connectors, including ST, FC, SC, and LC connectors.

• Twisted pair cables and coaxial cables both use electrical signals to transmit data, and
optical fibers use optical signals to transmit data.
• With the development of enterprise networks, more and more users need to access the
networks. The large number of access ports provided by switches can meet this
requirement. In addition, switches completely solve the conflict problem of the early
Ethernet, greatly improving the Ethernet performance and security.

• Switches work at the data link layer and perform operations on data frames. After
receiving data frames, switches forward the frames according to the header
information.

• Next, let's take a small switching network as an example to explain the basic working
principles of switches.
• 1. In a bus topology, all devices on the network are directly connected to the public bus
through corresponding hardware interfaces. Nodes communicate with each other in
broadcast mode. The information sent by a node can be received by other nodes on
the bus. Advantages: It is a common topology for LANs because of its simple structure,
easy cabling, high reliability, and easy expansion. Disadvantages: All data is
transmitted through the bus, which becomes a bottleneck of the entire network. Fault
diagnosis is difficult. Ethernet is the most famous network with the bus topology
structure.

• 2. In a star topology, each node is connected to the central node through a separate
communication line. Advantages: simple structure, easy implementation and
management, and easy detection and elimination of faults in connection points.
Disadvantages: The central node is a bottleneck of the entire network. If the central
node is faulty, the network will break down.

• 3. In a ring topology, each node forms a closed loop through the communication line,
and data in the loop can only be transmitted unidirectionally. Advantages: simple
structure; optical fibers are recommended; long-distance transmission, with a
calculable transmission latency. Disadvantages: Each node on the ring network can
become a bottleneck of network reliability. If any node is faulty, the network will break
down. In addition, fault diagnosis is difficult. The token ring network is the most
famous network with a ring topology structure.
• The front-end backhaul network is responsible for the access of network cameras.
Video signals are transmitted to the dedicated video network through the site
observation unit.

• GE: Gigabit Ethernet

• GPON: Gigabit-capable Passive Optical Network


• 802.1X and MAC address authentication are enabled on access devices. When a camera connects to the network, the access device sends the user name, password, and MAC address to the controller for authentication. After the authentication succeeds, the access device switches the access port to a service port, and video data can enter the network.

• P2MP wireless link backup: point-to-multipoint backup.


• The 2:N optical splitter is mainly used for optical path protection. It has two upstream ports and N downstream ports. Currently, the common upstream rate is 1.24416 Gbit/s, the downstream rate is 2.48832 Gbit/s, and the maximum split ratio is 1:128. 10G-PON supports 2.5 Gbit/s upstream and 10 Gbit/s downstream; 100G-PON will be supported in the future.

• PON advantages:

▫ High bandwidth and flexible scalability: The optical access bandwidth is high,
which meets the current and future bandwidth requirements. High-bandwidth
access is supported, with the upstream rate of 1.244 Gbit/s, and downstream rate
of 2.488 Gbit/s.

▫ P2MP access: A fiber from the central office is split to different users' homes with an optical splitter, reducing optical fiber costs.

▫ Passive network: No active components exist in the ODN, which is therefore maintenance-free and consumes no power, reducing OPEX.

▫ Low fiber loss and wide coverage: meets the requirement for large capacity with few central offices. Optical fibers are used for transmission, and the coverage radius of the PON access layer can reach tens of kilometers.

• The network consisting of Passive Optical Splitters (POSs) and optical fibers is also
called an Optical Distribution Network (ODN).
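Given the line rates above, the worst-case even share per user under a given split ratio is easy to estimate. Note this even split is only a lower bound: real GPON allocates bandwidth dynamically, so this is a sketch, not how an OLT actually schedules traffic.

```python
def per_user_mbps(line_rate_gbps, split_ratio):
    """Even share of the PON line rate across `split_ratio` users, in Mbit/s."""
    return line_rate_gbps * 1000 / split_ratio

# GPON downstream (2.48832 Gbit/s) behind a 1:64 splitter:
print(round(per_user_mbps(2.48832, 64), 2))  # → 38.88
```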
• AP: Access Point

• RT: Remote Terminal

• Maximum throughput per AP: 600 Mbit/s

• Maximum throughput per RT: 100 Mbit/s

• A single AP can satisfy the transmission requirement of 100 channels of HD video.

• Total throughput of a single base station with 4 APs configured in omnidirectional mode: 2.4 Gbit/s
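The 100-channel figure above is consistent with an assumed per-channel bitrate of about 6 Mbit/s (the 6 Mbit/s value is an inference from the stated numbers, not given in the slides). A quick check:

```python
def hd_channels(ap_throughput_mbps, per_channel_mbps=6):
    """How many constant-bitrate HD streams fit in an AP's throughput.
    6 Mbit/s per channel is an assumption consistent with the figures above."""
    return int(ap_throughput_mbps // per_channel_mbps)

print(hd_channels(600))       # → 100 channels per AP
print(hd_channels(4 * 600))   # → 400 channels for a 4-AP base station
```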

• The propagation of waves along a straight line is called line-of-sight (LOS) propagation. The propagation conditions of wireless communication systems are usually divided into LOS and non-line-of-sight (NLOS). In LOS propagation, radio signals travel along a direct path between the transmitter and receiver without any obstacle.

• LOS: line-of-sight
• AC: Access Controller

• AP: Access Point


• Star architecture for single-fiber bidirectional transmission; independent running to
enhance security; applicable to a wide temperature range; surge protection, and plug-
and-play
• OTN, short for Optical Transport Network, transmits information through optical
signals over optical fibers.

• Currently, Huawei transmission products support the industry's highest transmission capacity: single-wavelength 600 Gbit/s in testing and single-wavelength 200 Gbit/s in commercial use. A pair of optical fibers provides more than 20 Tbit/s of bandwidth and can carry 1 million 4K ultra-HD cameras (20 Mbit/s per camera). A single fiber supports a maximum of 96 wavelengths.
• Backbone network bearer bandwidth = Number of optical fibers x Number of
wavelengths x Single wavelength capacity

• 2 fibers x 96 wavelengths x 100 Gbit/s = 19.2 Tbit/s (approximately 20 Tbit/s)

• 1 fiber x 80 wavelengths x 100 Gbit/s = 8 Tbit/s

• Learn how to calculate the fiber capacity and number of wavelengths on the backbone
network.

• Common single-wavelength bandwidths are 10 Gbit/s, 40 Gbit/s, and 100 Gbit/s.

• Generally, the number of wavelengths is 40, 80, or 96.
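The formula above can be checked directly (wavelength capacity in Gbit/s, result in Tbit/s):

```python
def backbone_capacity_tbps(fibers, wavelengths, per_wavelength_gbps):
    """Bearer bandwidth = number of fibers x wavelengths x single-wavelength capacity."""
    return fibers * wavelengths * per_wavelength_gbps / 1000  # Gbit/s -> Tbit/s

print(backbone_capacity_tbps(2, 96, 100))  # → 19.2 (roughly the 20 Tbit/s cited)
print(backbone_capacity_tbps(1, 80, 100))  # → 8.0
```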

• Current situation of the single-wavelength bandwidth: The bandwidth between the level-1 and level-2 platforms is 10 Gbit/s, that between the level-2 and level-3 platforms is 2.5 Gbit/s, and that between level-2 platforms is 1 Gbit/s. The process for expanding the single-wavelength bandwidth of the optical fiber is quite simple.

• The IP hard pipe is a technology that creates a dedicated pipe by reserving hardware
resources on an IP network.

• The video in key areas must be continuous and complete, posing strict requirements on
the network quality. For example, the IP hard pipe technology can be used for
observing key scenarios and areas such as emergency command, city infrastructure,
and city centers, ensuring independent and high-quality transmission.
 Basic functions of a firewall:

 Content filtering: The firewall filters data that enters and exits networks,
manages network access behaviors, blocks forbidden services, records the
information and activities that pass through the firewall, detects network
attacks, and generates alarms.

 Deployed at the network border, the firewall provides functions such as the
network address translation (NAT) and virtual private network (VPN).

 The firewall supports functions including antivirus, intrusion detection, authentication, encryption, remote management, and proxy.

 The firewall also provides in-depth detection to control some protocols.

 The firewall supports attack defense and scanning detection.

 Firewalls are usually deployed at enterprise network borders. The firewalls, with
powerful control capabilities, can provide VPN services to ensure the
communication between enterprises.

 System logs support after-the-event audit. A firewall provides logs containing various operations and attacks, and provides log query and filtering methods to facilitate search and analysis.
• When a camera is deployed outdoors, hackers can replace the camera with a laptop
and access a network without authorization.

• After using the laptop to illegitimately access the network, the hacker can intrude into
the dedicated video network to tamper with or maliciously delete data, or disclose
sensitive information.

• Hackers exploit camera vulnerabilities to perform remote control, damage the observation system, or steal sensitive information.
• Authentication in three aspects is introduced to deal with front-end security threats.

▫ 1. The device information authentication mainly uses the camera IP address and
MAC address for authentication. The device serial number and firmware version
will be supported later.

▫ 2. The traffic information authentication is used to detect the vendor of video traffic and restrict access from unauthorized vendors. (For example, cameras of vendor A are authorized, but those of vendor B are not. Through traffic detection, cameras of vendor A are allowed access, while those of vendor B are not.)

▫ 3. The protocol vulnerability protection detects on the firewall whether a camera vulnerability is being exploited to attack other cameras on the network. If such behavior is detected, the attack is blocked and an alarm is reported, protecting all cameras on the network. If the traffic does not pass through the firewall, you can isolate cameras to prevent the attack from spreading, for example, by configuring an access whitelist on the cameras.
• 802.1X authentication is enabled for access switches. Before the authentication
succeeds, the camera access port is a management port, which can be used to access
only the authentication server.

• The authentication switch sends the user name, password, and MAC address of the
camera to the controller for a check.

• After the authentication succeeds, the camera access port changes to a service port,
which can be used to access other servers on the intranet.
• North-south traffic: traffic generated during interaction with external systems

• East-west traffic: traffic generated during interaction with internal systems


• Video security devices went through the following phases: From analog cameras that
simply collect video data in the analog/digital era; to IPCs that support video
networking and sharing in the post-network era; and now to cameras that can adapt
to multiple scenarios and support multi-service data association and match.

• Video security devices must evolve from simply video collection to multi-scenario
adaptation and multi-service data association and match.

• During the transformation of the public safety industry, the most prominent change is
that digital data is transformed into structured data, image data is transformed into
feature data, and independent video carriers are transformed into a unified data
resource pool.

• In the post-network era, vendors in the industry embed simple intelligent functions
into traditional hardware devices to meet the service requirements in a single-purpose
scenario.

• During this process, cameras have transformed from single-purpose terminals into a platform that can integrate multiple applications, meeting requirements in multi-service scenarios.

• Against this backdrop, the software-defined camera, HoloSens SDC, has come into
being, revitalizing the camera industry.
• Cameras impose high requirements on hardware stability and reliability because they
are installed and need to operate in complex environments.

• The low maintenance rate reduces after-sales costs.


• Video security devices went through the following phases: From analog cameras that
simply collect video data in the analog/digital era; to IPCs that support video
networking and sharing in the network era; and now to cameras that can adapt to
multiple scenarios and support multi-service data association and match. The cameras
evolve towards intelligent applications, all-weather adaptions, and practical industry
applications.

• HoloSens SDCs will help upgrade the public safety industry.

• Intelligence: HoloSens SDCs implement security checkpoint observation, target blocklist-based alert, and metadata processing through a variety of functions such as license plate recognition, behavior analysis, metadata extraction, and target detection.

• All-weather adaptation: HoloSens SDCs can work properly in environments with temperatures ranging from –40°C to +65°C. HoloSens SDCs also feature IP67-rated ingress protection, IK10-rated impact protection, self-cleaning glass, class-D anti-corrosion, and PoE+ power supply, facilitating construction and adapting to harsh weather conditions.

• Practical industry application: The micro checkpoint camera, PoE infrared PTZ dome
camera, and long-focus bullet camera can be applied in a variety of industries such as
Smart City, ITS, and rail transportation.
• HoloSens SDC: software-defined camera

• The IPC is a type of digital video camera that can transmit video and audio data over the Internet or a LAN. Users can directly view images from the camera, perform PTZ control, and set system parameters in the camera web system through a web browser.

• HoloSens SDCs are equipped with professional HiSilicon AI chips with a computing power
of up to 16 TOPS, making it possible to deploy cameras in an all-around and intelligent
manner. In addition, based on the container architecture, Huawei develops the industry's
first operating system dedicated to HoloSens SDCs. The software-defined architecture
enables software to be decoupled from hardware. Huawei also builds a camera App Store
that integrates a wide assortment of intelligent algorithms and applications. Users can flexibly load algorithms and applications on cameras as simply as installing apps on smartphones.

• HoloSens SDC OS: operating system based on the Linux kernel, provides a standard and
normalized software operating environment; enables software to be decoupled from
hardware; provides open service-oriented interfaces to help build an abundant camera
ecosystem; supports independent operation of multiple algorithms; supports algorithm
upgrade or switchover without interrupting services.

• HoloSens SDC Studio: An end-to-end development tool chain is available for developers to
train, develop, verify, and roll out algorithms and provides a wide array of services such as
general algorithm models, algorithm model file format conversion, and automatic data
labeling, reducing the development costs and improving the commissioning efficiency.
Additionally, the HoloSens SDC algorithms and applications can be centrally managed on
the HoloSens SDC Store.
• Traffic Behavior: Traffic Behavior cameras are installed at intersections and function as
ePolice, helping traffic management. The cameras provide license plate recognition
and vehicle feature recognition functions and work with lane lines and traffic lights to
observe the unidirectional traffic on a single lane or multiple lanes.

• Checkpoint: Checkpoint cameras are used as checkpoints for road observation. They
are installed far away from intersections on urban roads or highways. They observe the
unidirectional or bidirectional traffic of a single lane or multiple lanes, and collect and
identify key information about passing vehicles in real time, such as the vehicle color,
license plate, and brand.

• Micro checkpoint cameras are generally installed above roads with 1–4 lanes or at
intersections and provide live video-related services as well as license plate capture and
recognition functions.

• The difference between a checkpoint camera and a micro checkpoint camera is that the micro checkpoint camera can also be used for video observation.
• Every 10°C increase in temperature doubles the aging speed of electronic parts and components. Huawei has strict requirements on the heat dissipation design of devices. The working temperature of parts and components has a margin of at least 10%, prolonging the service life of devices through stronger resistance to harsh environments.
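The 10-degree rule of thumb above implies an exponential aging factor; a one-line sketch of the rule, not a precise reliability model:

```python
def relative_aging_rate(delta_t_celsius):
    """Rule of thumb: every 10 degrees Celsius of extra temperature
    doubles the aging speed of electronic components."""
    return 2 ** (delta_t_celsius / 10)

print(relative_aging_rate(10))  # → 2.0
print(relative_aging_rate(20))  # → 4.0  (two doublings)
```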
• The following three products support physical stabilization:

▫ IPC6125-WDL-FA

▫ IPC6525-Z30

▫ IPC6625-Z30
• Halves the repetition rate, saving platform storage and compute resources.
• Checkpoint cameras provide the following functions:

▫ Speeding/Low speed detection

▫ Detection of regular traffic violations such as wrong-way driving, failure to give right-of-way to pedestrians, and unsafe lane changes

▫ In-vehicle feature detection such as seat belt infractions (driver and front
passenger) and hands-free device infractions (driver)

• Principles:

▫ Vehicle detection: The camera splits and classifies images, obtains moving objects,
including pedestrians, non-motorized vehicles, and motor vehicles, and further
recognizes the motor vehicles through deep learning algorithms.

▫ Vehicle movement recording: A lot of incidents may occur when a vehicle is moving, for example, changing lanes or being blocked by other objects. In this case, the camera automatically switches the trajectory generation mode to implement intelligent movement recording and accurately obtain vehicle movement information to take snapshots.

▫ Vehicle information identification: The camera selects the optimal image from
the captured vehicle images and identifies vehicle information such as the vehicle
model, license plate, and vehicle color.
• Traffic behavior cameras provide the following functions:

▫ Red-light-running detection

▫ Speeding/Low speed detection

▫ Detection of regular traffic violations at intersections such as unsafe lane changes and wrong-way driving

• Principles:

▫ Vehicle detection: The camera splits and classifies images, obtains moving objects,
including pedestrians, non-motorized vehicles, and motor vehicles, and further
identifies the motor vehicles through the deep learning algorithm.

▫ Vehicle movement recording: A lot of incidents may occur when a vehicle is moving, for example, changing lanes or being blocked by other objects. In this case, the camera automatically switches the trajectory generation mode to implement intelligent movement recording and accurately obtain vehicle trajectory information to take snapshots.

▫ Vehicle information identification: The camera selects the optimal image from
the captured vehicle images and identifies vehicle information such as the vehicle
model, license plate, and vehicle color.
• Micro checkpoint cameras are generally installed above roads with 1–4 lanes or at
intersections and provide live video-related services as well as license plate capture and
recognition functions.

• Principles:

▫ Vehicle detection: The camera splits and classifies images, obtains moving objects,
including pedestrians, non-motorized vehicles, and motor vehicles, and further
identifies the motor vehicles through the deep learning algorithm.

▫ Vehicle movement recording: A lot of incidents may occur when a vehicle is moving, for example, changing lanes or being blocked by other objects. In this case, the camera automatically switches the trajectory generation mode to implement intelligent movement recording and accurately obtain vehicle trajectory information to take snapshots.

▫ License plate identification: The camera selects the optimal image from the
captured vehicle images, locates the license plate in the image, splits the license
plate characters, and matches the letters, digits, and other types of characters
separately to obtain an accurate license plate.
• Four modes:

▫ Micro checkpoint: The camera supports object classification parameter setting, motor vehicle parameter setting, and recognition and snapshot of license plates, non-motorized vehicles, and people.

▫ Target checkpoint: The camera supports facial parameter setting and target
detection and snapshot.

▫ Full intelligence: The camera supports settings of facial parameters and object
classification parameters to implement the target detection and object
classification functions. The two functions are enabled by default. This mode is
mainly used in scenarios where pedestrians and vehicles appear simultaneously,
such as entrances and exits of gas stations, buildings, campuses, communities,
and villages, and city intersections.

▫ Behavior analysis: The camera supports behavior analysis functions such as queue
length detection, crowd density detection, and intrusion detection.
• A primary camera that supports Primary-Secondary Camera Observation is installed under
the same switch as an existing generic camera (secondary camera). The network
transmission between the two cameras is normal. The primary camera obtains video
streams from secondary cameras, performs intelligent analysis, and outputs two channels
of video streams with analysis results.

• Currently, a primary camera supports a maximum of four secondary cameras.

• To implement the Primary-Secondary Camera Observation function, the primary camera needs to obtain video streams from secondary cameras in real time. Therefore, it is recommended that the primary camera be installed near the secondary cameras so that the primary and secondary cameras are under the same switch. If they are not under the same switch, ensure that the network connection between the primary camera and the secondary cameras is normal (the packet loss rate is 0). The switch must have a 1000 Mbit/s network port.
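A rough bandwidth check shows why a GE port comfortably carries this traffic. The 4 Mbit/s per-stream figure below is an assumption for illustration; the four-secondary limit and the 1000 Mbit/s port come from the notes above.

```python
# Rough traffic estimate for Primary-Secondary Camera Observation.
STREAM_MBITS = 4          # assumed bitrate of one HD video stream
MAX_SECONDARIES = 4       # per the notes, up to four secondary cameras

# The primary camera pulls one stream from each secondary and outputs
# two channels of video streams with analysis results.
inbound = MAX_SECONDARIES * STREAM_MBITS
outbound = 2 * STREAM_MBITS
total = inbound + outbound
print(total)            # → 24 (Mbit/s)
print(total < 1000)     # well within a 1000 Mbit/s port → True
```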

• Principles:

▫ When an intelligent camera is directly connected to a generic camera through a network cable, data (video streams, images, and structured data) of the generic camera is uploaded by the intelligent camera, and the intelligent camera performs data analysis based on application scenarios.

▫ When an intelligent camera is connected to a generic camera through a switch, video streams of the generic camera are uploaded by the generic camera itself, while its snapshots and structured data are uploaded by the intelligent camera, which performs data analysis based on application scenarios.
• The PTZ dome camera that supports automatic movement recording provides the
following functions:

▫ The camera can detect moving objects (such as motor vehicles, non-motorized
vehicles, and pedestrians) that appear in the image and automatically obtain the
trajectory of the objects.

▫ On the View page, if you click the icon for recording movement, the camera will
detect moving objects that appear in the image. If you manually select an object,
the camera will automatically obtain the trajectory of the object.
• Note:

▫ a: You can loosen the screws to adjust the tilt angle of the camera.

▫ b: You can loosen the screws to adjust the horizontal angle of the camera.

▫ c: You can loosen the screws to adjust the vertical angle of the camera.
• Note: Make sure you use the handle, not the pigtail, to carry the camera. After the installation is complete, remove the handle from the safety rope and tighten the hex nuts on the buckles. The actual handles and buckles may vary.
• Note: Currently, most software defined cameras (SDCs) are CS-mount cameras. To
connect a C-mount lens to a CS mount, a C/CS adapter ring must be used, as shown in
the figure on the right. Otherwise, the image effect will be severely affected.
• Note: The camera uses the DC-Iris lens by default. To use the P-Iris lens, log in to the
web interface, choose Settings > Video/Audio/Image > Image > Exposure, and set the
iris type to P-IRIS.
• Note: The zoom rings of some lenses may use other labels: T and ∞ (∞ is the same as
W), which depend on the actual situation. For the auto iris, skip steps 1 and 5.
• Power cable: Connect the power cable. A bare-wire connection supports power cables of up to 1.5 mm² (length ≤ 25 m). A crimp terminal connection supports power cables of up to 0.5 mm² (length ≤ 10 m).

• Grounding: earth impedance ≤ 5 Ω, length ≤ 25 m. Use an O-type terminal with an inner diameter of 3 mm.
• It is recommended that you use twisted pairs as alarm cables. The cable diameter
ranges from 22 AWG to 28 AWG, and the impedance of the whole cable is less than or
equal to 100 Ω.
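The pairing of cross-section and length limits above follows from copper resistance: the round-trip (loop) resistance of a two-wire run is R = ρ · 2L / A. The copper resistivity value is standard; the observation that both power cable limits yield a similar loop resistance is our own inference, not from the notes.

```python
# Loop resistance of a copper pair: R = rho * 2L / A.
RHO_COPPER = 0.0172  # Ω·mm²/m, resistivity of copper at ~20°C

def loop_resistance(length_m: float, area_mm2: float) -> float:
    """Round-trip resistance of a two-wire copper run."""
    return RHO_COPPER * 2 * length_m / area_mm2

print(round(loop_resistance(25, 1.5), 2))   # bare wire, 1.5 mm², 25 m → 0.57 Ω
print(round(loop_resistance(10, 0.5), 2))   # crimp terminal, 0.5 mm², 10 m → 0.69 Ω
```

Both limits keep the loop resistance below about 0.7 Ω, which caps the voltage drop on the camera's supply.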
• Nowadays, more and more observation sites are deployed in Smart City projects,
extending to multiple scenarios such as rural areas, factories, and communities.
Various items and things, from indoor to outdoor, are being observed. The
construction of systems for city public safety needs to tackle the following challenges:

▫ Difficult site design and acquisition. The traditional solution focuses only on devices and uses different cameras and networks. Therefore, power supplies or cabinets at different sites pose various requirements, which are hard to meet.

▫ Unreliable site quality. Devices from multiple vendors are combined in the
traditional solution. Lack of specific integration tests, unprofessional installation,
and unclear after-sales responsibilities result in low system stability and reliability.

▫ High O&M costs. Manual operations, including on-site maintenance, inspection, and repair, are the most expensive part of site O&M. On-site battery replacement and commissioning also incur high costs. On one hand, lead-acid batteries have a short service life and need to be replaced every 1 to 1.5 years due to harsh outdoor environments and damage. On the other hand, lead-acid battery theft is common worldwide. For example, over 50% of the lead-acid batteries in the Lagos, Nigeria project were stolen in a single year.
• Simple:

▫ Modular design for power supplies and networks, multi-function integration

▫ eMIMO

▫ Supports wired and wireless access.

▫ Compact, one-stop, flexible to configure, and easy to install

• Reliable:

▫ IP65 rating, applicable to Class D environments

▫ Surge protection integrated, wide temperature range from –40℃ to 55℃

▫ Five-year lithium battery lifespan, keeping sites always online

• Intelligent:

▫ Efficient remote management over the eSight

▫ Easy operation on the mobile app, no need to climb the pole

▫ Asset protection by lithium battery anti-theft software


• The grid power supply solution works as follows (power source preference: grid > battery):

▫ If the grid is normal, it supplies power for loads and batteries.

▫ If the grid is abnormal, batteries supply power to loads.


• The solar-grid power supply solution works as follows (power source preference: PV module > mains > battery):

▫ If sunlight is sufficient, PV modules supply power to loads and batteries.

▫ If sunlight is insufficient, PV modules and the mains supply power to loads and batteries.

▫ If there is no sunlight, the mains supplies power to loads and batteries.

▫ If there is no sunlight and the mains is not available, batteries supply power to loads.
• The solar power supply solution works as follows (power source preference: PV module > battery):

▫ If sunlight is sufficient, PV modules supply power to loads and batteries.

▫ If sunlight is insufficient, PV modules and batteries supply power to loads.

▫ If there is no sunlight, batteries supply power to loads.
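The three solutions above differ only in their source preference order, so the behavior can be sketched as one selection function. The capacity and load values are simplified stand-ins for the real controller state, chosen only to illustrate the priority logic.

```python
# Power source selection for the three supply solutions described above.
PREFERENCES = {
    "grid":       ["grid", "battery"],
    "solar-grid": ["pv", "mains", "battery"],
    "solar":      ["pv", "battery"],
}

def active_sources(solution, capacity, load):
    """Draw sources in preference order until the load is covered.
    `capacity` maps source -> available power in watts (0 = unavailable)."""
    used, remaining = [], load
    for src in PREFERENCES[solution]:
        if capacity.get(src, 0) > 0 and remaining > 0:
            used.append(src)
            remaining -= capacity[src]
    return used

# Sufficient sunlight: PV alone covers the load (and can charge batteries).
print(active_sources("solar-grid", {"pv": 500, "mains": 500, "battery": 200}, 300))
# → ['pv']
# Insufficient sunlight: PV modules and the mains supply power together.
print(active_sources("solar-grid", {"pv": 100, "mains": 500, "battery": 200}, 300))
# → ['pv', 'mains']
```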

• The capacity of an ordinary lithium battery (CMB20E) is 20 Ah, with power backup
duration of less than 10 hours. The capacity of a lithium battery cabinet (ESC30) is 150
Ah. Its power backup duration is 72 hours. Therefore, lithium battery cabinets are
recommended in the solar power scenario without mains to ensure sufficient battery
backup time.
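A back-of-envelope check shows the quoted backup durations are mutually consistent. The ~2 A average load current is an assumption chosen for illustration; the 20 Ah (CMB20E) and 150 Ah (ESC30) capacities come from the notes.

```python
# Ideal backup time from battery capacity and average load current,
# ignoring depth-of-discharge limits and aging.
def backup_hours(capacity_ah: float, load_a: float) -> float:
    return capacity_ah / load_a

LOAD_A = 2.0  # assumed average site load current at battery voltage

print(backup_hours(20, LOAD_A))    # CMB20E → 10.0 (notes: less than 10 hours)
print(backup_hours(150, LOAD_A))   # ESC30  → 75.0 (notes: 72 hours)
```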
• SFP is short for Small Form-factor Pluggable.
• The AR switch ring network access is applicable to areas or cities with reliable network
construction. It has the following features:

▫ The Smart Ethernet Protection (SEP) protocol for fiber ring network protection
prevents video loss.

▫ 2.5 Gbit/s bandwidth, supporting 600 channels of HD videos per block

▫ Support for alarm, audio, traffic flow detection, mobile phone MAC address
collection, and other sensor data access

▫ Support for a wide operating temperature range from –40℃ to +70℃, capable of withstanding harsh outdoor environments
• GPON access networks use optical transmission devices to connect to video security
devices. Solution advantages:

▫ High bandwidth: passive network with a small footprint and low power consumption; 1.25 Gbit/s bandwidth, supporting smooth evolution to 10G PON and 40G PON, as well as HD videos; one-hop traffic aggregation for a smoother video experience

▫ Great security: The solution supports 802.1x authentication and prevents unauthorized access through MAC address filtering.

▫ Simple deployment, easy maintenance, and wide coverage: The solution covers 5
to 40 km and implements ONU PnP.

▫ High reliability: The solution works properly in harsh environments, such as at high temperatures of up to 70°C.
• The microwave wireless access solution is applicable to remote areas or areas with
poor networks. The solution uses microwave to receive and aggregate video data.
Solution advantages:

▫ High transmission quality: Microwave transmission has low latency and supports
QoS.

▫ Cost-effectiveness: The 4.9 GHz and 5.8 GHz frequency bands can be used directly, with no licensing fees, making them well suited for microwave transmission.

▫ High bandwidth: The maximum bandwidth for long-haul transmission reaches 220 Mbit/s. Trans-horizon propagation ensures smooth video backhaul. The access point supports a maximum of 20 channels of HD video backhaul, and a single system supports over 100 HD videos.

▫ Easy deployment: Small and light microwave devices can be quickly deployed.
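The capacity figures above imply a per-channel bandwidth budget, which a quick division confirms. The typical HD stream bitrate used for comparison is an assumption for illustration; the 220 Mbit/s and 20-channel figures come from the notes.

```python
# Per-channel budget implied by the microwave capacity figures.
MAX_BANDWIDTH_MBITS = 220   # long-haul bandwidth, per the notes
CHANNELS_PER_AP = 20        # HD channels per access point, per the notes

per_channel = MAX_BANDWIDTH_MBITS / CHANNELS_PER_AP
print(per_channel)          # → 11.0 Mbit/s available per HD channel

# A typical 1080p stream at ~4-8 Mbit/s (assumed) fits in this budget:
print(8 <= per_channel)     # → True
```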
• The LTE wireless access solution is ideal for areas where LTE base stations have been
deployed or areas with centralized cameras. Solution advantages:

▫ Flexible coverage: For large areas with few cameras, use LTE networks on low frequency bands for wide coverage to greatly reduce network construction costs. For small areas with many cameras, use LTE networks on high frequency bands to cover them.

▫ Great security: LTE networks use dedicated frequencies, which do not interfere
with public network frequencies. In addition, the unique air interface encryption
technology of LTE ensures data security.

• The transmission distance between an LTE terminal and a base station is about 2 km.
• The video observation site uses the function module of the PowerCube 500 V200R001C10. The function module adopts an integrated design with built-in transmission device boards, requiring no adapters, PoE modules, or network port surge protectors. Easy on-site cabling and interconnection terminals enable simple installation.

• Compared with traditional UPS or lead-acid batteries, the on-site cabling workload is
reduced by 30% and man-hours required for installation are decreased by 50%.
• Take the function module of PC500-300H1 as an example. EG8P is the service board of
this function module.

• Service ports:

▫ GE1–GE2: Support two GE optical ports/GOPN ports. The ports are connected
through the SFP/GPON optical module.

▫ GE3–GE8: Support six GE electrical ports, two of which can be used to transmit
P&E++ signals, and three of which can be used to transmit P&E signals. The ports
use RJ45 connectors.

▫ PoE ports (GE3–GE7) on the EG8P transmit not only Ethernet service signals, but
also power signals.

▫ When using the wrapped connection solution, you can connect only to the GE8 port or to a port on which 802.1x is not enabled. If 802.1x is enabled but authentication fails, services will be blocked.

• EG8P indicator description:

▫ STAT:

▪ On (green): The board is working properly.

▪ On (red): The board hardware is faulty.

▪ Off: The board is not working, not created, or not powered on.
• Take the function module of PC500-300H1 as an example.

▫ COM_CAN: Communicates with cascaded lithium batteries.

▫ RS485: Communicates with microwave devices.

▫ PoE: Two of the ports can be used to transmit P&E++ signals, and three of them
can be used to transmit P&E signals. The ports use RJ45 connectors.
• Lithium batteries that feature high efficiency and long service life can effectively
reduce the maintenance cost.

• The following three features promote the large-scale application of lithium batteries at
sites:

▫ Light and small: Under the same capacity, the weight of a lithium battery is one
third that of a lead-acid battery.

▫ Modular design and easy installation: The weight of a lithium battery is less than
20 kg. In this case, only one person is required to move and install it, shortening
the installation time by 50%.

▫ Long service life and no replacement: The cycle life of lithium batteries is
consistent with that of site cameras, requiring no replacement within 5 years.
• 6.8 m mains pole, cantilever height: 6 m; 7.3 m solar pole, cantilever height: 6 m

• Lightning rods for mains poles and solar poles are different.
• On a sunny day, the output voltage of each PV module ranges from 40 V DC to 50 V
DC.
• Backhaul mode: wired backhaul through the AR550C or wireless backhaul through the
microwave RTN301e

• Power system: power unit + lithium battery

• Camera model: bullet camera X2221 and PTZ dome camera X6621-Z30
• Both the function module and lithium battery weigh less than 20 kg. With handles on
the cabinet top, it is easy for a person to quickly install the device with one hand in
three steps.

• The P2MP microwave can be installed in two minutes. The terminal RT only needs to face the direction of the central AP; only coarse alignment is required.

• All devices installed on poles do not require on-site commissioning or configuration. You only need to install and power on the device.

• One can easily complete the installation and delivery of standard sites without
knowledge about microwave or wired/wireless backhaul.
• Power supply units (PSUs) can work normally when the ambient temperature is lower than 50°C (sunlight impacts ignored). When derated, PSUs support stable operation at up to 55°C.

• Lithium batteries can work properly as long as the ambient temperature is lower than 45°C (sunlight impacts ignored). The batteries can work at temperatures up to 55°C without safety risks, but their lifespan is shortened. If a lithium battery works in a 50°C environment for a long time, its lifespan decreases by 30% (compared to a 45°C environment). If it works at 55°C, its lifespan decreases by 50% (compared to a 45°C environment).

• Ambient temperature is the temperature reported by the local weather forecast (air
temperature).
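The lifespan derating figures above can be written as a small lookup. Linearly interpolating between the quoted points is our assumption; only the values at 45°C, 50°C, and 55°C come from the notes.

```python
# Relative lithium battery lifespan vs. long-term ambient temperature,
# built from the quoted derating points.
POINTS = [(45, 1.0), (50, 0.7), (55, 0.5)]  # (°C, lifespan factor)

def lifespan_factor(temp_c: float) -> float:
    """Lifespan relative to operation at 45°C or below."""
    if temp_c <= POINTS[0][0]:
        return 1.0
    for (t0, f0), (t1, f1) in zip(POINTS, POINTS[1:]):
        if temp_c <= t1:
            # linear interpolation between quoted points (our assumption)
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
    return POINTS[-1][1]  # at or beyond 55°C

print(lifespan_factor(50))   # → 0.7 (30% shorter, per the notes)
print(lifespan_factor(55))   # → 0.5 (50% shorter, per the notes)
```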

• The IP65 rating is applicable to Class C environments. The anti-corrosion capability is applicable to harsh environments.
• Reference answer:

▫ ABC

▫ ABC
• Video security has gone through the analog era and the digital era, and has now entered the cloud era. HD, intelligence, and openness have become the main features of video security services.

• Disadvantages of traditional video security

▫ Hardware resource:

▪ Each region is constructed independently and isolated from the others.

▪ Resources are isolated and cannot cope with traffic peaks.

▪ There are many vendors of IPCs and platforms, and various forms of
products coexist, for example, DVR/DVS, IP SAN network storage, and
integrated platform. Hardware resources cannot be shared among different
forms of products.

▪ When new services are required, for example, the capacity of the analysis
platform needs to be expanded on the original storage platform, the
scalability is poor and the solution is complex. When services are reduced,
some hardware resources become idle, leading to hardware waste.

▪ The compatibility is poor. Each vendor supports only its own devices.
• Overall industry trend: Ultra-HD, cloud migration, intelligence and big data

• Advantages of IVS:

▫ Cloud migration

▪ Unified server cluster, unified plan, and unified external services

▪ Cloud scheduling mechanism and resource sharing, effectively improving the emergency handling efficiency

▪ Flexible scalability: When service requirements increase or decrease, you only need to add or remove nodes.

▪ Unified management platform

▫ Intelligence

▪ Device-cloud synergy between cameras and the platform, achieving network-wide intelligence

▪ Deep learning used to adapt to special scenarios

▫ Openness

▪ Open platform architecture, compatible with the algorithms of multiple vendors
• Cloud migration

• Compared with traditional video security, IVS is based on the universal cloud
architecture and can adapt to future architecture evolution more flexibly.

▫ Unified computing: All compute resources are centrally managed, scheduled, and
allocated.

▫ Unified storage: All storage resources are centrally managed and allocated.

▫ Unified platform: Multiple intelligent analysis tasks are integrated to adapt to


various service scenarios.

▪ Video/Image management: Video and images can be accessed, forwarded, stored centrally, and shared online.

▪ Video/Image parsing: Target analysis, license plate analysis, vehicle analysis, video synopsis, behavior analysis, and third-party algorithms

▪ Video/Image search: Target search, license plate search, and vehicle search

▫ Unified openness: The southbound and northbound algorithm interfaces support multiple algorithms and adapt to multiple service applications.
• Intelligence

▫ Matrix intelligence in a broad sense is implemented through intelligent collection from cameras, intelligent analysis and management by the platform, and intelligent service collaboration.
• Openness

▫ The openness of southbound algorithms allows the access of algorithms from multiple vendors.

▫ The openness of northbound applications allows third-party applications to use the northbound SDK interface to develop applications required by customers based on the big data, high performance, and high reliability features of the CloudIVS.

• Feature data: structured and unstructured data after intelligent analysis

• Metadata: structured data transferred from cameras

• Semi-structured data: feature values (N-dimensional vector data), which are used for search by image

• Structured data: attribute values, such as the license plate color and hat attribute, which are used for conditional search
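The "search by image" use of feature vectors can be sketched as a similarity ranking. The 3-D vectors, snapshot IDs, and cosine metric below are toy illustrations; real systems use high-dimensional embeddings, but the matching principle is the same.

```python
# Toy "search by image": rank stored feature vectors by cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

gallery = {                      # hypothetical snapshot ID -> feature vector
    "cam1/0001": [0.9, 0.1, 0.2],
    "cam2/0042": [0.1, 0.95, 0.1],
    "cam3/0007": [0.88, 0.15, 0.25],
}

def search_by_image(query, top_k=2):
    """Return the IDs of the most similar stored snapshots."""
    ranked = sorted(gallery, key=lambda k: cosine(query, gallery[k]), reverse=True)
    return ranked[:top_k]

print(search_by_image([0.9, 0.1, 0.2]))   # most similar snapshots first
```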
• The IVS1800 adopts an integrated software and hardware design. At least one IVS1800 needs to be configured, and a single IVS1800 functions as a complete device. The IVS1800 is built on the HiSilicon Hi3559A chip, an optional Ascend AI processor, and EulerOS. In the minimum configuration, where one IVS1800 is deployed, it supports services such as video and image access, storage, and forwarding, intelligent analysis, search, and alert task creation.
• Port description

▫ USB port: Connects to a USB device such as a USB flash drive, a USB mouse, or a removable hard disk. There are two USB 2.0 ports on the front panel and one USB 3.0 port on the rear panel.

▫ Ground terminal: Connects to a ground cable.

▫ RS-485 port: Connects to an external PTZ device or access control system.

▫ Alarm port: The alarm input port connects to an external alarm input device, for example, an access control system. The alarm output port connects to an external alarm output device, for example, an alarm bell.

▫ Serial port: Used for device access and maintenance.

▫ Audio port: The audio input port can be used to broadcast voice files or talk with users in observation areas where cameras with microphones are installed. The audio output port can be used to listen to the channel-associated audio of cameras.
• After the intelligent service is enabled, the access bandwidth performance deteriorates.
• If each IVS1800 device connects to 64 cameras, one iClient can manage a maximum of
four IVS1800 devices.

• If each IVS1800 device connects to 16 cameras, one iClient can manage a maximum of
16 IVS1800 devices.
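Both figures above multiply out to 256 cameras per iClient (64 × 4 = 16 × 16 = 256), which suggests the limit is a total-camera cap. Treating 256 as that cap is our inference, not an official specification.

```python
# Inferred iClient capacity: a total cap of 256 managed cameras.
TOTAL_CAMERA_CAP = 256  # inferred from 64 x 4 and 16 x 16

def max_devices(cameras_per_device: int) -> int:
    """Maximum IVS1800 devices one iClient can manage at this density."""
    return TOTAL_CAMERA_CAP // cameras_per_device

print(max_devices(64))   # → 4, matching the notes
print(max_devices(16))   # → 16, matching the notes
```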
• IVS3800S: Provides functions such as access, recording storage, and networked sharing.

• IVS3800C: Provides functions such as personal/vehicle feature analysis and extraction, behavior analysis, video synopsis, and video search.

• IVS3800R: Searches for objects based on structured data and feature data generated
after intelligent analysis.

• IVS3800F:

▫ Provides functions such as access, recording storage, and networked sharing.

▫ Provides functions such as personal/vehicle feature analysis and extraction, behavior analysis, video synopsis, and video search.

▫ Searches for objects based on structured data and feature data generated after
intelligent analysis.
• Data/Service convergence

▫ Data convergence: unified storage of video and images.

▫ Service convergence: convergence of networking management, streaming media forwarding, storage, analysis, and search services.

▫ Lite Edge OS: one server for all services, resource pooling, and on-demand
capacity expansion.

• On-demand combinations of storage, compute, and search resources, one server for all
services, and optimal TCO

▫ Applicable to multiple service scenarios of the customer, container technology used, and on-demand combinations of storage, compute, and search resources.

▫ On-demand deployment of multiple algorithms, one product model with N capabilities, and resource sharing.

▫ Highly integrated, saving equipment room footprint and reducing overall power
consumption.

• Deployment of storage, compute, and search resources in cluster mode, elastic scaling

▫ Distributed horizontal expansion, linear capability expansion, unified cluster.

▫ Dynamic task and data allocation for load balancing.

▫ Automatic detection of faulty nodes in a cluster and service migration.


• IVS3800S: Insert 960 GB SSD disks in slots 0 and 1 as system disks (standard configuration). Insert 8 TB or 10 TB SATA PMR disks in slots 2 to 35 and 40 to 43 (optional).

▫ In the disk specifications, 8 TB/10 TB disks are Perpendicular Magnetic Recording (PMR) disks, and 14 TB disks are Shingled Magnetic Recording (SMR) disks.

• IVS3800F: Insert 3.84 TB SSD disks in slots 0 to 5 (standard configuration). Insert 8 TB or 10 TB SATA PMR disks in slots 6 to 35 (optional), insert 960 GB SSD disks in slots 40 and 41 as system disks (standard configuration), and insert 3.2 TB NVMe disks in slots 44 and 45 (optional). Insert at most three AI accelerator cards in SLOT4 to SLOT6 (optional).
• IVS3800C: Insert 960 GB SSD disks in slots 0 and 1 as system disks (standard configuration). Insert 3.84 TB SSD disks in slots 2 to 9 (standard configuration). Insert at most six AI accelerator cards in SLOT1 to SLOT6 (optional).

• IVS3800F: Insert 960 GB SSD disks in slots 0 and 1 as system disks (standard configuration). Insert 3.84 TB SSD disks in slots 2 to 9 (standard configuration). Insert 3.2 TB NVMe disks in slots 44 to 47 (optional). Insert at most three AI accelerator cards in SLOT4 to SLOT6 (optional).
• Hardware: Provides hardware resources for services.

• Host operating system: An operating system that runs on bare metal servers. All
hardware resources are mounted to the host operating system, and all containers run
on this operating system.

▫ Container is a virtualization technology. Services run in containers, which are isolated from each other. Containers and virtual machines (VMs) have key differences:

▫ VM: Hardware resources, such as CPUs and memory, are virtualized to run an independent guest operating system that can run applications and provide services.

▫ Container: No guest operating system is required. Service applications invoke hardware resources in the host operating system through the container engine. All containers share the same host operating system.

• Container management service: Centrally manages services in all containers. The services include installation and deployment, service lifecycle management, and network and storage management.

• Image management: Provides video/image access, storage, management, and forwarding functions.

• Image analysis: Provides intelligent analysis.

• Image search: Provides post-analysis search capabilities.


• General Server: Used to deploy the HUAWEI CLOUD Stack (HCS), which is then used to
create VMs on physical servers to provide compute resources (such as CPUs, memory,
and network adapters) for upper-layer services (CSP and IVS).

• Storage Server:

▫ OceanStor 5500 V5: Provides system disks and data disks for VMs on the
management plane and service plane.

▫ OceanStor 9000: Stores video and image data for the service layer.

• Heterogeneous computing server:

▫ FusionServer G5500: GPU hardware physical resource, which provides GPU acceleration and hardware decoding capabilities for service-layer VMs.
• Authentication protection:

▫ Provides access authentication and session control, preventing access by unauthorized cameras.

▫ Adopts algorithms with the top security level, preventing password leakage.

• Service protection: AES stream encryption, digital watermark, and media super error
correction.

• Network protection: Multiple security protocols such as SSL and TLS are used to ensure
network security.

• System protection: strong password policies, IP address filtering, minimum authorization rules, and security logs.
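One of the service-protection ideas above, the digital watermark, can be illustrated conceptually: a keyed digest bound to each frame lets tampering be detected. This HMAC sketch is purely illustrative; it is not the IVS implementation, and the key handling is deliberately simplified.

```python
# Conceptual tamper-detection watermark using a keyed digest (HMAC).
import hashlib
import hmac

KEY = b"demo-only-key"  # in practice, keys come from a key management system

def watermark(frame: bytes) -> bytes:
    """Tag a frame with a keyed digest."""
    return hmac.new(KEY, frame, hashlib.sha256).digest()

def verify(frame: bytes, tag: bytes) -> bool:
    """Check that the frame has not been tampered with."""
    return hmac.compare_digest(watermark(frame), tag)

frame = b"\x00\x01frame-payload"
tag = watermark(frame)
print(verify(frame, tag))                  # untouched frame → True
print(verify(frame + b"tampered", tag))    # modified frame  → False
```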
