
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/JSEN.2021.3079257, IEEE Sensors Journal

IEEE SENSORS JOURNAL, VOL. XX, NO. XX, MONTH X, XXXX

Augmented Multiple Vehicles' Trajectories Extraction under Occlusions with Roadside LiDAR Data

Xiuguang Song, Rendong Pi, Chen Lv, Jianqing Wu, Han Zhang, Hao Zheng, Jianhong Jiang and Haidong He

Abstract—Object occlusion is a common issue in Light Detection and Ranging (LiDAR)-based vehicle tracking technology. Occlusions can cause variance in vehicle location and speed calculation. How to link vehicle trajectories broken by occlusion issues is a challenge for traffic engineers and researchers. This paper developed an augmented vehicle tracking method under occlusions with roadside LiDAR data. The proposed method can be divided into two parts. The first part, based on the corner point, is used to choose a representative vehicle tracking point. The second part, based on the GNN algorithm, is employed to link vehicles' trajectories under two occlusion situations. The performance of the proposed method has been evaluated using roadside LiDAR data collected from four different scenarios. The test results showed that more than 89% of disconnected trajectories can be fixed with the proposed method, which is superior to the state-of-the-art method. The proposed method can benefit many transportation areas, such as traffic volume count, vehicle speed tracking, and traffic safety analysis.

Index Terms—Roadside LiDAR, connected vehicles, vehicle trajectory, tracking point

This research was funded by the National Natural Science Foundation of China (52002224), the Natural Science Foundation of Jiangsu Province under Grant BK20200226, the Program of Science and Technology of Suzhou (SYG202033), and the Research Program of the Department of Transportation of Shandong Province under Grant 2020BZ01-03.
X. Song, R. Pi, C. Lv, J. Wu and H. He are with the School of Qilu Transportation, Shandong University, Jinan 250061, China, and the Suzhou Research Institute, Shandong University, Suzhou 215123, China (e-mail: songxiuguang@sdu.edu.cn; pirendong@mail.sdu.edu.cn; 202015398@mail.sdu.edu.cn; 201999000171@sdu.edu.cn). H. Zhang is with Shandong High-speed Group Co., Ltd., Jinan 250002, China (e-mail: 16zhanghan@163.com). H. Zheng is with the School of Mathematics and Statistics, Central South University, Changsha 410083, China (e-mail: zhenghao18@csu.edu.cn). J. Jiang is with Shandong Provincial Communications Planning and Design Institute Co., Ltd., Jinan 250000, China (e-mail: 18119447201@163.com).

I. INTRODUCTION

High-resolution vehicle trajectory data have a lot of potential applications in different transportation areas [1], including but not limited to crash prediction [2], [3], [4], automatic traffic density estimation [5], traffic flow monitoring [6], car-following analysis [7], driver behavior analysis [8], [9], [10], [11], fuel consumption estimation [12], [13], [14], [15], adaptive traffic signal control [16], [17], [18], route navigation [19], [20], traffic demand analysis, traffic operation [21], [22], and advanced driver assistance system development [23], [24]. By now, many traffic sensors, such as radar, Bluetooth, camera, and Light Detection and Ranging (LiDAR), can provide vehicle trajectory data [25]. As pointed out by Galceran et al. [26], "radar and Bluetooth sensors usually provide sparser and less accurate geometric information when compared to LiDAR or cameras". For LiDAR or camera, one major challenge for extracting high-resolution vehicle trajectory data is the occlusion issue. Occlusion refers to the situation where one vehicle is occluded by another vehicle or by other background objects [27]. Occlusion is a common issue in multiple-object tracking [28], [29]. This paper developed a new method to extract high-resolution vehicle trajectory data using roadside LiDAR by taking occlusion into consideration, improving on recent developments outlined in the review section below.

II. RELATED WORKS

Generally, the whole vehicle trajectory extraction procedure can be divided into two steps: vehicle detection and vehicle tracking. Vehicle detection methods have been well developed in previous studies [3], [30], [31], [32], [33]. The process of vehicle detection can be divided into three major steps: background filtering, point clustering, and object classification. Detailed information can be found in [3], [30], [31], [32], [33].

To obtain the vehicle trajectory, it is necessary to track the same vehicle continuously and accurately. Several techniques, including Multiple Hypothesis Tracking (MHT) [34], Global Nearest Neighbor (GNN), and Joint Probabilistic Data Association (JPDA) [35], [36], have been developed for data association. Previous practice shows that though JPDA and MHT may provide better accuracy, they are difficult to implement and can cause a heavy computational load [37]. GNN uses the Chamfer distance [38] to associate vehicles in different frames, which is easy to implement and requires a relatively low computational load [39].

1530-437X © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

Zhang et al. [40] proposed a new method utilizing the Kalman filter and joint probabilistic data association filter to track vehicles and estimate their speed. Kim and Park [41] proposed an extended Kalman filter (EKF) reflecting the distance characteristics of LiDAR and radar sensors. This method can produce accurate distance estimations by sensor fusion. Wu et al. [42] proposed a systematic procedure for vehicle tracking using roadside LiDAR sensors. The procedure can be divided into five parts, and a field test was conducted to validate the proposed method. Vimal Kumar A. R. et al. [43] employed a low-density solid-state flash LiDAR for collecting sparse data. The JPDA algorithm and the Kalman filter were employed to extract vehicles' trajectories. The results showed that the vehicles' trajectories can be extracted well using the proposed method.

Another critical problem is how to represent the location of the vehicle for the association. Representative points and bounding boxes are the two widely used methods to represent the vehicle location. Since the shape of the vehicle varies with different distances and directions to the roadside LiDAR, the bounding box cannot effectively represent the location of the vehicle [44], [43]. Coifman et al. [45] first used corner points to track vehicles. The results showed that using a corner point for vehicle tracking can greatly improve the tracking accuracy compared to using the bounding box. Later, Wu [46] and Sun et al. [47] also found that compared to using the center point of the vehicle, using corner points can reduce the speed error. The vehicle trajectory can then be generated.

However, the occlusion issue is a big challenge for vehicle tracking. Based on the degree of occlusion, occlusion can be divided into partial occlusion and full occlusion [48], [49], [50]. Many studies have been conducted to address the occlusion challenge in image detection. Jung and Ho [2] developed a vehicle tracking algorithm considering occlusion reasoning for video detection. When occlusion is detected, the algorithm creates a new trajectory and links the new trajectory to each trajectory when there are no occlusions. The testing results showed that a 5~10 km/h speed error was observed. Zhang et al. [27] developed a unique framework to detect and handle object occlusion. The compactness ratio and interior distance ratio of the objects were used for occlusion detection on the intraframe level, and subtractive clustering on motion vectors was applied on the interframe level. The total detection rate of occlusion can reach 93.87% and 100% for partial occlusion and full occlusion, respectively.

Though occlusion detection for image processing is relatively mature, the performance of cameras can be greatly influenced by harsh environmental conditions, such as weak light, rain, and snow [51], which limit the field of view and have also constrained the application of LiDAR to generate high-resolution vehicle trajectories. However, LiDAR can work day and night without the influence of light conditions [52], and researchers have found solutions to improve the performance of LiDAR under adverse weather, such as rain and snow [50]. Still, there are very limited studies addressing the occlusion issue for the roadside LiDAR [53]. Thornton et al. [54] used a rule-based method to identify a partially occluded vehicle in a parking lot. For example, any object with a length less than a predefined threshold, such as 2.5 meters, was considered an occluded vehicle. Lee and Coifman [22] detected the occlusion by checking whether the background curve can be seen between a given pair of vehicles; if not, the farther vehicle is suspected of being occluded. However, those pioneer studies can only work for specific sites and are not transferable. Though some other previous studies stated that the occlusion issue can be eliminated by setting up multiple LiDARs in different directions [55], [56], [57], [58], the extra cost of adding and maintaining the LiDARs makes this approach impractical. Recently, Zhao et al. [25] proposed a Kalman filter-based method to fix the occlusion issue. At one timestamp i, the position of the object is estimated based on the historical information from its previous frame i-1. If the object is not detected in the current frame, then the algorithm will continuously search for the object in the next 1.5 seconds using the speed in frame i-1. If the object is not detected within 1.5 seconds, the tracking algorithm will stop. The limitation of this method is that it can generate large speed errors if the object is chopped into several parts due to the occlusion, since the Kalman filter-based algorithm will always link the nearest point in the next frame. Therefore, generating high-resolution vehicle trajectories that can overcome occlusion is still an open issue.

III. DATA AND METHODS

A. Tracking Point Switching Detection And Fixing

In order to extract vehicles' trajectories from the roadside LiDAR data continuously and accurately, choosing a representative tracking point is crucial. However, because the current algorithm uses the nearest point as the tracking point to associate the vehicle across frames, different parts of the vehicle can inevitably be selected as the tracking point. Figure 2 shows an example where the tracking point switched from the front corner to point A in the middle of the vehicle and then to the back corner later. As a result, the speed calculation can be inaccurate.
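The nearest-point association attributed to GNN above can be illustrated with a short sketch. This is a minimal, hypothetical illustration assuming a single representative point per vehicle and a Euclidean distance gate; the GNN described in Section II operates on whole clusters via the Chamfer distance, so this is not the paper's exact implementation.

```python
import math

def gnn_associate(prev_pts, cur_pts, max_dist):
    """Greedy global-nearest-neighbor association between tracking
    points in consecutive frames. Returns a dict mapping the index
    of a point in the previous frame to its match in the current
    frame; points farther apart than max_dist stay unmatched."""
    # Enumerate all candidate pairs, cheapest (closest) first.
    pairs = sorted(
        ((math.dist(p, c), i, j)
         for i, p in enumerate(prev_pts)
         for j, c in enumerate(cur_pts)),
        key=lambda t: t[0],
    )
    matched_prev, matched_cur, assoc = set(), set(), {}
    for d, i, j in pairs:
        if d <= max_dist and i not in matched_prev and j not in matched_cur:
            assoc[i] = j
            matched_prev.add(i)
            matched_cur.add(j)
    return assoc
```

A vehicle that is fully occluded in the current frame simply receives no match, which is exactly the failure mode the rest of this section sets out to repair.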

Fig. 2. Tracking Point Switching Issue

Actually, there is a distance offset (L) compared to the non-switching scenario when the tracking point switches from the front corner to the back corner.

The vehicle length (VL) is updated every frame. The detected vehicle length (DVL) in the current frame i is compared to the historical frames, and the max value from the beginning of the tracking is used as VL in frame i, denoted as

VL_i = Max[VL_{i-1}, DVL_i], i > 0    (1)

where VL_i represents the assigned vehicle length in frame i, DVL_i is the detected vehicle length using GNN in frame i, and frame 0 means the frame when the tracking started (VL_0 = 0).

The tracking point m together with any other point n in the same vehicle can be used to create a slope, denoted as α. Then the angle β between max α and min α can be calculated as

β = max(tan⁻¹((Y_m − Y_n)/(X_m − X_n))) − min(tan⁻¹((Y_m − Y_n)/(X_m − X_n))), n ∈ NV, n ≠ m    (2)

where X_m and Y_m are the X coordinate and Y coordinate of point m, respectively, and NV is the point set of the vehicle cluster. Ideally, β should be equal to 90 degrees for the front corner and the back corner. Given the unsmooth feature of the point distribution, a tolerance value ϒ in the angle is given to β. The back corner and the front corner should meet

|β − 90°| ≤ ϒ    (3)

Actually, in order to track the vehicle accurately, it is necessary to move the tracking point m to corner point B (though corner point B may not be visible in the LiDAR data). Within NV (except point m), each point together with point m can create a line, and the slopes between different lines can then be calculated. Assuming points p and q are the two points that create the max β in equation (4),

β = max(tan⁻¹((Y_m − Y_p)/(X_m − X_p))) − min(tan⁻¹((Y_m − Y_q)/(X_m − X_q))), p, q ∈ NV, p, q ≠ m    (4)

then corner point B can be p or q, depending on which point has the shortest distance to the LiDAR. Point B can be the front corner or the back corner.

For the corner point (CP), the angle β is calculated by equation (4). Then a local coordinate system can be created with the angle bisector of β considered as the Y-axis and the CP as the origin. If points g and h together with CP create the angle β, then g and h must be located in different quadrants (one in the first quadrant and one in the second quadrant) of the local coordinate system. If g and h switched quadrants at frame i, then the point switching issue occurred. When the vehicle is approaching the object, the front corner is always the tracking point. Therefore, when the first point switch occurs, it must be the situation that the front corner point switched to the back corner point. The vehicle length is then used to move the back corner point to the front corner point: the tracking point is moved along the length direction (vehicle length > vehicle width) by VL, and a virtual tracking point C can be created to calculate the speed. The tracking point switching issue is then fixed. The point switching adjustment algorithm is illustrated in Figure 3.

Fig. 3. Tracking Point Identification and Adjustment
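The corner test of equations (2) and (3) can be sketched as follows. This is a minimal illustration with hypothetical names; it uses atan2 rather than the raw arctangent of the slope so that vertical edges do not divide by zero, and it assumes the tolerance ϒ = 30° suggested later in the parameter analysis.

```python
import numpy as np

def corner_angle(points, m):
    """Angle beta spanned at candidate tracking point m (Eq. 2):
    max minus min of the angles of the rays from m to every other
    point in the vehicle cluster NV."""
    others = np.delete(points, m, axis=0)
    d = others - points[m]
    ang = np.degrees(np.arctan2(d[:, 1], d[:, 0]))
    return ang.max() - ang.min()

def is_corner(points, m, tol_deg=30.0):
    """Corner test (Eq. 3): beta must fall within a tolerance
    of the ideal 90 degrees."""
    return abs(corner_angle(points, m) - 90.0) <= tol_deg

# L-shaped cluster: the true corner sits at the origin,
# with one side along +X and one side along +Y.
cluster = np.array([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0),
                    (0.0, 1.0), (0.0, 2.0)])
```

For the point at the origin, the rays fan out between 0° and 90°, so β = 90° and the corner test passes; for a point in the middle of an edge, the fan is much narrower and the test fails.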


B. Occlusion Issue Detection And Fixing

Based on the type of occluding object, the occlusion can be divided into two types: the background occluding issue and the road user occluding issue. The background occluding issue refers to the situation where the vehicle is blocked by the background, such as trees or buildings. The occlusion area is fixed since the locations of the LiDAR and the background are fixed. Figure 4 shows an example of the occlusion issue.

Fig. 4. Occlusion Issue

Object 1 and Object 2 are the background objects in the space. Part of Vehicle A is occluded by Object 1 and part of Vehicle C is occluded by Object 2. However, the visible part of Vehicle A is continuous (the distance between points a and b is just like the situation when there is no occlusion). This situation is called Scenario 1 in this paper. As for Vehicle C, the visible part is dispersed (the distance between points c and d is much longer than in the normal situation). This situation is called Scenario 2 in this paper. Vehicle B is blocked by Object 1. Vehicle D traveled in the lane closest to the LiDAR. At the specific time shown in Figure 4, Vehicle E is partially blocked and Vehicle F is fully blocked. Different from the background blocking, Vehicles E and F can be partially/fully blocked by Vehicle D at different locations if their speeds are similar. As for full occlusion, since the vehicle is invisible, there is no point cloud reported. As a result, the GNN will lose its tracking of the vehicle. If the vehicle shows up later, the GNN will assign a new ID to it.

As for partial occlusion, if Scenario 1 occurs, the GNN may still lose its tracking of the vehicle if the clustering fails due to an insufficient number of points. If the clustering is successful and the GNN does not lose its tracking, the reported position calculated by the GNN is still inaccurate.

If Scenario 2 occurs at frame i, then the GNN may continuously track the vehicle. However, the vehicle may be clustered into two vehicles. The GNN can only match a vehicle to the part that is closest to the vehicle at frame i-1; a new ID will be assigned to the part far away from the vehicle at frame i-1. As a result, there will be two trajectories for the same vehicle. This can also cause an error in vehicle volume counting.

The moving direction of one vehicle at frame i can be represented using the following equation:

DV = tan⁻¹((Y_i − Y_{i−1})/(X_i − X_{i−1}))    (5)

where X and Y are the XY coordinates of the vehicle. Equation (5) shows that the moving direction, in fact, is represented by the angle between the Y-axis and the line created by the tracking points in frames i and i-1.

(1) Full Occlusion

It is assumed that there are E vehicles in frame i-1 and F vehicles in frame i. If there is one vehicle that is visible in frame i-1 and fully occluded in frame i, then E > F. Given the detection range of the LiDAR as R (R is the effective detection radius of the LiDAR), if

√(X_{i−1}² + Y_{i−1}²) ≤ R    (6)

then there should be one vehicle G in frame i-1 that could not be associated with any vehicle in frame i. A prediction algorithm is developed to estimate the position of the vehicle. It should be mentioned that the prediction algorithm will not export the location of the vehicle in frame i if the vehicle is not detectable, but it will store the current speed, lane information, and moving direction in frame i-1. The prediction algorithm will then continuously search for the vehicle within a certain time interval (usually 1.5 s, i.e., 15 frames if the rotating frequency is 10 Hz). The searching radius (SR) is calculated based on

SR = V_{G,i−1} × T    (7)

where V_{G,i−1} is the speed of Vehicle G at frame i-1 and T is the time interval from frame i-1. Assuming another vehicle record (H) appeared in frame i+t (t ≤ T), H could not be associated with other vehicle records in frame i+t-1, and the distance between vehicles G and H is within SR, then vehicle H can be associated with vehicle G, denoted as

G ↔ H if V_{G,i−1} × (t + 1)/F ≥ √((X_{H,i+t} − X_{G,i−1})² + (Y_{H,i+t} − Y_{G,i−1})²)    (8)

where F is the frame rate. If no vehicle record can be linked with Vehicle G at frame i-1, then the tracking is stopped.

(2) Partial Occlusion

If there is one vehicle visible in frame i-1 and partially occluded in frame i, the visible part of this vehicle in frame i can be continuous or dispersed.

The continuous visible part can be any proportion of the total area of the vehicle, depending on the relative locations of this vehicle and the occluding object. The first situation is that the vehicle can still be successfully classified as a vehicle. The visible part can be either the front part or the back part of the vehicle. With equations (2) and (3), we can judge whether the tracking point of the vehicle in frame i is a corner point or a non-corner point. If the tracking point is a corner point, then equation (4) can be used to adjust its location, if necessary. If the tracking point is not a corner point, the tracking point can be moved to a corner point CP' which has the shortest distance to the LiDAR. It should be mentioned that CP' may not be the actual corner point since the corner point may already be blocked by the other object. As a result, there may be a bias in the speed and location calculation. If DVL_i < VL_{i-1}, then with equation (1), VL_i should be equal to VL_{i-1}. The final selected point is a virtual point along the length direction with an extended distance of VL_{i-1} − DVL_i, as shown in Figure 5.

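The full-occlusion search of equations (6)-(8) can be sketched as below. This is a minimal illustration with hypothetical names, assuming a 10 Hz rotation rate and a searching radius that grows with the elapsed frames as SR = V × (t + 1)/F.

```python
import math

FPS = 10  # frames per second (10 Hz rotation rate), assumed

def search_radius(speed, t, fps=FPS):
    """Eq. (7)/(8): distance reachable by occluded vehicle G, moving
    at its last observed speed, t frames after it disappeared."""
    return speed * (t + 1) / fps

def relink(lost, candidates, t, fps=FPS):
    """Eq. (8) sketch: re-associate vehicle G, stored at frame i-1
    as (x, y, speed), with the closest unassociated record (x, y)
    that appeared in frame i+t within the searching radius.
    Returns the candidate index, or None if nothing is in range."""
    x0, y0, speed = lost
    best, best_d = None, search_radius(speed, t, fps)
    for k, (x, y) in enumerate(candidates):
        d = math.dist((x0, y0), (x, y))
        if d <= best_d:
            best, best_d = k, d
    return best
```

In line with the text, a caller would invoke `relink` once per frame for up to 1.5 s (15 frames at 10 Hz) and stop tracking if every call returns None.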

Fig. 5. Tracking Point Occlusion under Partial Occlusion

If DVL_i ≥ VL_{i-1}, this bias cannot be fixed, since the vehicle length information cannot be used to further adjust the location of the tracking point.

The second situation is that the visible part of the vehicle is small and the cluster is classified as a non-vehicle road user (bicycle/pedestrian). Here we assumed that a non-vehicle road user would not use the vehicle lane. Therefore, if a pedestrian suddenly appears in the vehicle lane away from the detection boundary of the LiDAR (such as 5 meters inside the boundary), then it must be an occluded vehicle. With this assumption, the type of road user is changed to a vehicle, and the vehicle can be continuously tracked.

If the vehicle is chopped into two parts, only the part closest to the vehicle in the last frame can be tracked, and the other part will be assigned a new ID. The location and the frame ID (time information) are used to judge whether the two IDs belong to the same vehicle. The detailed algorithm is documented in Figure 6.

Fig. 6. Merging Two IDs Belonging to the Same Vehicle

The proposed method can thus be divided into two parts. The first part, Tracking Point Switching Detection and Fixing, elaborates in detail the method for choosing a representative vehicle tracking point. The second part, Occlusion Issue Detection and Fixing, clarifies the solutions for associating the vehicle records left unrelated by occlusion. Therefore, the first part is the foundation of the second part.

IV. TESTING RESULTS AND DISCUSSION

A. Data Collection

In this paper, the RS-LiDAR-32 was employed for data collection. The LiDAR has 32 channels with scan angles of -25° to +15° in the vertical direction and 0° to 360° in the horizontal direction. More detailed information on the RS-LiDAR-32 is shown in Table I.

TABLE I
THE LIDAR SPECIFICATIONS
Indicator                   Value
Laser beams                 32
Range                       40 cm ~ 200 m
Range resolution            +/- 3 cm
Scan FOV                    40° × 360°
Vertical angle resolution   0.33°
Rotation rate               300/600/1200 r/min
Laser wavelength            905 nm
Size                        114 mm (diameter) × 108.73 mm (height)
Working temperature         -30°C ~ 60°C
Weight                      1.17 kg

In addition, roadside LiDAR refers to a LiDAR deployed at a stationary location along the roadside (shown in Fig. 1), which is different from mobile LiDAR (also shown in Fig. 1) and airborne LiDAR [41], [44]. As for the LiDAR type, there are rotating LiDAR, flash LiDAR, and solid-state LiDAR [45]. This paper focused on rotating LiDAR.

Fig. 1. Three types of LiDAR installation. (1) Roadside LiDAR fixed installation; (2) Roadside LiDAR portable installation; (3) Mobile LiDAR

B. Selected Sites For Evaluation

The proposed method was evaluated by processing the roadside LiDAR data collected at four sites. Figure 7 shows the locations of the selected sites. The first location is an intersection; the installation of the LiDAR is shown in Fig. 7. There are two traffic signs between the location of the LiDAR and the scanned road area, so there are two occlusion areas on the road. The second location is a road segment in front of a high school. The LiDAR was installed on a tripod on the road median for temporary data collection. Due to the high student volume, vehicles need to stop at this site to wait for pedestrians crossing the road. The third location is along the national highway G104, which carries mixed traffic (a lot of commercial trucks). The fourth location is the on-ramp area on an overpass, where the LiDAR was also installed on a tripod. The occlusion issue in which one vehicle is blocked by another is common when traffic converges at this site, as shown in Fig. 7.
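The partial-occlusion correction of Section III-B (extending the visible tracking point by VL_{i−1} − DVL_i along the vehicle's length direction) can be sketched as follows. This is a minimal illustration, all names hypothetical, assuming the heading angle points from the visible tracking point toward the occluded corner.

```python
import math

def extend_tracking_point(pt, heading_deg, dvl, vl_prev):
    """When the detected length DVL_i is shorter than the stored
    vehicle length VL_{i-1}, shift the tracking point by the missing
    length VL_{i-1} - DVL_i along the vehicle's length direction,
    so a virtual point stands in for the occluded corner."""
    if dvl >= vl_prev:  # per Eq. (1), VL is simply updated; nothing to fix
        return pt
    gap = vl_prev - dvl
    rad = math.radians(heading_deg)
    return (pt[0] + gap * math.cos(rad), pt[1] + gap * math.sin(rad))
```

For instance, a 4.5 m vehicle of which only 2.0 m is visible gets its tracking point pushed 2.5 m along the heading.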


Fig. 7. Area of study

C. The Performance of the Proposed Method

Figure 8 shows an example of tracking point location adjustment. In the example shown in Figure 8, there are two occlusion issues and one point switching issue. As for tracking point A, the occlusion issue stopped the tracking of one vehicle since point A could not be linked to any point in the next frame within the searching radius of GNN. As a result, a new vehicle ID was assigned to the same vehicle. The adjusted tracking point A' obtained with the proposed method can then be successfully linked to the point in the next frame, so the two trajectories can be merged together. As for the occlusion issue at point B, though the trajectory is still continuous, the inaccurate location can cause speed variance; the adjusted point position can reduce it. As for the point switching issue at point C, the speed variance can also be a major consequence. After adjusting point C to point C', the speed variance can be reduced, as shown in Figure 9. It is clearly shown that after adjustment, the speed variance caused by the occlusion and point switching issues was greatly reduced.

Fig. 8. Example of Tracking Point Location Adjustment

Fig. 9. Before-and-After Speed Distribution of Tracking Point Location Adjustment (speed in mph vs. frame ID): (1) Speed Distribution Before Adjustment; (2) Speed Distribution After Adjustment
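The speed spikes visible before adjustment in Figure 9 can be reproduced with a toy example. This is a hypothetical illustration (numbers invented, 10 Hz rotation rate assumed): a tracking point that switches from the front corner to the back corner inflates the frame-to-frame speed, and shifting the switched point forward by the stored vehicle length VL removes the spike.

```python
import math

FPS = 10  # 10 Hz rotation rate, assumed

def frame_speed(p_prev, p_cur, fps=FPS):
    """Frame-to-frame speed (m/s) from consecutive tracking points."""
    return math.dist(p_prev, p_cur) * fps

# Vehicle moving along +X at 10 m/s (1 m per frame), length VL = 4.5 m.
# At frame 3 the tracked point jumps from the front corner to the
# back corner, which sits 4.5 m behind the front.
front_f2 = (2.0, 0.0)                          # front corner, frame 2
back_f3 = (3.0 - 4.5, 0.0)                     # raw switched point, frame 3
virtual_f3 = (back_f3[0] + 4.5, back_f3[1])    # shifted forward by VL
```

The raw pair yields 35 m/s instead of the true 10 m/s, while the virtual point restores the correct value.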


D. Comparison To The State-of-the-Art Method

To the best of the authors' knowledge, the method proposed by Zhao et al. [25] represents the most advanced existing method. This paper evaluated the proposed occlusion fixing method by comparing it with the method documented in [25]. The vehicle tracking algorithm developed by Cui et al. [3] (without considering the occlusion) was selected as a reference method. The three algorithms were applied to process the same 30-minute data collected from the four different sites. Table 2 shows the results after processing.

TABLE 2
DATA PROCESSING RESULTS WITH DIFFERENT METHODS
Site  NDVT* (Cui et al. [3])  NDVT* (Zhao et al. [25])  Percentage of fixed occlusions (Zhao et al. [25])  NDVT* (proposed method)  Percentage of fixed occlusions (proposed method)
1     50                      35                        30.0%                                              3                        94.0%
2     66                      39                        40.9%                                              4                        89.7%
3     168                     97                        42.26%                                             5                        97.0%
4     45                      22                        51.11%                                             3                        93.3%
*NDVT: number of detected vehicle trajectories.

It was shown that the proposed method can fix more disconnected trajectories than the method developed by Zhao et al. More than 89% of disconnected trajectories can be integrated with the proposed method, while only 52% of disconnected trajectories at most can be integrated by the method of Zhao et al. The testing results also showed a significant difference across sites for the method developed by Zhao et al., which fixed only 30.0% of disconnected trajectories at site 1 and 51.11% at site 4. In contrast, the proposed method had nearly the same high performance at the different sites, which verifies the scalability of our algorithm across different scenarios.

Besides the performance of fixing disconnected vehicles' trajectories, the detection range is also important, since it determines the region of interest (ROI) for traffic flow monitoring, route navigation, near-crash warning systems, and other applications. The detection range in [25] only reached 30 m (in one direction) due to the thin point clouds far away from the LiDAR sensor. Thin point clouds lead to the loss of tracking points and greatly reduce the accuracy of vehicle tracking. The proposed method can switch the tracking point by creating a virtual point, which guarantees the continuity and accuracy of the tracking point, and it can ensure the continuity of the vehicle's trajectories according to the distance between adjacent frames. Comparison between the two algorithms showed that the effective detection range of ours can reach about 60 m, which is two times farther than Zhao's.

E. Parameter Analysis and Error Diagnosis

The experimental results showed that if the value of ϒ became larger, the tracking point was more easily located at a non-corner point; however, due to the discrete feature of point clouds, ϒ should be larger than 0°. So ϒ is suggested to be 30° in this paper.

There are still some errors found in the results after applying the proposed algorithm. The raw LiDAR data were then checked in RSView for diagnosis. If a long vehicle (such as a commercial truck) enters the road while blocked by another object and the tracking starts by coincidence, the vehicle may be chopped into two dispersed parts. Since frame i-1 is available, there will be two IDs reported. As a result, there may be two IDs existing in the record until the occlusion disappears, as shown in Figure 10.

Fig. 10. Long Chopped Truck

Another issue is a pickup with a trailer. Since the connection part between the vehicle and the trailer may not be detected by the LiDAR, the vehicle may be detected as two vehicle records, as shown in Figure 11.

Fig. 11. Pickup with a Trailer

The third issue is long-time occlusion. One important assumption of the proposed method is that the occluded object can be detected again after some time. If the occlusion occurs continuously and the occluded object does not show up again, then the algorithm stops tracking at the beginning of the occlusion, as shown in Figure 12.
length direction or width direction. Therefore, the larger value
of ϒ had a negative impact on treating the corner point as the
tracking point. And if the value of ϒ became smaller, it can
precisely locate the corner point as the tracking point in theory.
However, due to the unsmooth surface of the vehicle and the (1)

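As a minimal illustration of the failure mode just described, and of frame-to-frame nearest-neighbor linking in general, the sketch below coasts a track while its object is occluded and terminates it once a maximum number of missed frames is exceeded. The class, gate, and threshold names are our own illustrative assumptions (a greedy simplification, not the paper's GNN implementation):

```python
import math

GATE_DISTANCE = 3.0      # max linking distance (m) between adjacent frames; illustrative
MAX_MISSED_FRAMES = 10   # frames a track may coast while occluded; illustrative

class Track:
    def __init__(self, track_id, point):
        self.track_id = track_id
        self.points = [point]   # tracking point (x, y) per frame
        self.missed = 0         # consecutive frames with no linked detection

def associate(tracks, detections):
    """Greedy nearest-neighbor linking of detections to existing tracks.

    Returns the detections that matched no track (candidates for new tracks).
    """
    unmatched = list(detections)
    for track in tracks:
        last = track.points[-1]
        nearest = min(unmatched, key=lambda d: math.dist(last, d)) if unmatched else None
        if nearest is not None and math.dist(last, nearest) <= GATE_DISTANCE:
            track.points.append(nearest)
            track.missed = 0
            unmatched.remove(nearest)
        else:
            track.missed += 1   # no detection this frame: the object is occluded
    # a track occluded for too long is terminated -- the failure case in the text
    tracks[:] = [t for t in tracks if t.missed <= MAX_MISSED_FRAMES]
    return unmatched
```

A fuller implementation would also open new tracks from the unmatched detections and, as the proposed method does, re-link a reappearing object to its coasted track to bridge the occlusion gap.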
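As a cross-check of Table 2, the reported fix rates follow from the number of disconnected trajectories before linking and the number still disconnected afterwards. A minimal sketch, assuming that reading of the NDVT columns (the function name is ours, and it reproduces, e.g., the Site 1 values):

```python
def fixed_percentage(before_linking, after_linking):
    """Share (%) of disconnected trajectories merged by the linking step."""
    return 100.0 * (before_linking - after_linking) / before_linking

# Site 1 of Table 2: 50 disconnected trajectories with the reference method,
# 35 remain with Zhao et al. [25], 3 remain with the proposed method.
site1_zhao = fixed_percentage(50, 35)      # 30.0, matching Table 2
site1_proposed = fixed_percentage(50, 3)   # 94.0, matching Table 2
```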

Fig. 12. Truck Occluded by Another One: (1) Frame 1899; (2) Frame 1926

At Frame 1899, there are two visible commercial trucks. Later, at Frame 1926, Truck A was blocked by Truck B, and Truck A did not show up again within the detection range of the roadside LiDAR. As a result, the trajectory of Truck A was broken from Frame 1926. This issue could not be fixed by the proposed algorithm.

Fig. 13. Vehicle passing X=0 & Y>0 in one frame

Another special issue is shown in Figure 13. It can be seen that when the vehicle passed the line X=0, an additional part outside the vehicle was generated. This was caused by the mechanical features of the rotating LiDAR. The LiDAR considers the line X=0 with Y>=0 as the starting and ending location of one frame, so when a vehicle is close to X=0, it may be scanned twice in the same frame. The distance between the first scanned part and the second scanned part can then be estimated from the speed and the location of the vehicle, as shown in equation (9):

Dis_off = v / F − d − D_VL    (9)

where Dis_off is the distance offset between the first scanned part and the second scanned part, v is the vehicle speed, F is the rotating frequency of the LiDAR, and d is the distance to the vehicle when it is first scanned. With the traditional GNN, another vehicle ID will be assigned to the first main body of the vehicle. Though the proposed method can merge the two object IDs into one, the speed at this frame may be exaggerated, and the detected length of the vehicle can be larger than its normal length, depending on the real moving speed of the vehicle. The speed variance at this frame increases with the speed of the vehicle. This issue could not be solved by the current algorithm.

Though four major reasons can cause the failure of connecting the trajectories, the probability of those special events is relatively low. In addition, the performance of the proposed method in adverse situations such as foggy or snowy weather has not been evaluated due to the lack of data collected under such conditions.

V. CONCLUSION

This paper developed a novel method to extract vehicle trajectories considering the influence of object occlusion (both partial occlusion and full occlusion) as well as the tracking-point switching issue. The proposed method was evaluated at four actual sites. Compared to the state-of-the-art methods, the proposed method can greatly improve the accuracy of the vehicle trajectories by automatically merging the disconnected ones. Our contributions are summarized as follows. (1) A point-switching adjustment algorithm based on creating a virtual point was proposed to fix the tracking-point switching issue, making it feasible to track the vehicles. (2) An occlusion detecting and fixing algorithm was proposed to extract the vehicle trajectories under full occlusion and partial occlusion. (3) The developed multiple-vehicle trajectory extraction algorithm outperformed state-of-the-art methods at four different experimental sites, fixing at least 89% of the disconnected trajectories. This effort can benefit many transportation areas such as traffic volume count, vehicle speed tracking, adaptive traffic signal timing, and traffic safety analysis [59].

It should be mentioned that there are several special cases that the proposed method cannot handle; how to track vehicles under those special scenarios is a topic for future studies. Previous studies found that the use of multiple sensors can provide better performance in object tracking [37]. This research focused purely on vehicle detection and tracking with the roadside LiDAR sensor. Cameras and radar sensors may also be available at some sites, and it will be interesting to integrate the data collected from different types of traffic sensors to further improve the tracking accuracy.
REFERENCES
[1] Wu, J., Xu, H., Zheng, Y. and Tian, Z., 2018. A novel method of vehicle-pedestrian near-crash identification with roadside LiDAR data. Accident Analysis & Prevention, 121, pp.238-249.
[2] Jung, Y.K. and Ho, Y.S., 1999, October. Traffic parameter extraction using video-based vehicle tracking. In Proceedings 1999 IEEE/IEEJ/JSAI International Conference on Intelligent Transportation Systems (Cat. No. 99TH8383), pp. 764-769.
[3] Cui, Y., Xu, H., Wu, J., Sun, Y. and Zhao, J., 2019. Automatic Vehicle Tracking with Roadside LiDAR Data for the Connected-Vehicles System. IEEE Intelligent Systems, 34(3), pp.44-51.


[4] Zhao, J., Xu, H., Wu, J., Zheng, Y. and Liu, H., 2018. Trajectory tracking and prediction of pedestrian's crossing intention using roadside LiDAR. IET Intelligent Transport Systems, 13(5), pp.789-795.
[5] Bhaskar, P.K. and Yong, S.P., 2014, June. Image processing based vehicle detection and tracking method. In 2014 IEEE International Conference on Computer and Information Sciences (ICCOINS), pp. 1-5.
[6] Lv, B., Xu, H., Wu, J., Tian, Y., Zhang, Y., Zheng, Y., Yuan, C. and Tian, S., 2019. LiDAR-Enhanced Connected Infrastructures Sensing and Broadcasting High-Resolution Traffic Information Serving Smart Cities. IEEE Access, 7, pp.79895-79907.
[7] Guan, H., Xingang, W., Wenqi, W., Han, Z. and Yuanyuan, W., 2016, May. Real-time lane-vehicle detection and tracking system. In IEEE 2016 Chinese Control and Decision Conference (CCDC), pp. 4438-4443.
[8] Xu, W., Wei, J., Dolan, J.M., Zhao, H. and Zha, H., 2012, May. A real-time motion planner with trajectory optimization for autonomous vehicles. In 2012 IEEE International Conference on Robotics and Automation, pp. 2061-2067.
[9] Ma, Y., Wu, X., Yu, G., Xu, Y. and Wang, Y., 2016. Pedestrian detection and tracking from low-resolution unmanned aerial vehicle thermal imagery. Sensors, 16(4):446.
[10] Wu, J. and Xu, H., 2017. Driver behavior analysis for right-turn drivers at signalized intersections using SHRP 2 naturalistic driving study data. Journal of Safety Research, 63, pp.177-185.
[11] Wang, Q., Zheng, J., Xu, H., Xu, B. and Chen, R., 2017. Roadside magnetic sensor system for vehicle detection in urban environments. IEEE Transactions on Intelligent Transportation Systems, 19(5), pp.1365-1374.
[12] Sun, Y., Xu, H., Wu, J., Hajj, E.Y. and Geng, X., 2017. Data processing framework for development of driving cycles with data from SHRP 2 Naturalistic Driving Study. Transportation Research Record, 2645(1), pp.50-56.
[13] Zhou, X., Tanvir, S., Lei, H., Taylor, J., Liu, B., Rouphail, N.M. and Frey, H.C., 2015. Integrating a simplified emission estimation model and mesoscopic dynamic traffic simulator to efficiently evaluate emission impacts of traffic management strategies. Transportation Research Part D: Transport and Environment, 37, pp.123-136.
[14] Zhao, J., Li, Y., Xu, H. and Liu, H., 2019. Probabilistic Prediction of Pedestrian Crossing Intention Using Roadside LiDAR Data. IEEE Access, 7, pp.93781-93790.
[15] Mensing, F., Bideaux, E., Trigui, R. and Tattegrain, H., 2013. Trajectory optimization for eco-driving taking into account traffic constraints. Transportation Research Part D: Transport and Environment, 18, pp.55-61.
[16] Chen, J., Tian, S., Xu, H., Yue, R., Sun, Y. and Cui, Y., 2019. Architecture of Vehicle Trajectories Extraction with Roadside LiDAR Serving Connected Vehicles. IEEE Access, 7, pp.100406-100415.
[17] Feng, Y., Head, K.L., Khoshmagham, S. and Zamanipour, M., 2015. A real-time adaptive signal control in a connected vehicle environment. Transportation Research Part C: Emerging Technologies, 55, pp.460-473.
[18] Yu, C., Feng, Y., Liu, H.X., Ma, W. and Yang, X., 2018. Integrated optimization of traffic signals and vehicle trajectories at isolated urban intersections. Transportation Research Part B: Methodological, 112, pp.89-112.
[19] Chen, J., Xu, H., Wu, J., Yue, R., Yuan, C. and Wang, L., 2019. Deer Crossing Road Detection with Roadside LiDAR Sensor. IEEE Access, 7, pp.65944-65954.
[20] Zheng, J., Yang, S., Wang, X., Xia, X., Xiao, Y. and Li, T., 2019. A Decision Tree based Road Recognition Approach using Roadside Fixed 3D LiDAR Sensors. IEEE Access, 7, pp.53878-53890.
[21] Wu, J., Tian, Y., Xu, H., Yue, R., Wang, A. and Song, X., 2019. Automatic ground points filtering of roadside LiDAR data using a channel-based filtering algorithm. Optics & Laser Technology, 115, pp.374-383.
[22] Lee, H. and Coifman, B., 2012. Side-fire lidar-based vehicle classification. Transportation Research Record, 2308(1), pp.173-183.
[23] Gutjahr, B., Gröll, L. and Werling, M., 2016. Lateral vehicle trajectory optimization using constrained linear time-varying MPC. IEEE Transactions on Intelligent Transportation Systems, 18(6), pp.1586-1595.
[24] Wan, N., Vahidi, A. and Luckow, A., 2016. Optimal speed advisory for connected vehicles in arterial roads and the impact on mixed traffic. Transportation Research Part C: Emerging Technologies, 69, pp.548-563.
[25] Zhao, J., Xu, H., Liu, H., Wu, J., Zheng, Y. and Wu, D., 2019. Detection and tracking of pedestrians and vehicles using roadside LiDAR sensors. Transportation Research Part C: Emerging Technologies, 100, pp.68-87.
[26] Galceran, E., Olson, E. and Eustice, R.M., 2015, September. Augmented vehicle tracking under occlusions for decision-making in autonomous driving. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3559-3565.
[27] Zhang, W., Wu, Q.J., Yang, X. and Fang, X., 2008. Multilevel framework to detect and handle vehicle occlusion. IEEE Transactions on Intelligent Transportation Systems, 9(1), pp.161-174.
[28] Kamijo, S., Matsushita, Y., Ikeuchi, K. and Sakauchi, M., 2000, September. Occlusion robust tracking utilizing spatio-temporal Markov random field model. In Proceedings 15th International Conference on Pattern Recognition (ICPR-2000), 1, pp. 140-144.
[29] Lou, J., Tan, T., Hu, W., Yang, H. and Maybank, S.J., 2005. 3-D model-based vehicle tracking. IEEE Transactions on Image Processing, 14(10), pp.1561-1569.
[30] Wu, J., Xu, H. and Zheng, J., 2017, October. Automatic background filtering and lane identification with roadside LiDAR data. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pp. 1-6.
[31] Wu, J., Xu, H. and Zhao, J., 2018. Automatic lane identification using the roadside LiDAR sensors. IEEE Intelligent Transportation Systems Magazine, in press.
[32] Wu, J., Xu, H., Sun, Y., Zheng, J. and Yue, R., 2018. Automatic background filtering method for roadside LiDAR data. Transportation Research Record, 2672(45), pp.106-114.
[33] Wu, J., 2018. An automatic procedure for vehicle tracking with a roadside LiDAR sensor. Institute of Transportation Engineers. ITE Journal, 88(11), pp.32-37.
[34] Chavez-Garcia, R.O. and Aycard, O., 2015. Multiple sensor fusion and classification for moving object detection and tracking. IEEE Transactions on Intelligent Transportation Systems, 17(2), pp.525-534.
[35] Song, T.L., Kim, H.W. and Musicki, D., 2015. Iterative joint integrated probabilistic data association for multitarget tracking. IEEE Transactions on Aerospace and Electronic Systems, 51(1), pp.642-653.
[36] Chen, X., Li, Y., Li, Y., Yu, J. and Li, X., 2016. A novel probabilistic data association for target tracking in a cluttered environment. Sensors, 16(12):2180.
[37] Choi, J., Ulbrich, S., Lichte, B. and Maurer, M., 2013, October. Multi-target tracking using a 3d-lidar sensor for autonomous vehicles. In 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), pp. 881-886.
[38] Li, R., Zhao, Y., Chen, J., Zhou, S., Xing, H. and Tao, Q., 2018, June. Target Detection Algorithm Based on Chamfer Distance Transform and Random Template. In 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), pp. 106-112.
[39] Liu, S., Zheng, J., Wang, X., Zhang, Z. and Sun, R., 2019, June. Target Detection from 3D Point-Cloud using Gaussian Function and CNN. In 2019 34th Youth Academic Annual Conference of Chinese Association of Automation (YAC), pp. 562-567.
[40] Zhang, J., Xiao, W., Coifman, B., et al., 2020. Vehicle Tracking and Speed Estimation From Roadside Lidar. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13, pp.5597-5608.
[41] Kim, T. and Park, T.H., 2020. Extended Kalman filter (EKF) design for vehicle position tracking using reliability function of radar and lidar. Sensors, 20(15):4126.
[42] Wu, J., Zhang, Y., Tian, Y., Yue, R. and Zhang, H., 2021. Automatic Vehicle Tracking with LiDAR-Enhanced Roadside Infrastructure. Journal of Testing and Evaluation, 49, pp.121-133.
[43] A. R., V.K., Subramanian, S.C. and Rajamani, R., 2021. On Using a Low-Density Flash Lidar for Road Vehicle Tracking. Journal of Dynamic Systems, Measurement, and Control, 143(8).
[44] Zhang, Z.Y., Zheng, J., Wang, X. and Fan, X., 2018, July. Background Filtering and Vehicle Detection with Roadside Lidar Based on Point Association. In 2018 37th Chinese Control Conference (CCC), pp. 7938-7943.
[45] Coifman, B., Beymer, D., McLauchlan, P. and Malik, J., 1998. A real-time computer vision system for vehicle tracking and traffic surveillance. Transportation Research Part C: Emerging Technologies, 6(4), pp.271-288.
[46] Wu, J., 2018. An automatic procedure for vehicle tracking with a roadside LiDAR sensor. Institute of Transportation Engineers. ITE Journal, 88(11), pp.32-37.
[47] Sun, Y., Xu, H., Wu, J., Zheng, J. and Dietrich, K.M., 2018. 3-D data processing to extract vehicle trajectories from roadside LiDAR data. Transportation Research Record, 2672(45), pp.14-22.


[48] Wu, J., Xu, H. and Zheng, J., 2017, October. Automatic background
filtering and lane identification with roadside LiDAR data. In 2017 IEEE
20th International Conference on Intelligent Transportation Systems
(ITSC), pp. 1-6.
[49] Wu, J., Xu, H. and Zhao, J., 2018. Automatic lane identification using the
roadside LiDAR sensors. IEEE Intelligent Transportation Systems
Magazine, in press.
[50] Wu, J., Xu, H., Zhao, J., and Zheng, J., 2021. Automatic Vehicle
Detection with Roadside LiDAR Data under Rainy and Snowy
Conditions. IEEE Intelligent Transportation Systems Magazine, 13(1),
pp.197-209.
[51] Vaquero, V., del Pino, I., Moreno-Noguer, F., Sola, J., Sanfeliu, A. and
Andrade-Cetto, J., 2017, September. Deconvolutional networks for
point-cloud vehicle detection and tracking in driving scenarios. In 2017
IEEE European Conference on Mobile Robots (ECMR), pp. 1-7.
[52] Li, B., Zhang, T. and Xia, T., 2016. Vehicle detection from 3d lidar using
fully convolutional network. arXiv preprint arXiv:1608.07916.
[53] Wu, J., Xu, H., Zheng, Y., Zhang, Y., Lv, B. and Tian, Z., 2019.
Automatic Vehicle Classification using Roadside LiDAR Data.
Transportation Research Record, 2673 (6), 153-164.
[54] Thornton, D.A., Redmill, K. and Coifman, B., 2014. Automated parking
surveys from a LIDAR equipped vehicle. Transportation research part C:
emerging technologies, 39, pp.23-35.
[55] Yue, R., Xu, H., Wu, J., Sun, R. and Yuan, C., 2019. Data Registration
with Ground Points for Roadside LiDAR Sensors. Remote Sensing,
11(11): 1354.
[56] Lv, B., Xu, H., Wu, J., Tian, Y., Tian, S. and Feng, S., 2019. Revolution
and rotation-based method for roadside LiDAR data integration. Optics &
Laser Technology, 119, p.105571.
[57] Lv, B., Xu, H., Wu, J., Tian, Y. and Yuan, C., 2019. Raster-based
Background Filtering for Roadside LiDAR Data. IEEE Access, 7 (1),
76779 - 76788.
[58] Wu, J., Xu, H. and Liu, W., 2019. Points Registration for Roadside
LiDAR Sensors. Transportation Research Record, 2673(9), pp.627-639.
[59] Zheng, Y., Zhang, Y. and Li, L., 2016. Reliable path planning for bus
networks considering travel time uncertainty. IEEE Intelligent
Transportation Systems Magazine, 8(1), pp.35-50.
