
DERIVATION OF A BELIEF FILTER

FOR HIGH RANGE RESOLUTION RADAR


SIMULTANEOUS TARGET TRACKING AND IDENTIFICATION

A dissertation submitted in partial fulfillment of the


requirements for the degree of
Doctor of Philosophy

By

ERIK PHILIP BLASCH

B.S.M.E., Massachusetts Institute of Technology, 1992


M.S.M.E., Georgia Institute of Technology, 1994
M.S.I.E., Georgia Institute of Technology, 1995
M.S.E., Wright State University, 1997
M.B.A., Wright State University, 1998
M.S., Wright State University, 1999

_____________________________________

1999
Wright State University
WRIGHT STATE UNIVERSITY

SCHOOL OF GRADUATE STUDIES

Dec 13, 1999

I HEREBY RECOMMEND THAT THE DISSERTATION PREPARED UNDER MY SUPERVISION BY


Erik Philip Blasch ENTITLED Derivation of a Belief Filter for High Range Resolution Radar
Simultaneous Target Tracking and Identification BE ACCEPTED IN PARTIAL FULFILLMENT OF
THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in Engineering (Electrical).

_____________________________
Lang Hong, Ph.D.
Dissertation Director

_____________________________
James E. Brandeberry, Ph.D.
Director, College of Engineering and
Computer Science Ph.D. program

_____________________________
Joseph F. Thomas, Jr., Ph.D.
Dean, School of Graduate Studies
Committee on Final Examination

________________________
Lang Hong, Ph.D.

_________________________
Fred D. Garber, Ph.D.

________________________
Arnab K. Shaw, Ph.D.

__________________________
Richard J. Bethke, Ph.D.

________________________
John J. Westerkamp, Ph.D.

________________________
Jeffery R. Layne, Ph.D.

© Copyright by

Erik Philip Blasch

All Rights Reserved

1999

ABSTRACT
Blasch, Erik Philip. Ph.D., Department of Engineering, Wright State University, 1999. Derivation of a
Belief Filter for High Range Resolution Radar Simultaneous Target Tracking and Identification.

Standard multitarget tracking algorithms employ Bayesian updating to associate the highest measurement probability to target tracks. Limitations of traditional tracking algorithms are that measurement-to-track associations do not account for target-type uncertainties, target identities, or incomplete knowledge. The dissertation develops the Joint Belief-Probabilistic Data Association (JBPDA) and the Set-Based Data Association (SBDA) simultaneous tracking and identification algorithms to advance data association tracking techniques, provide an alternative to Bayesian tracking methods, and demonstrate a fusion of stochastic and set-based uncertainties.

The JBPDA algorithm fuses kinematic-continuous and identification-discrete features to track and identify targets. The set of features includes moving-target indicator (MTI) position hits and high range resolution (HRR) radar range-bin locations and amplitudes. Kinematic states include target positions, velocities, and poses, where pose is the aspect angle of the target. Using MTI positions and estimated track-pose information, HRR features are extracted to obtain a target belief-ID with an associated belief-pose. To facilitate the combination of continuous-probabilistic and discrete-set mathematics, a belief-probabilistic uncertainty calculus is devised to combine track-pose and belief-pose. The intersection, association, and filtering of track and identification beliefs result in a recursive simultaneous tracking and identification algorithm.

The SBDA uses a belief filter, based on Dempster-Shafer theory, to filter past, estimate current, and predict future tracking and ID beliefs by associating believable events for measurement-to-track updates. Belief sets are constructed spatially over kinematic positions, evidentially over target identifications, and temporally and recursively for stochastic process updates. In contrast to Bayesian tracking methods, the SBDA is robust to cluttered measurements, can identify an unknown number and type of targets, and captures incomplete knowledge and target maneuvers.

The dissertation research includes developing state equations for simultaneous tracking and ID, innovating mathematics to combine belief and probabilistic uncertainties, deriving information fusion and mutual information relationships for HRR and MTI measurements, assessing track quality through belief updates, and demonstrating set-based and belief-probabilistic recursive belief-filtering approaches for simultaneous HRR tracking and identification of moving targets. The results demonstrate that the JBPDA and SBDA effectively track and ID a set of moving targets from cluttered HRR and MTI measurements.

ACKNOWLEDGEMENTS

While there are many people to thank who helped and encouraged me along the way, I am indebted to the United States Air Force for allowing me to work on my dissertation in connection with my officer duties. In particular, the Air Force Research Laboratory / Sensors Directorate / Automatic Target Recognition group (AFRL/SNAT) was key in motivating me to finish my dissertation work. Jerry Covert, Ed Zelnio, Bill Baker, Dale Nelson, and finally Doug Hager were supportive of my efforts. My co-workers were also encouraging as I saw others finishing their Ph.D. research – Devert Wicker, Scott Weaver, Rick Mitchell, Greg Power, and Juan Vasquez, as well as Bill Pierson and Jeff Layne – and I am proud to join the group. I would also like to thank AFRL for sponsoring the MSTAR and DDB collections, which I was able to use in my dissertation. For the rest of the staff – Jim Morgan, Phil Douville, Phil Hanselman, Raj Malhotra, Mike Berarducci, Ron Dennis, Todd Jenkins, Al Wood, Sandy Berning, and Bruce Rahn, as well as building 620 custodial and computer support – I say thanks. Finally, for the true graybeards – Pete Howe, Vince Velten, Jim Leonard, Lou Tamburino, and the late Harry Klopf – I appreciate their insightful conversations and guidance. Stan Musick – the only true graybeard I met in the Air Force – provided me with, by far, the best information and most critical conversations for my dissertation work.

While it goes without question that I would thank my committee, I believe that I was lucky to have Lang Hong and John Westerkamp as mentors. They are truly remarkable people, and their research quality is something I hope to one day emulate (e.g., see the references). Lang gave me the theoretical support and guidance for the whole dissertation, as well as leading the efforts to coordinate a Ph.D. at WSU. John Westerkamp gave me the initial code to interface with the HRR data and continual direction in processing the data. Jeff Layne was not only a committee member, but also a colleague, friend, and author of another track and ID algorithm. My other committee members were supportive in motivating me to finish my Ph.D. at WSU. Arnab Shaw suggested I try the program only months after I came to WPAFB. Richard Bethke came from the same engineering background, school, and program. Fred Garber was supportive from the beginning, at conferences, and on the base. I cannot thank them enough, because after my Ph.D. experience, I realize the committee is the difference between an ABD and a Ph.D. Additionally, I thank two ABDs, Frank Coombs and Lannie Hudson, for their guidance over the years.

I would like to thank the WSU Ph.D. program for its initiation and am proud to complete my degree at WSU. I would also like to show appreciation to AFRL, the Dayton Area Graduate Studies Institute (DAGSI), the WSU Psychology department, and the Air Force Tuition Assistance Program, all of which helped me fund the Ph.D. program. In addition, the Air Force and AFRL/SNAT supported me when I desired to present some of the work at national and international conferences, when I needed computer equipment, and with research facilities (my home).

Throughout the years, I have been studying with many friends who probably deserve some credit. For my studies at Wright State, I can group them into two types of people: my friends in class who were also on the base, and Mike Bryant. Mike gave me a good laugh to brighten my day, a place to sleep, Matlab guidance, and the incentive to complete the WSU program.

Finally, throughout my struggles, my family and friends have been there to support me. Quentin Davis has been a friend for a long time – ever since we met in AFROTC – and I am glad that he encouraged me to finish. For my former Air Force Captain brothers, Ian Blasch and Kyle Blasch, I appreciate their support in the process, because they were there when I was experiencing the joys of MIT AFROTC, the AF delay program for graduate school, and Active Duty Service. Most of all, I have to thank my parents, Dr. Bruce Blasch and Dr. Barbara Blasch, for caring, teaching, and guiding me. They are my true mentors, and my past eight years have been simpler with their love, kindness, and knowledge of the doctoral struggles.

In summary, to my family, my committee, WSU, AFIT, and the AFRL research group: I am truly lucky to have combined my interests in this Ph.D., and I will never forget your kindness. It is ironic that I signed up to do my duty to serve my country, but it appears that the United States Air Force actually served me – a favor I hope someday to return.

TABLE OF CONTENTS

List of Figures
List of Tables
List of Symbols and Operators
Chapter 1 Introduction
1.1 Multisensor/Multitarget Tracking Problem Definition
1.2 Contributions
Chapter 2 Literature Overview
2.1 Multisensor/Multitarget Tracking Approaches
2.1.1 Radar Tracking
2.1.2 Tracking and Classification Algorithms
2.2 Set Theory Algorithms
2.3 Multisensor Evidential Fusion for ATR
2.4 HRR Automatic Target Recognition Approaches
2.5 HRR Set Theoretical Tracking and Identification
Chapter 3 Background Information
3.1 Sensor Management and Sensor Fusion
3.2 Set Issues
3.2.1 The Dempster-Shafer Set Theory
3.2.2 Dempster-Shafer Set Theory – Example Belief Filtering Track and ID Belief Example
3.2.2.1 Track and ID Beliefs for Object ID
3.2.2.2 Track and ID Beliefs for the Number of Objects
3.3 Classification Belief Filter
3.3.1 Confidence Value Calculation
3.3.2 Belief-Hypothesis Confirmation
3.3.3 Peak Feature Belief Evidence Accrual
3.3.4 Peak Feature Belief Evidence Accrual for Tracking
3.4 Set Level Fusion – Tracking and Identification Belief Filter
3.5 Chapter 3 Summary
Chapter 4 Problem Formulation
4.1 Track and Identification Fusion
4.2 Data Association
4.3 High Range Resolution Radar
4.4 Tracking and ID with HRR Information
4.5 Limitations of Current Approaches
4.6 Proposed Problem
4.6.1 Multiobject Tracking
4.6.2 Moving HRR Detection Measurements
4.6.3 Multilevel Sensor Fusion for Tracking and ID
4.6.4 Simultaneous Tracking and ID Assumptions
Chapter 5 Joint Belief-Probabilistic Data Association Tracking and Identification
5.1 Belief Probabilistic State and Measurement Equations
5.1.1 The Belief Functions
5.1.2 Belief Processing
5.1.3 Belief Pose Update
5.2 Tracking and Identification Belief Filter
5.2.1 Track and ID State Estimation
5.2.2 Track and ID Fusion
5.2.3 Track Initiation by ID
5.3 Summary of Chapter 5
Chapter 6 Set-Based Simultaneous Tracking and ID
6.1 Kinematic Tracking Belief Filter
6.2 Fused Track and Identification State Estimation
6.3 Belief Filtering Cumulative Track and ID Recursive Belief Probability Measure
6.3.1 Tracking Set Combinations
6.3.2 Identification Set Combinations
6.4 Mutual Information for Combining Sets of Information
6.4.1 Mutual Information Model for Moving-Signature Detection
6.4.2 Relative Entropy Information Metric for Moving-HRR Signature Detection
6.4.3 Articulation and Object Type Classification Methods
6.5 Set-Based Event Matrix
6.6 Tracking and ID Set Combinations
6.7 Track and ID Recursive Confidence and Uncertainty Measures
6.7.1 Temporal-Spatial Information Fusion for Disjoint Data
6.7.2 Temporal-Spatial Information Fusion for Joint Data
6.8 Track Update and Propagation
6.9 Summary of Chapter 6
Chapter 7 Belief Filter Tracking and ID Results
7.1 The MSTAR Data Set
7.2 JBPDA Results - OR
7.2.1 Case 1: Non-maneuvers with Multiobject Crossing
7.2.2 Case 2: Maneuvers
7.2.3 No Maneuvers – Bad Starting Position – Belief-Probabilistic Tracker
7.2.4 Tracking Unknown Number of Targets
7.3 SBDA Results - AND
7.3.1 Results of Mutual Information Set Association
7.3.2 Evidential Accumulation
7.3.3 Case 1: Non-maneuvers with Multiobject Crossing
7.3.4 Case 2: Maneuvers
7.3.5 Receiver Operator Characteristic (ROC) Analysis
7.4 Tabulation of Object IDs for JBPDA and SBDA
7.5 Baseline Case
7.6 Summary of the Results Section
8.0 Discussion
9.0 Conclusions
9.1 Summary of Work
9.2 Contributions of Dissertation
9.3 Future Work
Acknowledgements
References
List of Abbreviations
Appendix A Tracking
Appendix B FUSE Algorithm

LIST OF FIGURES

Figure 1.1. a) Multi-feature-Multitarget Air-to-ground HRR Target Tracking, and b) HRR Signature.
Figure 2.1. Tracking and ID Literature.
Figure 2.2. Multi-Object State Estimation Literature.
Figure 3.1. JDL Sensor Fusion Model.
Figure 3.2. Tracking and ID Sensor Fusion Model.
Figure 3.3. Move-Stop-Move Scenario.
Figure 3.4. Elements of Sensor Fusion.
Figure 3.5. Radar Planning and Control.
Figure 3.6. Set space for Belief Filter Normalization.
Figure 3.7. O1 Belief Intersections for track and ID.
Figure 3.8. Mass Probability Update for ID System.
Figure 3.9. Mass Probability Update for Track System.
Figure 3.10. Mass Probability Update for Track and ID System.
Figure 3.11. Mass Probability Update for Number of Objects.
Figure 3.12. High Range Resolution Profile, showing amplitudes (a) and range bin locations (l).
Figure 3.13. In-Class likelihood CDF used to determine Decision Confidence.
Figure 3.14. Scrambling and Ordering of Features.
Figure 4.1. Pose Angle - Defined.
Figure 4.2. Data Association Problem with Position Measurements Only.
Figure 4.3. Data Association using ID and Position Measurements.
Figure 4.4. Detection of SAR Images for Stationary targets versus HRR Data for Moving Targets.
Figure 4.5. Radar Tracking and Identification.
Figure 4.6. HRR Belief Tracking with only One HRR Signature Update.
Figure 4.7. HRR Belief Tracking with HRR Signatures.
Figure 4.8. Multiscan MTI plot with overlaid HRR Hit.
Figure 4.9. Multilevel Tracking and ID Approach.
Figure 4.10. Multi-Object Tracking and ID Scenario.
Figure 4.11. Three Levels of Information Fusion for HRR Tracking and ID.
Figure 5.1. Tracking and Identification Measurement Space.
Figure 5.2. Object Identification Uncertainty Calculation.
Figure 5.3. Distribution of Set Uncertainty stochastically over the beliefs.
Figure 5.4. Pose Estimate from a Fixed Sensor.
Figure 5.5. Pose estimate from (a) Kinematic Information only and (b) Object Identification.
Figure 5.6. Tracking and Classification Joint Association – circles indicate an "OR".
Figure 5.7. Believable Events for the association matrix.
Figure 5.8. Track Maintenance.
Figure 5.9. Information Flow for JBPDA.
Figure 5.10. Fusion of Objects in JBPDA.
Figure 5.11. Number of Tracks determined from Association Matrix Bel IDs.
Figure 6.1. Overview of the Set-Based Data Association.
Figure 6.2. SBDA Tracking Model.
Figure 6.3. SBDA Tracking with ID.
Figure 6.4. Set Theory Approach to Tracking and Identification.
Figure 6.5. Set space for Kinematic Information.
Figure 6.6. Using MI to assess a Range of Pose Estimates for HRR ID.
Figure 6.7. Histogram of Probabilities of Amplitudes from HRR profile.
Figure 6.8. Tracking and Classification Joint Association – circles indicate an "AND".
Figure 6.9. Believable Events for the Association matrix.
Figure 6.10. Calculating Joint Information.
Figure 7.1. HRR Belief Filtering Interface.
Figure 7.2. Moving and Stationary Target Acquisition and Recognition (MSTAR) Data Set.
Figure 7.3. Extracted Feature Sets – Kinematic Σ, Feature Γ, and Signal ϕ.
Figure 7.4. JBPDA Positions for Case 1, α = 1.0 with and without ID.
Figure 7.5. JBPDA Positions for Case 1, with α = 0.8 and α = 0.5.
Figure 7.6. JBPDA Position tracks for Case 2, α = 1.0 with and without ID.
Figure 7.7. JBPDA Position tracks for Case 2, α = 0.8 and α = 0.5.
Figure 7.8. JBPDA Beliefs and Plausibilities.
Figure 7.9. JBPDA Position tracks for Incorrect Track Initiation with α = 0.8 and α = 0.5.
Figure 7.10. Assessment of an Unknown Number of Tracks with One target per track.
Figure 7.11. Information Metric used to determine the Articulation and Target Type.
Figure 7.12. Average Information Metric used to determine the Articulation and Target Type.
Figure 7.13. (a) Relative Entropy Beliefs plotted over time and (b) enhanced view to show o = 11.
Figure 7.14. Classification of target types using Relative Entropy.
Figure 7.15. Discrimination of Objects – Reducing the Plausibility of Objects.
Figure 7.16. SBDA Positions for Case 1, α = 1.0 and ID.
Figure 7.17. SBDA Positions for Case 1, JBPDA with α = 0.8 and α = 0.5.
Figure 7.18. SBDA Position tracks for Case 2, α = 1.0.
Figure 7.19. SBDA Position tracks for Case 2, α = 0.8 and α = 0.5.
Figure 7.20. Belief Assessment of Pcc|d for Target Robustness.
Figure 7.21. Belief Assessment of Pmis-unknown|d for Target Robustness.
Figure 7.22. Belief Assessment of Pmis-unknown|d and Pcc|d for Target Robustness.
Figure 7.23. JPDAF and JBPDA for Constant Heading Objects.
Figure 7.24. JBPDA for Constant Heading Objects, α = 0.8 and α = 0.5.
Figure 7.25. SBDA for Constant Heading Objects, α = 1.0, α = 0.8, and α = 0.5.

LIST OF TABLES

Table 3.1. Object Initial Probabilities from ID System.
Table 3.2. Initial Probability Mass Functions.
Table 3.3. Set Probability Mass Functions.
Table 3.4. Two Measurement (Track and ID) Probability Mass Functions.
Table 3.5. Conflict Ranges for the Belief Measurement.
Table 6.1. FUSE Algorithm Table.
Table 7.1. MSTAR Target Designations.
Table 7.2. Confusion Matrix for Three Targets.
Table 7.3. Orientation information for (Target Az = 115°).
Table 7.4. Clutter Information for Case 1 and Case 2.
Table 7.5. Average Normalized Position Square Errors.
Table 7.6. Average Normalized Velocity Square Errors.
Table 7.7. Belief, Plausibility, and Certainty for Track 1.
Table 7.8. Belief, Plausibility, and Certainty for Track 2.
Table 7.9. Belief, Plausibility, and Certainty for Track 3.
Table 7.10. Average Normalized Position Square Errors for Constant Heading.
Table 7.11. Average Normalized Velocity Square Errors for Constant Heading.
Table A.1. Overview of Tracking Algorithms.

LIST OF SYMBOLS AND OPERATORS

List of Symbols

a – amplitude of an HRR range-bin feature, indexed from 1…s


b – value used to find the data association probabilities, β
c – normalization constant to assess clutter distributions
d - dimension of the hypersphere for the elliptical validation region in chi-square analysis, d = 17
e – value used to find the data association probabilities, β
f – feature of an HRR scan comprising both an amplitude and a location (a, l). Since an observation profile is fixed, the number of features equals the number of locations, indexed from 1, …, q
g - index for number of good targets where g < n and g = 1, …, G
g - smaller set of ID/track information
i.e., gϕ(a, l) = p(aq lq = ϕ) - extracted set of amplitude and location measurements from the set ϕ
h – larger set of ID/track information –
i.e. hΓ (F) = p(Γr = F) - extracted set of feature measurements Γ from the set F
h – Entropy calculation for Mutual Information processing
h(x,y) – joint entropy
h(x|y) – conditional entropy
i – index for the measurement number for each time step k, i = 1, …, mk
j – track index at each time step k, j = 1, …, mk
k – time step
l – locations of range bin features in a HRR scan indexed from 1 … s
m – mk measurements at each time step k, consisting of MTI position hits and HRR signature measurements
mij – elements of the belief Markov transition matrix
m – mass assignment probability in Dempster-Shafer theory
n – mass assignment probability in Dempster-Shafer theory
n – index for the total number of objects tracked
o – object in question and is indexed from o = 1, …, O
p – probability with a probability density function
q – index of a specific peak location with an associated amplitude
r – index for object signature comparison – r is the training index that matches object o
s - the set of features for a signature and a set of beliefs of an object for a track
t – track number for a run and indexed from t = 1, …, T
u – used as index to sum over object signatures
v – measurement variance; vk is a zero-mean mutually independent white Gaussian noise sequence with known covariance matrix Rk
w – kinematic variance; wk is a zero-mean mutually independent white Gaussian noise sequence with known covariance matrix Qk
x – the target state representing the target positions and velocities, xt = [xt ẋt yt ẏt]T
x – target/object x position
y – target/object y position
z – measurement vector composed of the kinematic state and HRR features, zk = [xk, f1k, …, fqk]
A – Mutual information association matrix
A – set of HRR amplitudes
B - covariance matrix, Bk, for the belief state Bel(k)
C – confidence for a target – (i.e. a 1 x 12 state vector for 11 targets and 1 unknown target)
Confidence = certainty = 1 – uncertainty = 1 – [Bel, Pl]

Cd - the volume of the unit hypersphere of dimension d
D – Differential Entropy D(x||y)
D - covariance matrix, Dk, for the belief measurement Bel(k)
E – estimation of mean and variance
F – feature space consisting of features of locations and amplitudes of an object
F – kinematic state transition matrix
G – Number of Good targets with high target-ID beliefs
H – Measurement Matrix
I – the Identity matrix
I – Mutual Information of an observation Y and trained object set O, I(O,Y)
J – Jacobian Matrix
K – Tracking conflict function, normalizes track belief
L – Classification conflict function, normalizes classification belief
L – Set of HRR locations
M - Bel state transition matrix composed of elements m11, …, m(n+1)(n+1)
N – The normal function for a Gaussian distribution
O – total number of Objects per scan
P – Probability – cumulative distribution function
PD – Probability of detection of an object
PG - Probability that augmented belief track measurements fall within the validation region
Q – state covariance matrix for track and pose state
R – measurement covariance matrix for track and belief pose
S – Innovation covariance
T – total number of Tracks for the entire system, where the number of tracks is t = 1, …, T
U - Uncertainty vector for each target belief and is a 1 x 12 matrix
U – universal set for Dempster Shafer theory
UBel – Uncertainty of belief
UU – Uncertainty of uncertainty - belief-probabilistic uncertainty
V – Validation region used to gate kinematic position measurements of interest
X – State matrix for kinematic, pose, and belief states [x, φ, Bel]
Y – Measured HRR profile for Mutual Information Comparison to O - the trained set of Objects
Z – Global set of measurements consisting of z = 1, …, Z
Bel – belief states for each hypothesized object of a track
Pl – plausibility states for each hypothesized object of a track
Unk – unknown class appended to the belief state to capture the unknown target information
α - weight used to determine the contributions of track and ID to pose update
β - data association weights assigned to probability measurements
β - belief measure used in set theory
χ - Chi-square information to gate kinematic measurements
δ - Track detection indicator
φ - the object pose, where a pose for each object is determined from the belief and track information
ε - Small value for truncation
γ - belief gate threshold
η - Mutual information HRR variable indexed over amplitude and locations
ϕ - the signal set consisting of a and l information that is grouped into f features, in ID measurement space
κ - uncertainty conflict for set-based data association
λ - spatial density of false measurements
µ - mean of a HRR peak location and amplitude
µF(Φ) - the prior probability mass function of the number of false measurements (the clutter model).
ν - innovation used in filtering – ν = measured - estimated
θ - joint track-ID association events

ρ - plausibility measure
σ - variance of a HRR peak location and amplitude
τ - measurement association indicator
Ψ - normalization constant for the joint track and ID association event belief-probabilities
Φ - number of false measurements
Σ - the measurement set consisting of z measurements
Ω - validation matrix
Γ - the feature set consisting of f features
ξ - the object set of interest consisting of objects between ξ = 1, …, O
∅ - the empty set

List of Operators
Σ - Summation
⊗ - intersection of plausible sets
⊕ - intersection of belief sets
∪ - set union
∩ - set intersection
⊇ - set inclusion (i.e., A ⊇ B means set A includes set B)

CHAPTER 1 INTRODUCTION

Multitarget-multisensor tracking algorithms match kinematically detected measured positions in time and space. Typical tracking methods utilize the highest probability or minimum error of association between the predicted position and current position measurements to match points. Many position hits result in measurement clutter. Limitations of probabilistic tracking algorithms in cluttered environments are: 1) target-identification confidences¹ and measurement uncertainties are not modeled, 2) high target density (clutter) may result in missed associations, and 3) the algorithms are not robust to false alarms. If tracking algorithms had access to identification (ID) information, they could enhance measurement-to-track associations, minimize track uncertainties, and discern a varying number of targets in a cluttered environment.

One method to achieve a target ID is to associate the highest probability of detection to the hypothesized target based on training methods. Examples are template matching and target modeling, which use a minimum squared error (MSE) or maximum likelihood estimator (MLE) criterion. However, fused decisions based on the highest probability or minimum error can lead to missed IDs, since no information is gained as to measurement content, training-set completeness, or measurement validity. Another approach is to utilize a set-theoretic method, which assesses a belief probability relative to a set of training information. The set-theoretic tracking/ID approach can 1) constrain a plausible number of targets of interest to track, 2) fuse track and ID sets to take advantage of nearly orthogonal information, and 3) assess track-association confidences through an accumulation of target track and ID evidence.
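
To make the evidence-accrual idea concrete, the following minimal Python sketch combines two belief mass functions with Dempster's rule (reviewed in Chapter 3) and reads out the belief and plausibility of a target ID. The code, target names, and mass values are purely illustrative and are not part of the dissertation's algorithms.

    # Minimal sketch of Dempster-Shafer evidence accrual for target ID.
    # Target names and mass values are hypothetical.
    FRAME = frozenset({"T72", "BMP2", "BTR70"})

    def combine(m1, m2):
        """Dempster's rule: fuse two mass functions over subsets of FRAME."""
        fused, conflict = {}, 0.0
        for b, mb in m1.items():
            for c, mc in m2.items():
                inter = b & c
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + mb * mc
                else:
                    conflict += mb * mc          # mass assigned to the empty set
        return {a: v / (1.0 - conflict) for a, v in fused.items()}

    def belief(m, a):        # Bel(A): total mass on subsets of A
        return sum(v for b, v in m.items() if b <= a)

    def plausibility(m, a):  # Pl(A): total mass on sets intersecting A
        return sum(v for b, v in m.items() if b & a)

    # Two looks at the same object: an HRR classifier report and track evidence.
    m_hrr = {frozenset({"T72"}): 0.6, FRAME: 0.4}            # 0.4 left uncommitted
    m_track = {frozenset({"T72", "BMP2"}): 0.5, FRAME: 0.5}

    m = combine(m_hrr, m_track)
    a = frozenset({"T72"})
    print(belief(m, a), plausibility(m, a))   # the interval [Bel, Pl]

Note how the interval [Bel, Pl] narrows as evidence accrues; this is the confidence bracket that the belief filter propagates over time.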

Humans and machines are typically trained for specific missions and/or scenarios. One such case is identification of a target. A typical scenario is a pilot looking for a target, where the target is a specific member of a target set (i.e., a tank is a vehicle). For a specific mission, the human is instructed to locate a target of interest, whether the target is moving or stationary. Since the desired target type is known (i.e., a T72 is a tank), the person seeks accumulated sensed evidence for the target, such as movement on a road, features that discern the target type, and the relative direction of target movement. When the human approaches a plausible target, either the target is moving, the human is moving, or both are moving. Attention to moving sensory ID and track features can be fused as a single perception of the target. However, before a final declaration of the target type is made, the person continually refines the set of plausible information to correctly identify the target. In the case of a moving-multitarget scenario, one must continually update the information on all targets and associate confidences with target tracks and IDs. Image analysts typically utilize target beliefs to reduce measurement uncertainty by accruing evidence from the intersection of tracking and ID information. To accumulate evidence, the human acts as a sensor manager to control a set of sensors to gather measurement data.

1.1 Multisensor/Multitarget Tracking Problem Definition

Multitarget tracking and ID sensor fusion is a subset of sensor management, which includes selecting sensors, sensor recognition policies, and tracking algorithms for a given set of mission requirements. For example, in a typical tactical aircraft, the onboard sensors are active radar, electro-optical, and navigation sensors, with each sensor having a variety of modes in which it can operate and features it can detect. Figure 1.1 shows the case of a high range resolution (HRR) radar collection of range-bin features. The radar sensor makes kinematic measurements to detect and track targets of interest while reducing pilot workload. In a dynamic and uncertain environment, a sensor manager, such as a human, must fuse the track and classification information to ID the correct target at a given time. The human can aid tracking algorithms by determining a set of tracks to follow and aid classification algorithms by constraining the set of plausible targets. Likewise, a machine can help the human by isolating the number of true targets from clutter, providing belief updates of target type, displaying the target track, and computing the predicted position of the target.

¹ Covariances could be used as stochastic confidence measures, but these are kinematic-target confidences, not target-identification confidences.

Figure 1.1. a) Multi-feature-Multitarget Air-to-ground HRR Target Tracking, and b) HRR Signature.

We wish to automate the process of simultaneous target tracking and ID to assist a human in the target-ID process. One way to augment the human's sensing capability for target ID is to offload the computations to a computer. The computations involved include surveillance operations such as searching through moving target indicator (MTI) hits, processing HRR feature measurements, and predicting track events. To match the computer instructions to the human's operational capabilities, it would be desirable to have the computer process target-ID beliefs, update target-ID confidences, evidentially accumulate target beliefs, maintain a track history of targets, and assess track-ID radar measurement uncertainty.

1.2 Contributions

Dynamic multitarget-multisensor track fusion with uncertainty requires evidence accrual to maintain a track and assess target ID evidence. The formulation might be labeled belief filtering, since sensed target tracks and identities are represented as situational beliefs. The objective of a multitarget-multisensor tracking algorithm is to 1) extract a number of tracks from the dynamic environment, 2) assess confidence-ID levels from the target classification algorithm, and 3) integrate the track and ID information for real-time beliefs of the number and types of targets from a plausible set of targets. In this dissertation, we computationally describe the belief filter to simultaneously track and ID targets, similar to what the human might do. Contributions to state-of-the-art research include 1) a novel set-theory approach to tracking and ID, 2) an uncertainty calculus to combine belief and probabilistic uncertainty, and 3) the fusion of kinematic, feature-ID, and set-level information in a recursive algorithm.

By introducing a feature-based, set-theoretic belief filtering algorithm, similar to the activities of a pilot or an image analyst, a purely mathematical approach may offer a means to control some of the computational burdens and limitations experienced by analytical data association multitarget tracking and ID techniques. Additionally, a combined track and ID algorithm can 1) improve track quality, 2) fuse nearly orthogonal sets of information, and 3) simultaneously track and ID targets in the presence of clutter. Two concepts are developed. The first implements a set-ID with a modified traditional tracking data association algorithm, which we call the Joint Belief-Probabilistic Data Association (JBPDA) approach. The second implements a purely set-theoretic tracker and target identifier, which is labeled the Set-Based Data Association (SBDA) approach. Both algorithms require the belief filter to recursively process beliefs and demonstrate an innovative way to track targets in clutter.
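
As a minimal sketch of what "recursively process beliefs" means, assume a Markov transition matrix M over target-ID beliefs, as listed in the List of Symbols; the transition and evidence values below are hypothetical, and the actual update rules are derived in Chapters 5 and 6. One predict/update cycle could look like:

    # One recursive belief-filter cycle: predict through a Markov ID
    # transition model, then fuse with new evidence and renormalize.
    # All numeric values are illustrative only.
    import numpy as np

    M = np.array([[0.90, 0.05, 0.05],     # rows: current ID, cols: next ID
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])

    bel = np.array([0.5, 0.3, 0.2])       # Bel over {ID-1, ID-2, unknown}

    def predict(bel, M):
        """Propagate ID beliefs one time step through the transition model."""
        return bel @ M

    def update(bel, evidence):
        """Fuse predicted beliefs with new ID evidence and renormalize."""
        fused = bel * evidence
        return fused / fused.sum()

    evidence = np.array([0.7, 0.2, 0.1])  # hypothetical HRR classifier output
    bel = update(predict(bel, M), evidence)
    print(bel)

Repeating the cycle as measurements arrive gives the recursive accrual of ID evidence that both the JBPDA and SBDA rely on.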

The application of the dissertation is a novel HRR tracking algorithm. HRR is used because 1) HRR applies to moving targets, 2) the Statistical Feature-Based Classifier (STaF) algorithm, developed by Mitchell and Westerkamp, provides robust set-theory analysis for HRR target classification, and 3) HRR is distance and weather invariant. Chapter 2 is a literature review of the work in multitarget tracking, sensor fusion, and set theory, with more attention given to algorithms that utilize HRR radar information as related to the JBPDAF and the SBDAF. Chapter 3 provides background information, including sensor fusion, the STaF algorithm, and Dempster-Shafer theory. Chapter 4 formulates the problem of multitarget MTI/HRR target tracking and ID. Chapter 5 develops the JBPDAF by combining continuous track and discrete ID information. Chapter 6 derives the SBDA based on set theory and mutual information to fuse track and ID sets. Chapter 7 presents simulated MTI position hits with real HRR profile results of the JBPDAF and SBDAF for simultaneously tracking and identifying moving targets from HRR measurements. Chapter 8 discusses the results, addresses benefits and limitations of the algorithms, and lists contributions of the research. Chapter 9 draws conclusions, provides a summary of the work, and lists potential areas of future investigation.

CHAPTER 2 LITERATURE OVERVIEW

The literature overview concentrates on algorithms for multitarget radar tracking and identification, set theory algorithms, sensor fusion, and high range resolution (HRR) automatic target recognition (ATR).

2.1 Multisensor/Multitarget Tracking Approaches

Multitarget tracking in the presence of clutter has been investigated for two decades. The premier multitarget tracking algorithm was the Multiple Hypothesis Tracking (MHT) algorithm developed by Reid [1] in 1979, based on a multiple hypothesis estimation (MHE) [2] of kinematic information. Although MHT is considered optimal, it is infeasible to implement due to the multitude of hypotheses that need to be generated and processed [2]. Researchers have been trying alternative approaches to compensate for the computational requirements of MHT. Instead of generating hypotheses, data association algorithms use measurement information to discern target tracks. Bar-Shalom [2] has developed numerous data association tracking algorithms that assign the highest probability of measurements to tracks; for example, the joint-probability data association filter (JPDAF) and the maximum-likelihood data association algorithms. A third approach for multitarget tracking is to track targets with multiple sensors, such as the multiresolution wavelet-based approach formulated by Hong [3,4,5,6,7,8]. A fourth class of prominent tracking algorithms comprises the multiple model estimator (MME) approaches developed by Maybeck [9]. A fifth class, derived from the Generalized Pseudo Bayesian estimator, comprises the interacting multiple model (IMM) approaches to target tracking [2,10], which are designed to capture target kinematic maneuvers. Combinations of the above approaches are available, such as the IMMJPDAF [2] and the IMPDA [11]. Appendix A shows a comparison table of the most popular tracking methods [12]. A novel approach, presented in this dissertation, is to use target-ID information in a multiresolution data-association algorithm to capture target maneuvers and to increase track quality in high-density or cluttered environments.

Multitarget tracking in the presence of clutter has been investigated through the use of data association algorithms [2,13,14] such as the joint-probability data association (JPDA). Likewise, multisensor fusion algorithms have focused on tracking targets in clutter from multiple look sequences, such as the multiresolution approach formulated by Hong [15]. One inherent limitation of current tracking algorithms for clutter mitigation is that the information used to track targets is based only on kinematic measurements, which necessitates track maintenance [2,16]. One way to augment kinematic trackers to maintain tracks is to let an image analyst select targets to update tracks [17]. An image analyst chooses features from the target to determine the target type. Recently, algorithms have been proposed that simulate these ideas, called feature-aided tracking. Feature-aided tracking uses target features, such as high range resolution (HRR) radar features [18], to help discern targets in the presence of clutter. However, feature-aided algorithms are sequential in nature and do not capture an image analyst's ability to simultaneously track and identify targets. A human typically processes a belief or a hypothesis of the target type to update track information. MHT generates hypotheses of target kinematics and could be expanded to create hypotheses over target types versus clutter. However, the expansion of the kinematic and target-type hypotheses could quickly grow to an unmanageable number. In order to implement the MHT algorithm, hypothesis management is typically performed by pruning unreasonable hypotheses. As an example, Farina has numerous techniques for processing radar measurements [19] with various MHT pruning techniques; however, hypotheses are pruned, not cluttered measurements.
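
For reference, the kinematic validation gate that JPDA-style data association relies on (the validation region V and chi-square gating listed in the List of Symbols) can be sketched in a few lines of Python; the covariance and gate values below are illustrative, not taken from the dissertation:

    # Hedged sketch of chi-square measurement gating for data association.
    import numpy as np

    def in_gate(z, z_pred, S, gate=9.21):  # 9.21: 99% chi-square point, 2 dof
        """Keep a measurement only if its normalized innovation is in the gate."""
        nu = z - z_pred                          # innovation
        d2 = float(nu @ np.linalg.solve(S, nu))  # squared Mahalanobis distance
        return d2 <= gate

    S = np.diag([4.0, 4.0])                      # innovation covariance (m^2)
    print(in_gate(np.array([102.0, 51.0]), np.array([100.0, 50.0]), S))  # True

Measurements that fall outside every track's gate are treated as clutter or new tracks, which is precisely where kinematic-only association breaks down and ID evidence can help.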

Another tracking method that captures the nature of hypothesis generation to discern targets from clutter is the set-theoretic tracker. A few fusionists, including Mahler [20] and Mori [21], use a set-theoretical approach as part of a unified data fusion theory [22]. One of the advantages of using the set-theoretical paradigm is that the set of information (i.e., the number of targets) can be controlled; however, the set of targets needs to be identified as members of the selected set. Using a set-theoretic approach, instead of generating hypotheses that are only to be pruned later, an intelligent method of hypothesis generation is established by controlling the set of information. The set of information can be processed to obtain target-type beliefs, plausibilities, and certainties of moving targets. Beliefs represent target-ID states which can be predicted, estimated, and filtered [23]. Additionally, by using a target feature-set or target attributes [24], implausible targets as classified from a feature set can be removed to reduce the number of target hypotheses in a dynamic scenario. Plausibilities help to maintain multiple target-IDs without declaring a single target identity. Further reductions in computations are exhibited if a simultaneous tracking and ID algorithm is designed. As a summary, Figure 2.1 shows some of the prominent groups working in the field of multitarget tracking with advantages (adv), limitations (limit), and references (ref).

[Figure 2.1 compares: multiresolution algorithms (Hong); Dempster-Shafer and modified DS evidential algorithms (Shafer) with set-theory mappings to Bayes (Mahler); recursive belief filtering algorithms (Blasch); Mori's set-based algorithm with Poisson target models; track/ID MME algorithms (Layne); and tracking algorithms such as the JPDAF and IMM (Bar-Shalom), each annotated with advantages and limitations.]

Figure 2.1. Tracking and ID Literature.

Since the military has funded much of the work in multitarget tracking, many researchers have worked specifically on multitarget radar tracking, such as Blackman [24,25] and Farina [26]. Other researchers using radar for target tracking applications include Blair [27], Bogler [28], Bar-Shalom and Li [29], Layne [30], Odom [31], and Washburn [32]. While each of these researchers has utilized radar measurements, none has explored a set-theoretic approach to radar target tracking. For instance, Bar-Shalom uses the peak signal return for obtaining measurement updates, but does not use the radar for classification [2]. A radar can operate in a variety of modes, but the dissertation focuses on MTI and HRR radar target tracking.
2.1.1 Radar Tracking

High Range Resolution (HRR) radar is distance and weather invariant and can obtain a signature

from a moving target. Three HRR tracking and classification algorithms have been proposed. In 1996,

Stone [33], of Metron Corp., proposed a non-linear Bayesian likelihood ratio tracker (LRT) using data

fusion to control the number of targets and compared it to the MHT algorithm. Following this work,

Metron, applied the approach to shipboard HRR automatic target recognition (ATR) [34]. For the second

method, Jacobs and O’Sullivan added diffusion tracking to their Bayesian HRR ATR algorithm and

computed joint likelihood probabilities [35] with applications [36]. Kastella has adapted the work of

Jacobs and uses scatter-centering models for a nonlinear joint tracking and recognition algorithm based on

joint probability density functions, but much of his work is simulated [37]. Kastella and Musick have

used a joint-multiprobability algorithm, or JMP, to associate classifications and track updates [38].

Another approach which is based on the Kalman filter and seeks to track and recognize targets is that of

Libby, who was a student of Maybeck [39,40]. Libby’s approach is rigorous, where he couples behavior

(state dynamics) and appearance (syntactic observable quantities) through a multiple model estimator

(MME), but has only simulated the HRR identification performance and has not included tracking

information. The third HRR tracking algorithm is that of Layne [41,42] - an automatic target recognition

and tracking filter (ATRF) – which is an interactive multiple model (IMM) approach for HRR signatures.

Layne extended Libby’s work for HRR signatures, including the classification and identification in cases

where a different target type exists for each track (i.e. M1 tank). In the ATRF, there is a tight coupling of

pose-aspect information between the tracking and recognition system.

The LRT, JMP, and ATRF tracking and classification approaches, although influential in this

work, rely on the Bayes’ rule for identification where the most probable target is selected. A limitation of

using a Bayesian analysis is that it does not capture incomplete knowledge. For instance, there are times

when unknown targets might be of interest that are not known at algorithm initiation. At other times,

there is an unknown number of targets to track or targets not trained for classification. Another limitation

of these approaches is that they are for single target tracking. We seek to expand on these tracking and ID

algorithms for HRR signatures, by allowing for the capability to discern unknown relevant targets, reject

non-plausible targets, and retain targets that lie outside a kinematic gate but within a classification gate in

a multitarget tracking environment with clutter. Similar to the ATRF [41], we can use the pose estimate

from the kinematic tracker, and further refine the track-pose estimate with the HRR belief-pose update.

The method used to refine the track-pose, reduce track-association uncertainties, and pick out the target

from clutter is a set-theoretical belief filtering approach which enhances track quality. For instance,

incomplete knowledge includes observed unknown targets, plausible targets, and partial observations. The

novel belief filtering approach accounts for association uncertainties between ID and tracking, is a

multiresolutional set-theory approach, and would be the first of its kind applied to real data. The real data

we use is HRR measurements augmented with simulated MTI hits for multitarget tracking.

In multiple target tracking, there are methods of tracking and recognition and tracking and

detection [43,44]. In the case that the MTI sensor only gives position updates, we assume that the HRR

evaluates belief-ID information to confirm MTI position measurements similar to a signal-detection

threshold to optimize tracking performance. Identification of a target is more than detection or

recognition; it is the acknowledgement that a specific target is a unique member of a specific target-type

(i.e. T-72) [45]. Novel work in the area of HRR classification and identification was performed by

Mitchell and Westerkamp [46] in developing a robust HRR Statistical-Feature Based Classifier (STaF).

The STaF algorithm is robust in decision making and performed better than a quadratic classifier. Using

HRR peak amplitudes from the signal, they showed the advantage of the algorithm at high declaration

rates while maintaining a low probability of misidentifying unknown target classes [47]. For multiple-

target HRR classification in clutter, the STaF algorithm provides robust detection of HRR target profiles.

Using the advantages of the STaF algorithm for identifying targets may lead to robust multitarget

tracking.

2.1.2 Tracking and Classification Algorithms

Multitarget classification and tracking is not a novel concept, since humans perform this

function naturally in normal human vision. Some researchers have tracked objects with images such as

Bar-Shalom [48] and Blasch [49,50], and there are many others. Some have used acoustic sensors [51]

and infrared sensors [52]. Efe has an algorithm for maneuvering targets [53]. Of the many approaches

and sensors that might inherently give information on tracking and classifying targets, three general

groups are utilizing HRR radar measurements for tracking and classifying/identifying targets and were

listed in Section 2.1.1.

To obtain a target ID, we use an HRR sensor which is fused with MTI information to increase

track quality. Multisensor fusion of tracking and ID information helps to eliminate false alarms [54].

Principles of data and information fusion, such as in Bayesian and Dempster-Shafer fusion, can be found

[55,56,57,58] and many articles overview fusion techniques [59,60]. There are similar works in the

robotics community where targets are tracked from stationary image sensors with rotating charge-

coupled devices (CCDs) [50] or moving mobile robots tracking stationary images [49] to isolate a target in

clutter. Other tracking algorithms using imaging sensors are prevalent in the literature [61,62], but no

formal method for simultaneous tracking and ID has been proposed.

Many tracking algorithms utilize information from multiple sources at a variety of resolutional

levels, and the results have proven effective for distributed filtering of multiresolutional signals [63,64]. The

ability to process the measured signals at a variety of resolutions is applicable to many situations, e.g.

surveillance systems, where the observer could choose to look at the fine details at the highest resolution

or a coarse control at the lowest resolution. For this dissertation, we use a multiresolutional approach

where coarse measurements are the kinematic track information from a moving target indicator (MTI)

and the fine measurements consist of the HRR feature measurements. This dissertation develops the

equations for a belief-probabilistic feature-aided simultaneous tracking and ID algorithm in a set-theoretical

approach.

2.2 Set Theory Algorithms

Many people have investigated the basic properties of the evidential theory derived by Dempster-

Shafer [65,66] where traditionally, evidential reasoning is applied to decision making [67]. Evidential

reasoning is applicable to dynamic-control processes, yet the application of evidential reasoning for radar

tracking is non-existent in the literature. Three topics of discerning control information from uncertain

knowledge are: (1) the use of evidential theory in decision making and set methods of fuzzy logic and

expert systems which can be mapped into an evidential-belief system [68,69], (2) analysis of Bayesian

tracking and multisensor fusion methods which can be enhanced by the use of an evidential system to deal

with unknown target types [70], and (3) evidential target ID theories applied to HRR and SAR radar

analysis. From the literature review, no simultaneous tracking and ID multisensor-multitarget algorithm

for unknown targets has been proposed (many implement tracking and ID sequentially for single-

target ID), especially one that uses an evidential approach.

There are a few papers that propose a set theory for tracking and multisensor fusion. The first, in

1986, was Mori et al. [71], who examined a multitarget approach using a Poisson process to model the

arrival time-event measurements. Mori’s algorithm effectively controls the set expansion of possible

targets. In 1997, Mori updated his work to include random sets for data fusion problems [72,73]. In

1992, Goodman worked with fuzzy sets to update probabilistic information for tracking fusion [74] and

many others have implemented fuzzy sets such as Farooq [75]. Mahler [76] presents a unified data fusion

set theory approach that includes Dempster-Shafer evidential reasoning, fuzzy systems, expert systems,

and information theory. While Mahler overviews the main categories for set theory approaches for his

Unified Evidence Accrual Data (UNEAD) fusion, no practical implementation of the work has been

published. Finally, since set-theoretic fusion is a class of data fusion, the reader is referred to a recent

book, Mathematics of Data Fusion [77]. Figure 2.2 illustrates how set theory is its own type of tracking

algorithm as a correlation-free paradigm. The aspect of data fusion of interest to the dissertation work is

that of multisensor evidential ATR fusion.

[Figure: a taxonomy of the multi-object state estimation (tracking) literature. Correlation-based approaches and algorithms: recursive algorithms with scan-wise optimization (nearest neighbor [Sea]; 2-D optimal assignment [Jonker]; probabilistic data association [Bar-Shalom]; multiple hypothesis filtering [Reid, Mori]) and batch processing algorithms (0-1 integer programming [Morefield]; (primal-dual) relaxation method [Washburn]; multiresolution [Hong]). Correlation-free approaches and algorithms [Barlow, Kastella, Mahler, Mori].]

Figure 2.2. Multi-Object State Estimation Literature.

2.3 Multisensor Evidential Fusion for ATR

Multisensor/Multitarget tracking is a subset of information fusion. Prominent books on

information fusion list the Bayesian and Dempster-Shafer (DS) evidential approaches [78,79]. Typically,

fusion researchers have been concerned with target ID without taking advantage of tracking information

for multisensor-multitarget fusion. Bogler is an example of many who have applied evidential reasoning

to target identification [80] as well as Dillard for a multisensor system [81]. Hong has formulated a

recursive method for evidential reasoning [82], belief functions [83], and confidence values [84], yet has

not applied his techniques to real radar data including measurement uncertainties associated with data

collection. Hong has applied his recursive methods to simulated air target ID [70] and Blasch has applied

them to ground target ID [85]. The previous articles on recursive set-theory multisensor approaches need to

be compared with articles relating to HRR automatic target recognition approaches.

2.4 HRR Automatic Target Recognition Approaches

While computer vision and pattern recognition theory encompass many image processing

algorithms, the approaches are typically applied to high contrast video images [86,87]. The classification

of objects using the clustering approach for images is useful for synthetic aperture radar (SAR) image

segmentation [88]. MSTAR researchers have applied a variety of approaches to SAR ATR where SAR is

a form of coherently integrated radar profiles that includes more preprocessing and more radar scans than

HRR to form an image (see these books for further details [89,90]). One approach that is showing

promise for SAR ATR is a mutual information technique demonstrated by Choi, from Alphatech

(unpublished), and Blasch. Blasch has applied the technique to SAR data sets [91,92,93]. From these

methodologies, researchers exploit SAR imagery for target tracking with position cues from an MTI [94].

One of the drawbacks of the research, however, is that SAR imagery is ONLY for stationary targets,

while HRR is for moving targets. SAR and HRR are collected from the same radar antenna. A tradeoff is

necessary to assess transitioning between the radar mode processes [95]. Since SAR includes more radar

scans than HRR, HRR profiles can be extracted from SAR data [96]. The goal is thus to achieve SAR-like

ATR from moving targets using only HRR profiles.

Recently, HRR ATR Techniques were featured at the 1999 SPIE AeroSense Conference with

papers from Layne on MME tracking and ID [42], Shaw on eigenvalue templates [97], Westerkamp on

Bayesian Confidences [98], Bhatnagar on MSE Bayesian belief functions [99], and Williams and Gross on

moving HRR ATR2 [100]. A highlight of the current technology is that target recognition of ground

moving targets is performed by extracting information from the HRR profile [101]. To date, the group

has utilized the STaF algorithm by Mitchell, has outperformed basic Minimum Squared Estimate

techniques, and has alluded to incorporating tracking information into their HRR target ID algorithms.

Hopefully, this research complements the group’s HRR ATR by demonstrating a simultaneous tracking

and identification of ground moving targets.

The set-theory approach to HRR target classification was proposed by Mitchell and Westerkamp

[102,103,104], termed the Statistical Feature-based Classifier (STaF), and is influential in this work. In

addition, Blasch presented a feature-based approach for SAR [105] and HRR target identification [106]

which is based on the Dempster-Shafer approach. In both cases, classification is performed using a set of

features and a set of targets. To summarize the literature overview as related to the dissertation, we

integrate the previous information for HRR-set theoretical tracking and ID.

2.5 HRR Set Theoretical Tracking and Identification

While much work has been done in tracking, set theory, and multisensor fusion, the author is

unaware of an application of set-based HRR radar tracking and ID. In addition, no work has been

performed emphasizing track ID confidences to improve track quality nor set-based algorithms to

simultaneously track and identify moving HRR targets. The authors working with HRR profiles for

tracking and ID have different techniques to fuse continuous tracking kinematics with discrete target type

from HRR range-bin features such as: Layne [42], Stone [34], Kastella [37], and Jacobs [36]. The

2
Moving HRR ATR assumes a pose from the MTI tracker to help with target classification and identification.

development of the belief filter for fusing discrete HRR-ID features and continuous kinematic set

information is the mathematical contribution of the dissertation work to the literature. In general, the

work advances tracking, sensor fusion, and HRR ATR technology.

The dissertation develops a feature-based belief filter for simultaneously tracking and identifying

targets. The belief filter leverages evidence accrual for tracking and ID based on a feature-based approach

for HRR tracking and the STaF algorithm for HRR classification. The STaF algorithm is a classification

application of evidential reasoning, but we wish to reformulate the algorithm to reflect the Dempster-

Shafer approach. In order to demonstrate how evidential reasoning can be used for HRR target tracking,

we will develop a belief filter for a dynamic tracking environment. The belief filter extends the STaF

algorithm for robust HRR profile tracking by choosing a plausible set of tracks and targets made available

to the tracking algorithm at each time instant. The tracking algorithm determines from the set of targets

and the set of position hits how many and which targets associate with which tracks. Additionally, by

eliminating targets that are not plausible reduces the track number and the set of trained HRR profiles

from which the classification algorithm must search. Chapter 3 provides background information on

sensor fusion, the Dempster-Shafer method, and the STaF algorithm.

CHAPTER 3 BACKGROUND INFORMATION

In this Chapter, we overview some of the background concepts of the Joint Belief-Probabilistic

Data Association (JBPDA) and the Set-Based Data Association (SBDA) approaches. Topics covered

include a standard fusion model, set theory, and how to process the high range resolution (HRR) profile.

For the fusion model, Chapter 3 overviews the Joint Director of Labs (JDL) model. For the set theory

approach, Chapter 3 describes the basics of the Dempster-Shafer theory and includes an example of track

and ID fusion. For the HRR profile analysis, Chapter 3 presents the basics of the Statistical Feature Based

(STaF) classification algorithm of Mitchell and Westerkamp [46].

3.1 Sensor Management and Sensor Fusion

Sensor management is the process of controlling multiple sensors to resolve ambiguities about

multiple, possibly unknown targets. Controlling multiple sensors to track and ID a single object is similar

to tracking and identifying multiple objects. For multiple sensors and multiple targets, a strategy must be

employed to allocate sensors to targets. Consider, for example, a single controlled sensor – e.g. a tracking

radar – as it attempts to follow a single target. The radar must adjust its azimuth, elevation, and focal

length in such a way as to anticipate the location of the target at the time of the next position hit/HRR

signature collection. Using successive position hits from the radar, one track is formed. Next, consider

the case of multiple sensors and multiple objects. The sensor manager must allocate specific sensors to

specific targets.

A target-tracking problem is a standard problem in control theory. The sensor as well as the

target have a time-varying state vector. The sensor-management problem is solved by treating the sensor

and target as a single system whose parameters are estimated simultaneously. If we define a state vector,

associated with the target, and a reference vector, associated with the radar, a tracker can keep track of

the distance and pose between the state and reference vectors. If the target moves (e.g. change in state),

the sensor manager must move the radar to follow the target. In this dissertation, the state vector includes

the kinematic, pose, and ID state for each target track. The reference vector is kinematic, pose, and ID

relative to a fixed sensor. The sensor manager will process measurements for each track, determine how

many targets exist, and control sensors to obtain measurement information. As an example, an MTI

system can cue an HRR sensor. Sensor management is only one level in a sensor fusion model.
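To make the composition of these state and reference vectors concrete, a minimal Python sketch is given below; the class and field names (TrackState, id_beliefs) are illustrative conventions of the sketch, not the dissertation's notation.

```python
from dataclasses import dataclass, field

@dataclass
class TrackState:
    """A sketch of the per-track state vector described above: kinematics,
    pose (aspect angle), and discrete target-ID beliefs."""
    position: tuple                                  # (x, y) ground position
    velocity: tuple                                  # (vx, vy)
    pose: float                                      # aspect angle, degrees
    id_beliefs: dict = field(default_factory=dict)   # target type -> belief

# The reference vector of a fixed sensor can reuse the same structure, so the
# tracker maintains the distance and pose offsets between the two vectors.
sensor_reference = TrackState(position=(0.0, 0.0), velocity=(0.0, 0.0), pose=0.0)
```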

One of the prominent information fusion strategies is that of the Joint Directors of Labs (JDL)

fusion model of Steinberg, Bowman, and White [107]. The JDL model consists of five modules, and is

shown in Figure 3.1.

[Figure: the revised JDL model. Data sources (Intel, SIGINT, IMINT, ELINT, EW, sonar, and radar modes HRR, SAR, MTI) feed distributed Level 0 pre-processing; Level 1 object refinement, Level 2 situation refinement, and Level 3 threat refinement feed a human computer interface; Level 4 process refinement (the sensor/info manager) and a database management system (support and fusion databases) close the loop.]

Figure 3.1. JDL Sensor Fusion Model.

The processing of each level is as follows:

Level 0 − Sub-Object Data Assessment: estimation and prediction of signal/object observable states on the

basis of pixel/signal level data association (e.g. HRR feature collection);

Level 1 − Object Assessment: estimation and prediction of entity states on the basis of observation-to-

track association, continuous state estimation (e.g. kinematics) and discrete state estimation (e.g. target

type and ID);

Level 2 − Situation Assessment: estimation and prediction of relations among entities, to include force

structure and force relations, communications, etc. (e.g. multiple target movements);

Level 3 − Impact Assessment: estimation and prediction of effects on situations of planned or estimated

actions by the participants; to include interactions between action plans of multiple players (e.g.

assessing target types at specific locations); and

Level 4 − Process Refinement (an element of Resource Management): adaptive data acquisition and

processing to support mission objectives (e.g. sensor management control of MTI and HRR

measurements).

The fourth module is that of sensor management, or the control of sensors as highlighted in

Figure 3.2. In this case, HRR sensor management is a function of the time to collect a signature from

radar scans, where to point the sensor for target estimation, and ATR correlation to a database. The sensor

manager must utilize information from the other JDL levels to 1) track and anticipate the next state

position of the target, 2) obtain an HRR scan and ID the target with a pose estimate, and 3) activate

the radar mode.

Figure 3.2. Tracking and ID Sensor Fusion Model.

For the dissertation work, we assume that the incoming sensory information is from the MTI and

HRR sensors, which are two radar modes. To illustrate the situation, we show in Figure 3.2 how the JDL

model applies to the HRR tracking and ID problem. In the scenario, a pilot approaches a set of targets

consisting of targets of interest and clutter (decoys). MTI, HRR, and SAR data are preprocessed and sent

to the object assessment processor. Situation and impact assessment include target tracks and target IDs.

Finally, the sensor manager chooses which set of data to collect next.

HRR tracking is complicated since the sensor manager would have to know when to activate the

HRR radar mode to ID a moving target. If the target is moving (rotating) and stopping, it is difficult to

process target dynamics, since we would have to know the target’s velocity. Figure 3.3 shows the problem

of moving and stopping targets. Note, SAR imaging should be used when the target is stopped. Thus, the

problem of HRR tracking can be complicated when a target is maneuvering, such as in a change in

direction, since the SAR mode should be activated.

[Figure: target velocity versus time, alternating between stop and move intervals; SAR is collected during the stop intervals.]

Figure 3.3. Move-Stop-Move Scenario.

One problem inherent in HRR tracking is to assess not only the target ID, but when to activate

the radar mode. By assessing the target velocity, the sensor manager might be able to determine when to

activate the HRR sensor. Additionally, when the HRR is active, it presents the problem of when to

activate the SAR mode. Thus, by simultaneously identifying the target and its kinematic velocity, the

sensor manager can determine when to activate the HRR radar. One of the main planning issues for HRR

or SAR activation is where to point the radar such as obtaining kinematic information from the MTI.
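A sketch of this mode-selection logic follows (our illustration; the speed threshold v_stop is an assumed parameter, and a fielded sensor manager would also weigh pointing and timeline constraints).

```python
def select_radar_mode(speed_estimate, v_stop=0.5):
    """Hypothetical rule from the move-stop-move discussion: SAR imaging
    requires a (nearly) stationary target, while HRR profiling requires a
    moving one."""
    return "SAR" if speed_estimate < v_stop else "HRR"

print(select_radar_mode(0.1))   # SAR: the track has (nearly) stopped
print(select_radar_mode(8.0))   # HRR: the track is moving
```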

Figure 3.4 shows that tracking and ATR are central to the fusion system and that a host of sensor

fusion issues surround tracking and ID/ATR algorithms, such as resource management, sensor

management, estimation, and correlation of data association.

[Figure: nested elements of sensor fusion: data fusion contains information fusion, which contains sensor fusion (ATR, correlation, tracking, estimation), sensor integration, and sensor, collection, and resource management.]

Figure 3.4. Elements of Sensor Fusion.

The key issue of sensor fusion is data estimation and association to plan and control sensors.

Basically, a sensor manager has to know what is happening to determine how to act. Association is key to

determining when to activate the HRR sensor and relies on estimation information. The key problem is

how to perform data association simultaneously as shown in Figure 3.5. Additionally, Figure 3.5 shows

that the use of HRR is a function of estimation and association which is further explored in the

dissertation. To perform data association, a set-based data association will be used for simultaneous target

association and track estimation.

[Figure: estimation answers "what is happening" (where and how many targets, when to get an HRR signature, which target is tracked and identified) and, with association, feeds planning (which HRRs to activate) and control of information collection.]

Figure 3.5. Radar Planning and Control.

3.2 Set Issues

An approach to the multisensor, multitarget tracking problem becomes evident if we use the

random set approach [109] to reformulate the track and ID problem as a simultaneous global-sensor,

global-object single-set system. In this case, the “global” sensor follows a “global” object (some of whose

individual objects may not be detected). The motion of the multitarget system is modeled using a global

Markov transition density and can be extended over target beliefs. The problem is how to define

analogs of the state and measurement vectors. The global densities need to be developed from

probability density functions of discrete and continuous data. Another approach for global information

management is mutual information of the state and measurement information. The next section

describes the set theory approach used in the belief filter for discrete data and the continuous

mathematics are developed in the JBPDA and SBDA formulations in Chapters 5 and 6 respectively.

3.2.1 The Dempster-Shafer Set Theory

The Dempster-Shafer (DS) theory of evidence was devised as a means of dealing with imprecise

evidence [79,78,108]. Evidence concerning an unknown target is represented as a nonnegative set

function m : P(U) → [0,1], where P(U) denotes the set of subsets of the finite universe U such that m(∅) =

0 and Σ_{S⊆U} m(S) = 1. The set function m is called a mass assignment and models a range of possible

beliefs about propositional hypotheses of the general form P_S ≜ “object is in S”, where m(S) is the weight

of belief in the hypothesis P_S. The quantity m(S) is usually interpreted as the degree of belief that accrues in

S, but to no proper subset of S. The weight of belief m(U) attached to the entire universe is called the

weight of uncertainty and models our belief in the possibility that the evidence m in question is completely

erroneous. The quantities

$$\mathrm{Bel}_m(S) \triangleq \sum_{O \subseteq S} m(O) \qquad (3.1)$$

$$\mathrm{Pl}_m(S) \triangleq \sum_{O \cap S \neq \emptyset} m(O) \qquad (3.2)$$

are called the belief and plausibility of the evidence, respectively, and m(O) is the mass assignment for

object O. The relationships Belm(S) ≤ Plm(S) and Belm(S) = 1 – Plm (Sc) are true identically and the

interval [Bel_m, Pl_m] is called the interval of uncertainty, where the interval of certainty is (1 − [Bel_m, Pl_m]),

which can be used as a confidence measure. The mass assignment can be recovered from the belief

function via the Möbius transform [109]:

$$m(S) \triangleq \sum_{O \subseteq S} (-1)^{|S - O|}\, \mathrm{Bel}_m(O). \qquad (3.3)$$

The set intersection quantity

$$(m \oplus n)(S) \triangleq \frac{1}{1-K} \sum_{X \cap Y = S} m(X)\, n(Y) \qquad (3.4)$$

is called Dempster’s rule of combination, where $K \triangleq \sum_{X \cap Y = \emptyset} m(X)\, n(Y)$ is called the conflict

between the evidence m and the evidence n.

In the finite-universe case, the Dempster-Shafer theory coincides with the theory of independent,

nonempty random subsets of U (see [108,110,111]; or for a dissenting view, see [112]). Given a mass

assignment m, it is always possible to find a random subset Σ of U such that m(S) = p(Σ = S). In this case,

Bel_m(S) = p(Σ ⊆ S) = β_Σ(S) and Pl_m(S) = p(Σ ∩ S ≠ ∅) = ρ_Σ(S), where β_Σ and ρ_Σ are the belief and

plausibility measures of Σ, respectively. Likewise, we can construct independent random subsets Σ, Λ of U

such that m(S) = p(Σ = S) and n(S) = p(Λ = S) for all S ⊆ U. Then, it is easy to show that

$$(m \oplus n)(S) = p(\Sigma \cap \Lambda = S \mid \Sigma \cap \Lambda \neq \emptyset) \qquad (3.5)$$

for all S ⊆ U [109]. Thus, an intersection of overlapping sets can be fused to generate a global set

confidence, where confidence is defined on the range (1 - [Belm,Plm]) and uncertainty is [Belm,Plm]. As

an example, the next section provides a track and ID example to demonstrate the set theory approach.
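For concreteness, a minimal Python sketch of Dempster's rule (3.4) is given below; the encoding of focal sets as frozensets and the helper name dempster_combine are conventions of the sketch, not of the dissertation.

```python
def dempster_combine(m, n):
    """Dempster's rule of combination, Eq. (3.4). The mass assignments m and
    n are dicts keyed by frozensets (the focal sets); returns the fused mass
    assignment and the conflict K."""
    fused, conflict = {}, 0.0
    for X, mx in m.items():
        for Y, ny in n.items():
            S = X & Y
            if S:                                   # nonempty intersection
                fused[S] = fused.get(S, 0.0) + mx * ny
            else:                                   # contributes to the conflict K
                conflict += mx * ny
    return {S: v / (1.0 - conflict) for S, v in fused.items()}, conflict
```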

3.2.2 Dempster-Shafer Set Theory Example – Belief Filtering for Track and ID

3.2.2.1 Track and ID Beliefs for Object ID

As an example, consider the following situation (adapted from [113]). Three object beliefs are

determined from the ID algorithm. The group of measurements available forms the set of object

measurement events:

E = {O1, O2, O3}

The ID system expresses its determination over the Frame of Discernment, which is a power set of E:

2E = {∅, {O1},{O2}, {O3}, {O1 O2}, {O1 O3},{O2 O3}, {O1 O2 O3}}

The object IDs express the portion of the total belief that is committed specifically to

each of these sets of events. For instance, the ID algorithm is unsure if the HRR signature is {O1 or O3} or

{O1 and O3}. These numeric values are called mass probabilities, since some of them express a belief in a set

of events {O1 O3}, not just an individual event {O1}, and cannot be further subdivided into a belief in each

of the individual IDs contained within these sets. Note that the mass probability of ∅ is always zero

(since this is a false hypothesis): we know that the object is one of the three types, and the sum of

these values is one.

Let us suppose that the ID measuring system, after reviewing the initial set of evidence at the start,

assigns the following mass probabilities to the members of the frame of discernment:

Table 3.1. Object Initial Probabilities from ID System.

Event Mass
O1 is the object {O1} 0.1
O2 is the object {O2} 0.1
O3 is the object {O3} 0.2
Either O1 or O2 is the object {O1 O2} 0.1
Either O1 or O3 is the object {O1 O3 } 0.1
Either O2 or O3 is the object {O2 O3 } 0.3
One of the three must exist {O1 O2 O3 } 0.1
1.0

Using the definitions established previously, the belief that the identity of the object lies in a

particular subset of events can be computed from these mass probability values. For example, the belief

that the object is either O1 or O2 is computed to be

Bel({O1 O2}) = m({O1}) + m({O2}) + m({O1 O2}) = 0.1 + 0.1 + 0.1 = 0.3

A summary of all the computed belief values and the mass probabilities is shown below, where

O is a subset of the Frame of Discernment (the objects in question) and minit is the initial probability

function.

Table 3.2. Initial Probability Mass Functions.

O {O1} {O2} {O3} {O1 O2} {O1 O3} {O2 O3} {O1 O2 O3}
minit (O) 0.1 0.1 0.2 0.1 0.1 0.3 0.1
Bel (O) 0.1 0.1 0.2 0.3 0.4 0.6 1.0

These belief values seem to indicate that O3 is the object, since all the sets that have O3 are

higher than those without it. The complete decision cannot be rendered yet, since the values indicate that

it is a plausible event, but there is no guarantee (i.e. HRR ATR, but no track information to confirm).

As more evidence is collected through measurements, the belief values will change and hopefully identify

a specific object-ID for a given track or some small subset that contains the solution. The belief interval

for the O3 is:

[Bel({O3}), 1 − Bel({O3}^c)] = [0.2, 1 − (m_init({O1}) + m_init({O2}) + m_init({O1 O2}))]

= [0.2, 1 − (0.1 + 0.1 + 0.1)] = [0.2, 0.7]

The width, or range of information (the difference in these values) is the amount of uncertainty

that O3 is the solution, given the measurement evidence. This could also be determined as a confidence

interval.
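These hand computations can be checked mechanically. The short sketch below, using the mass assignment of Table 3.1 encoded as frozensets (our convention), reproduces Bel({O1 O2}) = 0.3 from Table 3.2 and the [0.2, 0.7] uncertainty interval for O3.

```python
# Mass assignment of Table 3.1; focal sets are encoded as frozensets.
m_init = {
    frozenset({"O1"}): 0.1, frozenset({"O2"}): 0.1, frozenset({"O3"}): 0.2,
    frozenset({"O1", "O2"}): 0.1, frozenset({"O1", "O3"}): 0.1,
    frozenset({"O2", "O3"}): 0.3, frozenset({"O1", "O2", "O3"}): 0.1,
}

def belief(m, S):
    """Eq. (3.1): sum the mass of every focal set contained in S."""
    return sum(v for A, v in m.items() if A <= S)

def plausibility(m, S):
    """Eq. (3.2): sum the mass of every focal set intersecting S."""
    return sum(v for A, v in m.items() if A & S)

S3 = frozenset({"O3"})
print(round(belief(m_init, frozenset({"O1", "O2"})), 3))                    # 0.3
print([round(belief(m_init, S3), 3), round(plausibility(m_init, S3), 3)])   # [0.2, 0.7]
```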

Suppose that the system has a chance to include another measurement from the tracking system.

Considering the new information alone, a new belief function m1 is generated:

Table 3.3. Set Probability Mass Functions.

O {O1} {O2} {O3} {O1 O2} {O1 O3} {O2 O3} {O1 O2 O3}
m1 (O) 0.2 0.05 0.1 0.05 0.3 0.1 0.2
Bel (O) 0.2 0.05 0.1 0.3 0.6 0.25 1.0

Using Dempster’s rule to combine these two set of belief values, we form a composite belief

function. The rule will be explained through an illustration. Each of the mass probability

distributions divides the interval [0, 1] into segments.

[Figure: a unit square with the m_ID mass segments ({O1} through {O1 O2 O3}) along one axis and the m_TRACK_update segments along the other; each rectangle has area equal to the product of two masses and corresponds to the intersection of the two focal sets.]

Figure 3.6. Set space for Belief Filter Normalization.

Figure 3.6 shows the combined effect of two measurements from the tracking and ID systems.

The intersection of the probability masses is represented by the rectangles. The filled rectangles

represent the intersections of objects and the empty rectangles represent impossible intersections. The

empty rectangles form a conflict function, where the area sum of the empty intersections is 0.245. When

computing the combined mass probability for the objects, Dempster’s rule normalizes all the

measurement results to account for these empty intersections. In this example, the total area of the filled

rectangles is

(1 − 0.245) = 0.755, which normalizes the combination.

For example, the intersection for object one is:

O1 = 0.02 + 0.04 + 0.02 + 0.03 + 0.02 + 0.03 + 0.01 + 0.02 + 0.006 = 0.195

and the combined evidence is:

Combined Evidence {O1} = 0.195 / 0.755 = 0.258

The illustrated result for O1 is:

[Figure: the same unit square as Figure 3.6, with the rectangles whose focal-set intersection is {O1} highlighted.]

Figure 3.7. O1 Belief Intersections for track and ID.

The following Tables show the mass probability and belief values resulting from this combination

of track and ID measurements.

Table 3.4. Two Measurement (Track and ID) Probability Mass Functions.

O {O1} {O2} {O3} {O1 O2} {O1 O3} {O2 O3} {O1 O2 O3}
m2 (O) 0.258 0.219 0.25 0.105 0.040 0.131 0.030
Bel (O) 0.258 0.219 0.25 0.580 0.484 0.600 1.00

From the simultaneous (sequential in example) measurement, we see that object 1 is now the

suspected object. Over time, the object beliefs, if discernible from the measurements, will be expressed.

The belief ranges or confidence intervals for this example, shown in Table 3.5, demonstrate that object 1

has the least uncertainty.

Table 3.5. Conflict Ranges for the Belief Measurement.

{O1} {O2} {O3}


Bel(m1) [0.100 – 0.400] [0.100 – 0.500] [0.200 – 0.700]
Bel(m2) [0.225 – 0.400] [0.250 – 0.451] [0.250 – 0.451]

For the identification update to the tracking algorithm, we can pass the mean, variance, or range

of beliefs as track-ID variables. The importance of the Dempster-Shafer approach over the other

uncertainty methods is that it can represent “certainty about certainty,” which Bayesian systems cannot

represent. The commitment to a belief in some suspected object O, Bel(O), does not imply that the

remainder belief holds for O’s complement, Bel({O}c). It is instead the case that Bel(O) + Bel({O}c) ≤ 1.

The quantity 1 – [Bel(O) + Bel({O}c)] is the degree of ignorance concerning O.

The major difficulty with this method is complexity, requiring an exhaustive enumeration of all

possible subsets in the frame of discernment in nearly all cases. Additionally, it offers no guidance on

how the mass probability assignments should be computed or how to make a decision from the results.

We use feedback to deal with these issues as follows:

1. The first issue is the number of subsets. Like humans, a minimal number of comparisons needs to

be conducted. For this, we utilize two measurement comparisons for each object (i.e. track and

ID) for a fixed set of objects per track, plus the possibility that the sensor was not measuring

correctly.

2. Initial Assignment of probabilities. As in the previous case, the probability of a binary detection is

assumed for each object with all objects equally likely.

3. Decision Making. When a plausible function is achieved, a result is designated. When an object

belief is high and all evidence is accounted for, we can render a decision.

4. Declaration Confidence. Table 3.5 shows a range of values which corresponds to the confidence

interval in the object-tracking decision. Also, the converse, uncertainty, can be used as a

measurement uncertainty for belief propagations.

3.2.2.2 Track and ID Beliefs for the Number of Objects

One of the key advantages of simultaneous tracking and ID is that the ID algorithm can help the

tracking system to determine the number of objects in the system. As in the last example, the tracking

system helped the ID algorithm determine the correct object ID, given that the tracking algorithm

determined the predicted beliefs based on the estimated target position. Here, we show that the number of

objects, the mean, and variance can be determined from the belief sets.

From the sets of measurements we have:

[Figure: the m_ID_update mass segments on [0, 1]: {O1} 0.1, {O2} 0.1, {O3} 0.2, {O1 O2} 0.1, {O1 O3} 0.1, {O2 O3} 0.3, {O1 O2 O3} 0.1; grouped as 1 target (0.4), 2 targets (0.5), and 3 targets (0.1).]

Figure 3.8. Mass Probability Update for ID System.

[Figure: the m_TRACK_update mass segments on [0, 1] over the same sets; grouped as 1 target (0.35), 2 targets (0.45), and 3 targets (0.2).]

Figure 3.9. Mass Probability Update for Track System.

From these mass functions, we can group the sets of information into the belief in the object

number. We first plot the mass probabilities associated with each set of object numbers and then we group

the sets of information.

[Figure: a bar chart of the ID_update and TRACK_update mass probabilities for each set {O1} through {O1 O2 O3}.]

Figure 3.10. Mass Probability Update for Track and ID System.

[Figure: a bar chart of the ID_update and TRACK_update mass probabilities versus the number of objects (0 through 3).]

Figure 3.11. Mass Probability Update for Number of Objects.

From these mass probabilities, we can determine the mean and the variance for the number of

objects which will be used to determine the combined belief-mean, µ, and belief-variance, or standard

deviation, σ, for the target.

To get the mean for the number of objects, we take the mass-weighted average:

$$\mu_{ID} = \sum_i i\, m_i = 1(0.4) + 2(0.5) + 3(0.1) = 1.70$$

$$\mu_{Track} = \sum_i i\, m_i = 1(0.35) + 2(0.45) + 3(0.2) = 1.85$$

and the corresponding variances over the possible object counts are:

$$\sigma_{ID}^2 = \frac{\sum_x (x - \mu_{ID})^2}{n} = \frac{(1 - 1.70)^2 + (2 - 1.70)^2 + (3 - 1.70)^2}{3} = 0.757$$

$$\sigma_{Track}^2 = \frac{\sum_x (x - \mu_{Track})^2}{n} = \frac{(1 - 1.85)^2 + (2 - 1.85)^2 + (3 - 1.85)^2}{3} = 0.689$$
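The grouping-and-averaging procedure above can be sketched in a few lines of Python; target_number_stats is a hypothetical helper, and the variance follows the unweighted form used in the text.

```python
def target_number_stats(m):
    """Group a mass assignment (dict keyed by frozensets) by focal-set size,
    i.e., the hypothesized number of objects, then compute the mass-weighted
    mean and the unweighted variance over the possible counts."""
    by_count = {}
    for S, v in m.items():
        by_count[len(S)] = by_count.get(len(S), 0.0) + v
    mu = sum(i * v for i, v in by_count.items())
    var = sum((i - mu) ** 2 for i in by_count) / len(by_count)
    return mu, var

# With the Table 3.1 / Figure 3.8 ID masses this returns approximately
# (1.70, 0.757), matching the values above.
```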

Thus, using the set measurement beliefs, we can hypothesize the number of targets over tracks

and object-IDs. In the next section, we overview the STaF algorithm.

3.3 Classification Belief Filter

To classify HRR signatures, we utilize the definitions of the Statistical Feature-based Classifier

(STaF) algorithm including processing of amplitudes and location features of a signature and calculating

the ID uncertainty. The difference between the STaF and the belief filter calculations is in the object

classification. The belief filtering for HRR profiles is similar to the STaF algorithm.

The belief classification algorithm is based on the STaF classifier [46,102,103,104]. The HRR

features, falo, consist of, a, salient peak amplitudes, l, feature peak location, and o the object profile.

Object class hypotheses are defined as the set O = {o1, o2,..., on} for an algorithm trained on n object

classes. The location data are represented by L = {l1, l2,…, ls} and the peak amplitude data by A = {a1,a2

,...,as} for s extracted peaks from an observed object signature, as shown in Figure 3.12. The basic

statistical modeling concept is to estimate the probability that a peak occurs in a specific location lq, given

that the observation is from object or. Further, the probability that the peak has amplitude aq, given that

the peak is at the location lq and that the observation is of object or, must be determined.

[Figure: an HRR profile, magnitude versus range bin (0 to 200), with extracted peak amplitudes (a) marked at their range-bin locations (l).]

Figure 3.12. High Range Resolution Profile, showing amplitudes (a) and range bin locations (l).

The primary estimated feature statistics required to determine these probabilities are the peak

location probability function (PLPF) and the peak amplitude probability density function (PAPDF); see

Mitchell [104] for more details.

The role of the Peak Location Probability Function (PLPF) estimation is to determine the

probability that a peak will be observed in a specific range bin location given that the observation was

from some individual object class or. This probability is estimated from the peak locations of the training

ensemble for each object class. A Parzen estimator with a normal kernel function along the range

dimension is employed to estimate the PLPF [104]. With this function, class probabilities are associated

with peak locations. However, for robust object classification, additional information is needed from the

conditional peak amplitude statistics.

The Peak Amplitude Probability Density Function (PAPDF) [104] uses amplitude statistics

which are conditional on the occurrence of a peak at a specific location and for a given object class. This

estimation approach ensures that the amplitude statistics are based only on the detected features rather

than a specific range-bin location. The form of the amplitude statistical distribution is assumed to be

Normal within a given range bin. While it is known that the magnitude of the signatures has a Rician

distribution, the Gaussian assumption is reasonable if a “power transform" is performed [104]. The

transformation simplifies the problem since the normal PDF is completely specified by two parameters,

the mean and variance. These parameters are calculated for each range bin from the amplitudes of the

extracted peaks of the training ensemble.

Given the PLPF and peak amplitude PDFs, class likelihoods and probabilities are calculated for

a set of features. The feature location likelihoods are found by evaluating the PLPF at a specific feature

location. The amplitude likelihoods are found in a similar way using a mathematical expression for the

Gaussian PDF. The parameters of the Gaussian PDF are the estimated mean and variance terms. The

likelihood that the observed feature amplitude is the result of observing object class or is found by

evaluating:

$$p(a_q \mid o_r, l_q) = \frac{1}{\sqrt{2\pi}\,\sigma_{qr}} \exp\left[-\frac{(a_q - \mu_{qr})^2}{2\sigma_{qr}^2}\right] \qquad (3.6)$$

where µqr and σqr are the conditional mean and standard deviation for peak location lq and a given class

or. Note that this likelihood is conditioned on both the object class and the feature location. The joint

peak location and amplitude likelihood is calculated by multiplying the individual likelihoods,

$$p(a_q, l_q \mid o_r) = p(a_q \mid o_r, l_q)\; P(l_q \mid o_r). \qquad (3.7)$$

From the joint likelihoods a posteriori probabilities are calculated using Bayes’ rule,

$$p(o_r \mid a_q, l_q) = \frac{p(a_q, l_q \mid o_r)\; p(o_r)}{\sum_{u=1}^{n} p(a_q, l_q \mid o_u)\; p(o_u)}\,. \qquad (3.8)$$
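A compact sketch of the per-peak likelihood and posterior chain of Eqs. (3.6)-(3.8) follows; the dictionary encoding of the per-class statistics (mu, sigma, p_loc, prior) is a convention of the sketch.

```python
import math

def peak_posteriors(a_q, mu, sigma, p_loc, prior):
    """Eqs. (3.6)-(3.8) for one extracted peak. mu[r] and sigma[r] are the
    conditional amplitude mean and standard deviation for class r at this
    peak location, p_loc[r] is the PLPF value P(l_q | o_r), and prior[r]
    is p(o_r)."""
    joint = {}
    for r in prior:
        amp_like = (math.exp(-(a_q - mu[r]) ** 2 / (2.0 * sigma[r] ** 2))
                    / (math.sqrt(2.0 * math.pi) * sigma[r]))       # Eq. (3.6)
        joint[r] = amp_like * p_loc[r] * prior[r]                  # Eq. (3.7) times the prior
    total = sum(joint.values())                                    # Bayes normalizer
    return {r: v / total for r, v in joint.items()}                # Eq. (3.8)
```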

One problem associated with the Bayesian probability calculations is that only relative

probabilistic information is considered rather than global information. This is because the Bayesian

equation (3.8) normalizes the probabilities relative to the likelihoods of some set of object hypotheses.

Therefore, only probabilistic information about a class relative to the probabilities of some set of other

classes is given. Consider the case that an observation is from an object for which no statistics are

available. If that observation looks statistically more like one class, say class o1, than the other classes, the

Bayesian probabilities would appear to be very confident that the observation belonged to class o1. In

reality, the likelihood, p(aq lq | o1), may be very low. This decision would result in an error. Therefore, if

it is possible that unknown objects will be observed, then Bayes’ decision alone will not be able to reject

incorrect decisions due to the unknown object. The information required to eliminate these errors can be

obtained from the likelihood values. The inclusion of likelihood information in this algorithm will be in

the form of a belief measure with an associated belief-probabilistic uncertainty. In the next section, we

discuss the confidence function.

3.3.1 Confidence Value Calculation

The inclusion of likelihood information in this algorithm will be in the form of a confidence

measure. The next sections will discuss the determination of the ID confidence, how the individual peak

confidence levels and a posteriori probabilities are used to determine class beliefs, and how to accrue the

class beliefs to obtain an overall track classification decision. The confidence-calculation method is

further refined in the JBPDA and the SBDA to reflect the interval of certainty obtained from

accumulating beliefs and rejecting implausible possibilities. Additionally, the uncertainty is used to

propagate beliefs and not the certainty value; however, the confidence can be used to weight the kinematic

information.

The most complicating requirement with HRR ground target classification is the rejection of

unknown target classes. This problem occurs because it is impossible to train a classifier to recognize

every possible ground target. Therefore, the belief classification algorithm must only make classification

decisions when the statistical confidence is high. The decision confidence measure can be based on the

class likelihoods using PDFs developed for each class. The likelihood statistics are obtained by comparing

the training exemplars with their own statistical model. By incorporating training information, we base

the confidence on past experiences.

The actual likelihood PDFs are estimated using a Parzen estimator with a normal kernel function

[104]. Observing that the likelihoods are class-conditional probabilities, larger likelihood decisions should

have a higher confidence level. A function that mirrors this concept is the cumulative distribution function

(CDF) [104]. For this reason, the CDF of in-class likelihood PDFs is used to determine the decision

confidence3. An example of a likelihood CDF is shown in Figure 3.13, calculated by the cumulative sum

of the PLPFs. At any likelihood x, the CDF evaluates the probability P(p(a,l|or)φ ≤ x), which represents the

decision confidence for a specific pose φ. Note that both the likelihoods and the CDFs are probabilities and

therefore their values are in the range [0,1]. Additionally, P(p(a,l|or)φ ≤ 0.0) = 0 and P(p(a,l|or)φ ≤ 1.0) = 1

represent the minimum and maximum confidence values, as defined by placing the values in a

histogram. The highest confidence is assigned to the probabilities nearest to the mean and is shown in

Figure 3.13.

3
We will use the value of (1 – [Bel, Pl]) range as the measure of confidence.

[Figure: the in-class PLPF-based likelihood PDF and its CDF for Target 1 plotted against likelihood p(a_1 | T); the CDF value at the observed likelihood gives the decision confidence.]

Figure 3.13. In-Class likelihood CDF used to determine Decision Confidence.

To obtain the confidence measure, an object class hypothesis, ohyp, must be made. It is assumed

that the observation belongs to class ohyp, resulting in the in-class likelihood p(aq lq| or)φ. The ohyp

decision confidence for peak q is calculated by evaluating the likelihood CDF at p(aq lq | ohyp)φ, which

represents the confidence, C^r_ohyp(k), that the observed peak q is associated with object class hypothesis ohyp

[104], at each time event k. Therefore, each object hypothesis of interest must have an associated

confidence value for each of the extracted peaks in an observation. Typically this corresponds to

calculating a confidence value for each of the r object classes, or.
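As a rough illustration of this calculation, the sketch below replaces the Parzen-estimated likelihood CDF of [104] with an empirical CDF over the training-ensemble likelihoods; that simplification is ours, not the STaF algorithm's.

```python
def decision_confidence(loglike_obs, in_class_loglikes):
    """Empirical stand-in for the in-class likelihood CDF of Figure 3.13:
    the fraction of training-ensemble likelihood values at or below the
    observed likelihood for the hypothesized class."""
    return (sum(1 for v in in_class_loglikes if v <= loglike_obs)
            / len(in_class_loglikes))
```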

3.3.2 Belief-Hypothesis Confirmation

The classification belief filter simulates the confirmation process people perform by predicting

hypotheses in a frame of discernment, Θ. The frame of discernment consists of a collection of matched

features, Θ = ∪{f1, …,fq}. Only a subset of the entire combinations of features is possible. Thus, the

belief set is a modification of Shafer’s belief functions to only include the a priori trained set of feature

combinations. The probabilistic fusion of extracted features is performed using Dempster’s rule. For

individual peak features, class likelihoods, a posteriori probabilities, and decision confidences from joint

likelihoods are calculated. These statistics are used to develop a set of beliefs for specific object

hypotheses. The confidences weight the class a posteriori probabilities to create a belief in a specific

object class. The beliefs are found using

$$b_k^{o_{hyp}}(o_r \mid a_q, l_q)_{\phi} = C_{o_{hyp}}^{r}(k)\; P(o_r \mid a_q, l_q)_{\phi}\,, \qquad (3.9)$$

where C^r_ohyp(k) is the confidence that the qth peak is associated with the object hypothesis ohyp for time k at

pose φ. Since the confidence is based on a class hypothesis, the beliefs generate a matrix [104], which is

like a covariance matrix for all plausible objects and a single unknown category to capture all unknown

objects. Each column of the matrix is associated with a particular class hypothesis. Additionally, an

uncertainty value is calculated as U^r_ohyp(k) = 1 − C^r_ohyp(k), which completes the hypothesis matrix. Note,

since the sum of the a posteriori probabilities is unity, the sum of the beliefs and uncertainty for any given

hypothesis is also unity,

$$U_{o_{hyp}}^{r}(k) + \sum_{r=1}^{n} b_k^{o_{hyp}}(o_r \mid a_q, l_q)_{\phi} = 1. \qquad (3.10)$$

The generation of the beliefs and uncertainties directly ties the confidence that an observed

feature is associated with an object hypothesis. Thus, a high uncertainty occurs when it is likely that

the observed feature is not associated with the hypothesized object. By using uncertainty information,

unknown object observations are resolved and can be used to determine if the set of objects needs to be

increased. Robustness and confidence are obtained through evidential fusion of the individual feature

decisions. Since the confidence in the system is related to the uncertainty, we will use the uncertainty in

the propagation of beliefs. Additionally, through an association event matrix, we can determine the

validity of the measurement. However, we need to process the feature measurements of the HRR profile.
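The construction of the hypothesis matrix in Eqs. (3.9)-(3.10) can be sketched as follows; for brevity the sketch reuses one posterior vector for every hypothesis column, whereas in general the posteriors are pose- and hypothesis-conditioned.

```python
def hypothesis_belief_matrix(posteriors, confidences):
    """Each column is an object-class hypothesis o_hyp whose confidence C
    weights the a posteriori probabilities (Eq. 3.9); the leftover mass
    U = 1 - C goes to a single 'unknown' entry so that every column sums
    to one (Eq. 3.10)."""
    matrix = {}
    for hyp, C in confidences.items():
        column = {o: C * p for o, p in posteriors.items()}   # Eq. (3.9)
        column["unknown"] = 1.0 - C                          # uncertainty U
        matrix[hyp] = column
    return matrix
```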

3.3.3 Peak Feature Belief Evidence Accrual

For each feature extracted from an observation, a belief hypothesis is generated. Each belief

contains accrued evidence to aid in the acceptance or rejection of each object class hypothesis. The

individual feature beliefs are fused using Dempster's rule of combination [114]. The fusion rule allows for

decision uncertainty and confidence values to be assigned to classification decisions. The uncertainty

between features 1 and 2 for an object hypothesis, assuming that uncertainty values are disjoint, is:

$$U_{1;2}(k) = \frac{U_1(k)\, U_2(k)}{1 - \sum_{j=1}^{n} \sum_{i=1,\, i \neq j}^{n} b_k^{j}(a_1 l_1)\; b_k^{i}(a_2 l_2)} \qquad (3.11)$$

where the denominator normalizes the beliefs. The fused uncertainty can be updated with the new beliefs

by

$$b_k((a_1 l_1)(a_2 l_2)) = \frac{b_k(a_1 l_1)\, b_k(a_2 l_2) + U_1(k)\, b_k(a_2 l_2) + U_2(k)\, b_k(a_1 l_1)}{1 - \sum_{j=1}^{n} \sum_{i=1,\, i \neq j}^{n} b_k^{j}(a_1 l_1)\; b_k^{i}(a_2 l_2)} \qquad (3.12)$$

These belief equations are recursively applied to the entire set of extracted independent features

to calculate the overall class beliefs and uncertainties for a specific class hypothesis given the pose

estimation from the tracking algorithm. Additionally, the unknown element of each belief set is the fused

uncertainty associated with a specific object class hypothesis. The object classification, using feature

information, is made by selecting the beliefs on the diagonal, like a variance-covariance matrix. The

classification is presented to the set-level fuser for object tracking and ID decision making.
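A minimal sketch of this pairwise accrual is given below; the per-class belief dictionaries and the helper name fuse_feature_beliefs are conventions of the sketch. If each input belief set sums to 1 − U, the fused beliefs and fused uncertainty again sum to one, consistent with Eq. (3.10).

```python
def fuse_feature_beliefs(b1, U1, b2, U2):
    """Fuse two per-class feature beliefs (dicts keyed by class) and their
    uncertainties following Eqs. (3.11)-(3.12)."""
    classes = list(b1)
    conflict = sum(b1[j] * b2[i] for j in classes for i in classes if i != j)
    norm = 1.0 - conflict                      # shared denominator of (3.11)-(3.12)
    fused_b = {j: (b1[j] * b2[j] + U1 * b2[j] + U2 * b1[j]) / norm
               for j in classes}               # Eq. (3.12)
    fused_U = U1 * U2 / norm                   # Eq. (3.11)
    return fused_b, fused_U
```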

3.3.4 Peak Feature Belief Evidence Accrual for Tracking

In feature-classification tracking, every feature-set measurement of object, o, in the validation

region of track, t, is evaluated to determine the classification belief Bel_k^t of an object. For our analysis, we

assume detected potential objects are recognized from the MTI sensor/processor and that each object

measurement has an estimated pose. Using the HRR information, we can classify the object-type. It is

assumed that the objects recognized are from a variety of different classes of objects (i.e. a variety of tanks

that form the same class of vehicle). The training of object features is done to assess a sufficient set of

features to recognize the object. Although algorithms exist for solving HRR recognition problems using

Bayesian updates, these algorithms employ probability analysis where the most likely object is selected

[102]. A limitation of using a Bayesian analysis is that it does not capture incomplete knowledge. For

instance, there are times when unknown objects might be of interest that are not known at algorithm

initiation. By employing belief states, based on the Dempster-Shafer [115] which incorporate all previous

hypotheses, the dynamic-detection trained system is converted to a Markov Decision Problem [116] which

augments the measurement association problem to account for incomplete knowledge. We assume that

the MTI cues the HRR to collect all signal features of an object. False objects, containing only partial

signals, would easily be classified as not an object of interest and hence, not really clutter. Another method

might be to collect the HRR features separately and scramble [22,23] the incoming features and then

group them for each object measurement for object recognition, shown in Figure 3.14, but this would

require many permutations of the signal. For this dissertation, we assume the entire HRR profile is

collected and the features are labeled in the sequence measured. Cluttered measurements are HRR profiles

of similar objects. The output of the classification gives a belief in the object type and an associated

confidence value. One way to implement the processing of features versus objects is to process the

information as sets of information (i.e. sets of features and sets of objects).

[Figure: the target belief filter block diagram. For each track m, the measured feature set {f1, ..., fn} is scrambled and labeled; each labeled feature set {f1m, ..., fnm} feeds a per-target belief computation with weights α, producing a per-track confidence C; the confidences are fused with track information to determine the number of IDs and tracks, add new tracks, and prune the plausible target set over time.]

Figure 3.14. Scrambling and Ordering of Features.

3.4 Set Level Fusion – Tracking and Identification Belief Filter

Using the fusion techniques described above, one obtains an object class belief and confidence for

all plausible objects for each track. The decision uncertainty and confidence are used to determine the

quality of the object classification and reject unknown spurious object track observations. The technique

statistically models the uncertainty associated with a correct object class hypothesis for a given object from

an estimated pose. These data are generated by using in-class uncertainty statistics determined from the

training ensemble [104]. These "in-class" statistics are the uncertainties associated with classification of

target class or under the hypothesis that the target is from class or. The “in-class” information is accrued

in the DS algorithm for each time step, allowing for a fused decision confidence. The evidence accrual

becomes important when one needs to reject unknown object observations from any track.
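A hypothetical set-level decision rule in the spirit of this section is sketched below; the threshold u_max and the helper name are assumptions of the sketch, not values from the dissertation.

```python
def accept_track_id(fused_beliefs, fused_U, u_max=0.4):
    """Declare the maximum-belief class only while the accrued uncertainty
    stays below a threshold; otherwise flag the observation as a potential
    unknown object for the track."""
    if fused_U >= u_max:
        return "unknown", fused_U
    best = max(fused_beliefs, key=fused_beliefs.get)
    return best, fused_beliefs[best]
```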

3.5 Chapter 3 Summary

In Chapter 3, we overviewed sensor fusion, Dempster-Shafer theory, and the STaF algorithm. In

the next Chapter, we use these concepts to formulate the problem addressed in the dissertation.

CHAPTER 4 PROBLEM FORMULATION

One problem that typically challenges sensor fusion applications is tracking multiple objects and

correctly identifying objects that are targets of interest. In addition, current tracking applications include

measurements that may be spurious or cluttered. One such example of radar tracking is synthetic aperture

radar (SAR) and high-range resolution radar (HRR) measurements. Researchers have typically focused

only on automatic target recognition (ATR) for the stationary known targets such as the Moving and

Stationary Target Acquisition and Recognition (MSTAR) SAR data set or moving target ATR from HRR

data. Current radar tracking approaches for real-world applications must 1) deal with many unknown

targets and cluttered information, 2) capture ATR and track uncertainty, and 3) provide confidence

updates to users. In this dissertation, we address these issues in a radar tracking and identification (ID)

algorithm.

The proposed belief filter (BF) algorithm of the Joint Belief-Probabilistic Data Association Filter

(JBPDAF) and Set-Based Data Association Filter (SBDAF) may reduce burdens of standard tracking

algorithms which do not adapt quickly to a varying number of targets, only track the target’s center-of-

gravity, and are not adaptive to user’s interests. By assessing target positional and pose features over time

and space, the belief filter maintains plausible tracks and believable identifications of multiple targets.

The belief filter uses a set-based technique to 1) increase track/ID confidence levels on plausible targets,

2) eliminate ghost tracks4, and 3) classify unknown targets. This chapter will formulate the problem

including tracking and ID fusion, HRR and SAR radar processing, limitations of current approaches, and

a detailed problem addressed in the dissertation work.

4.1 Track and Identification Fusion

The ability to perform HRR track and ID requires fusion of kinematic MTI position information,

fusion of HRR features for target classification, and a higher set-level fusion of both the track and

classification information to assign a target-ID to a track. The fusion challenge is that the kinematic

information is continuous while the ID information is discrete. Like multitarget data association

algorithms for tracking targets in the presence of clutter, we assume that detected targets can be tracked

from a sequence of center-of-gravity positional data such as that obtained from a moving-target indicator

(MTI). Also, for a given sensor/target scenario, we assume detected HRR signature features, shown in

Figure 3.12, can effectively be fused to discern target types. Feature-to-target mappings can be achieved

either through training, learning, or prediction. By leveraging knowledge about target types from

training, the target can be identified in real time from the evidential classifications with confidence and

uncertainty measures. However, the confidence is only as good as the training set, the algorithm used, and

the resolution of the collected data. The fusion of classification and tracking information can significantly

reduce processing time by selecting a minimum number of position measurements to investigate, associating position measurements to tracks without cycling through all permutations, and increasing target track

quality5. In addition, correlating kinematic features with classified signatures will allow for identifying

targets at the same time tracking is performed. Fusion of information takes place at either the 1) kinematic

level for target tracking, 2) feature level for target classification, or 3) set level using beliefs for target

tracking and ID. For the fusion process to proceed, we must assume the information is available and able

to discern the target type. Additionally, set level fusion mathematics need to be addressed to obtain

confidence values.

Confidence in tracking is the ability of an algorithm to detect patterns of tracks and use the

confidence measures from the target classification to determine a plausible set of targets to track. To

determine the confidence, we use the pose information of the target, shown in Figure 4.1. Pose

information consists of a depression angle and an aspect angle. The depression angle is related to the

sensor position and aspect angle can be determined from the HRR target profile. The interaction between

the tracking and classification system is through pose. However, there is a resolutional problem based on

the accuracy of the sensed information, when fusing MTI and HRR information based on pose. How we

4 Ghost tracks are defined as a set of miscalculated tracks where no targets exist.
5 Track quality is defined in terms of confidences in target-types and the associated track.
deal with the problem is as follows. The tracker predicts a pose of the target from kinematic information

which we call track-pose. The track-pose cues the classification algorithm to select a set of pose angles

from which to match the observed HRR signature to the trained HRR signature set. The belief in a target

versus the set of targets returns a belief-pose of the targets. If a target belief is not higher than any other

target beliefs, the average pose of all trained poses that match, is the updated belief-pose to the tracker

with a small weight. For example, track pose indicates 000° and the belief poses are 005° and 003°. If

both targets are plausible, 004° is designated as the belief pose to update the track pose. If a target belief is

greater than the rest, its ID and belief-pose are used to update the tracker. The belief-pose updates the track-pose and is further described in Chapter 5 and Chapter 6.

[Figure: pose-depression angle and pose-articulation angle φ of a target.]

Figure 4.1. Pose Angle - Defined.

The HRR profile consists of range-bin features (amplitudes and locations) related to the energy

return from a target. Westerkamp and Mitchell [102], Section 3.3, have shown that the peak features in a

HRR map can effectively classify targets and have developed a confidence measure to update beliefs. A

complication may arise if the set of peak features cannot be isolated for the target. Thus, we modify the

uncertainty information as related to the interval of uncertainty of Dempster-Shafer (Section 3.3) to obtain

an uncertainty for belief updates. The belief-uncertainty is fused with the uncertainty from the tracker.

Since uncertainty information is restricted to a range of pose comparisons, the discernment of the number

and type of targets is only a situational belief. However, as the belief evidence accumulates and the

algorithm reduces the plausible set of targets, target ID uncertainty will decrease and confidence will

increase which can improve track quality by selecting the correct target ID for a given track. In order to

demonstrate how belief-IDs update the tracker, we use the belief-ID measurement to help in data

association and the propagation of uncertainty. Data association is a common tracking technique, but it

warrants further inspection.

4.2 Data Association

In tracking approaches that use data association, there is an assumption that the information for

tracking is provided through position measurements. The problem is that the tracker must isolate the

target of interest from the position hits where the position hits might be from clutter. If position

measurement information is difficult to discern in an actual tracking scenario, the tracker can make an

incorrect assignment of the position measurements to tracks. As an example, Figure 4.2 below shows a

case in which the position measurements cause the tracker to get confused. In this case, object 1’s and

object 2’s position measurements fall within the kinematic gates of both objects6.

[Figure: object 1 and object 2 trajectories over times k-2, k-1, and k, with predicted and updated target locations and kinematic gates; object 1 maintains a good track, while object 2, with detections only, produces a false track.]

Figure 4.2. Data Association Problem with Position Measurements Only.

6 We will refer to objects as designated position hits and to targets as actual measurements that contain a belief in the object that satisfies the threshold criterion.
As we can see from the Figure 4.2 (far left), a kinematic gate can isolate the position

measurements that are near the predicted measurement for each object’s track. In the case that one of the

true measurements falls within the kinematic gate of the predicted position, that measurement would be

designated as the true position measurement. If position measurements from another object fall within the

predicted kinematic gate of the object track (Figure 4.2 middle), the position measurement could be

considered the position measurement of the tracked object which would be an incorrect assignment. Once

the tracker locks on to another object, or uses the position hits of the other object’s clutter, the tracker

assumes that the hits of the second object are true hits for the first object (Figure 4.2 right). One way to

correct for this assignment mistake is to leverage other information, such as the target identity to help

resolve which position measurements are assigned to specific object tracks. For example, we could use a

HRR sensor to ID an MTI position hit by assigning a target-type to the position measurement. To

illustrate how the HRR information may help in data association, Figure 4.3 illustrates the process of how

a target-ID can refine the positional measurement to pick the validated measurement from the cluttered

measurements.

[Figure: object 1 and object 2 trajectories over times k-2, k-1, and k; HRR profile classification and identification refine the detections so that good ID measurements keep both object 1 and object 2 on correct tracks, with predicted and updated target locations and kinematic gates shown.]

Figure 4.3. Data Association using ID and Position Measurements.

As we see from the Figure 4.3, if the identification sensor, such as an HRR radar, can update the

position measurements, it would help to ensure that the correct position measurement was assigned to the

correct track. Thus, both object 1 and object 2 have the correct tracks. HRR is a possible sensor to

perform target ID, but it has some difficulties as well.
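To make the gating and ID-aided assignment concrete, a minimal Python sketch follows. It is illustrative only: the gate threshold, variable names, and belief structure are assumptions rather than quantities from the scenario. It validates position measurements against a track's predicted position with a Mahalanobis gate and then prefers gated measurements whose classifier belief in the track's target type is highest.

    import numpy as np

    def validate_measurements(z_pred, S, measurements, gate=9.21):
        """Keep measurements whose squared Mahalanobis distance to the
        predicted measurement z_pred (innovation covariance S) is inside
        the gate; 9.21 is the chi-square 99% point for 2 DOF (assumed)."""
        S_inv = np.linalg.inv(S)
        valid = []
        for z in measurements:
            nu = z - z_pred                    # innovation
            if float(nu.T @ S_inv @ nu) <= gate:
                valid.append(z)
        return valid

    def pick_with_id(valid, beliefs, track_type, bel_threshold=0.5):
        """Among gated measurements, prefer the one whose belief in the
        track's target type is highest and above a threshold."""
        best, best_bel = None, bel_threshold
        for z, bel in zip(valid, beliefs):
            if bel.get(track_type, 0.0) > best_bel:
                best, best_bel = z, bel[track_type]
        return best   # None: no ID-confirmed measurement in the gate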

4.3 High Range Resolution Radar

Radar systems are effective for surveillance applications due to their range-resolution invariance with distance, all-weather operation, and measurement capabilities. The radar antenna has a tradeoff among MTI, HRR, and SAR modes. HRR radar offers a method for imaging moving targets by extracting energy return

from range profiles. If the target is stationary, a SAR image can be formed, but might take up to 10

seconds, which is lengthy in time-critical scenarios. A sensor mode that has recently received attention is

HRR radar which collects a signature in under 0.5 seconds. In the HRR radar mode, the cross range

resolution is large at long ranges. Due to this lack of cross range resolution, HRR is not useful for

stationary targets because clutter overwhelms the target information. However, if the target is moving, it is

possible to separate the target and clutter in Doppler [117]. Additionally, HRR processing can be

interleaved with ground moving target indicator (GMTI) processing for detection. Figure 4.4 shows how

the SAR and HRR radar modes necessitate a priori knowledge of whether the target is moving or stationary.
Figure 4.4. Detection of SAR Images for Stationary targets versus HRR Data for Moving Targets.

Once an object is detected as moving, tracking information can be used to derive pose angles for

the objects. Tracking information enhances the target ID by reducing the number of unknown variables

associated with the target. Likewise, target ID helps determine the number of target tracks. In addition to

correctly identifying targets with HRR, a target ID system in military scenarios must also avoid incorrectly

identifying unknown targets as known targets. Clearly, the ID system must be allowed to reject ID

decisions rather than be forced to make a target ID. Requiring the system to maintain a high probability of

correct ID while at the same time rejecting unknown targets is a very challenging problem. One way to

combat the problem is by leveraging set fusion, which operates on accumulated beliefs and plausible target

sets from track and ID information and can coordinate HRR tracking and classification for assessing the

correct target ID. Mitchell and Westerkamp showed how this could be achieved for ATR, but tracking

information was not assessed to generate the initial pose estimation. For instance, in the top panel of

Figure 4.5, the association of kinematic measurements allows for the tracking of the T72 and the BMP2 at

each time update of 1 unit. If we only had SAR information with an update of 10 units, shown in Figure

4.5 in the bottom panel, evidence could accumulate for stationary target identity. Robustly tracking and identifying targets remains an unsolved problem; solving it would require integrating the figures at the right of Figure 4.5 for simultaneous HRR tracking and ID.

Figure 4.5. Radar Tracking and Identification.

4.4 Tracking and ID with HRR Information

The difficulty of maintaining an HRR classification belief in a target over time is illustrated in Figure 4.6. Even if the HRR measurement is good over one scan, relying only on MTI position measurements in future scans makes it difficult to discern the targets. Moreover, for crossing targets, the

position information alone might not help associate a measurement to a track. However, when we

initially get an HRR scan, it might not discern the target type. For instance, if the tracker estimates the

pose as 030° and the true pose is 000°, none of the targets would look like the observed HRR signature

plot and all the objects remain plausible.

[Figure: three tracks over times k-1, k, and k+1 with kinematic measurement gating; after only one HRR update the belief Bel = {T72, HMMV, BTR70} is never refined, and each track's belief remains the full set {A, B, C}.]

Figure 4.6. HRR Belief Tracking with only One HRR Signature Update.

To compensate for the confusion, we must be able to identify the target at each point in time and

accrue evidence in the target type as shown in Figure 4.7. Evidence accrual includes both the track and

the ID updates. As shown in Figure 4.7, the HRR profiles cross at the same point in space (from the

MTI). By fusing this information from a tracker with the ID information, we can discern the target from

the clutter or another HRR signature. Additionally, as shown on the right where the measurements

continue to fall within the kinematic gate, belief for each target must be maintained.

[Figure: with HRR signatures at each scan, track beliefs refine over times k-1, k, and k+1 from Bel = {T72, HMMV, BTR70} to Bel = {T72} for track 1 and Bel = {HMMV, BTR70} for the crossing tracks 2 and 3, with kinematic measurement gating maintained.]

Figure 4.7. HRR Belief Tracking with HRR Signatures.

4.5 Limitations of Current Approaches

The literature overview alluded to many of the limitations of standard tracking algorithms and

extensively listed the current work in data fusion, HRR ATR, and tracking approaches that try to utilize

classification information. While all of these articles are influential, none are able to achieve robust

simultaneous tracking and ID of known and unknown targets. The proposed solution is that of a set-

theoretic approach to tracking and ID. Some of the limitations expressed in the previous sections of

Chapter 4 that must be overcome are: 1) set-theoretic tracking information consistency, 2) recursive set

updating, and 3) sound mathematics that lead to a satisfying tracking and ID algorithm. Inherently, the

algorithm will be suboptimal, but we seek a solution that closely approximates an optimal solution. The

optimal solution is that of the MHT algorithm (Section 2.1); however, while MHT is optimal in the

sense that no approximations or simplifications have been made, implementation requires careful

assumptions and thus has never truly been implemented in its pure mathematical form [2]. Therefore,

some have placed bounding restrictions on the algorithm to contain the computational explosion so that it can run in semi-real time [118].

4.6 Proposed Problem

The dissertation addresses a problem not solved in the literature, namely a set-theoretical

classification approach to multitarget-multisensor tracking and ID of HRR radar measurements. While

demonstrating that the problem is unique, we make some assumptions. The first is that the MSTAR

data set is the most contemporary set of HRR data available to the authors. We assume that effects of

scaling, distortion, and scintillation are not evident in the data, yet methods to handle these situations

could be developed at a later time. Additionally, we assume that the HRR data has been collected from

a moving target with a known pose angle. One complication, based on the MTI data, is that the algorithm will work only at speeds amenable to the MTI. For instance, MTI rejection works on the speed

of the Doppler processing. Thus, thresholding might eliminate plausible targets. We assume that the

track and ID sets are available and bounded by the MTI system. With the assumptions alluded to up-

front for the problem, future research and data collection might address these issues, but will not be

covered in the dissertation work.

The proposed problem addressed in the dissertation is to 1) develop a set theory for target

tracking, 2) obtain target classifications from HRR profiles for set intersection, 3) derive a recursive set

approach for target tracking and identification, 4) discern the pose and tracking information from the

MSTAR data set, and finally, 5) demonstrate that the mathematics generate a satisficing, stable solution

(using Monte Carlo runs do to nonlinear measurements). As compared to the MHT algorithm which

hypothesizes about all target tracks and identities, the belief filter is like a belief confirming algorithm

(since we have a hypothesized set of targets), wherein if enough evidence is accumulated for the target

identity from tracking and classification, then the algorithm is confident enough to label the target. “If

it looks like a duck and walks like a duck, then it is a duck”, likewise, “If the tank moves like a T72, is

classified as a T72, and contextual information says that it is not our M1, then it can be identified as a

T72”. There is no need to generate a multitude of hypothesis about the tracks and identities to achieve

an optimal solution, since we are concerned with the “best” estimate for real-time applications.

The multisensor-multitarget tracking and HRR identification (ID) problem is to determine which

measured features should be associated with which kinematic measurements in order to optimize the

probability that the targets are tracked and identified correctly after z measurements. Assume that a

region of interest from multiple measurements mk at time step k of a moving target indicator (MTI) is

composed of o objects7 with f features, as shown in Figure 4.8. Dynamic target measurements $z_k$ are taken at time steps k, which include target kinematics and target-type HRR features $z_k^t = [\,x_k^t,\ f_{1k}^t,\ \ldots,\ f_{sk}^t\,]$

for each track t, where t = 0, 1, …, T, and s is the set of extracted features. A radar sensor can take measurements that are independent in time, and the outcome of each measurement may contain kinematic or feature variables

indicating an object. The probability density of each measurement feature depends on whether the object is

actually present or not. Further assume that an unknown number of kinematic and feature measurements

will be taken at each time step, where we model the clutter composing spurious measurements. A belief in

an object is rendered as to which object feature set $\{f_{1k}^t, \ldots, f_{sk}^t\}$, classified from a comparison to a trained feature set of objects of interest, is associated with which track measurement $x_k^t$.

[Figure: multiscan MTI hits, validated MTI hits, and the HRR profile of target 1.]

Figure 4.8. Multiscan MTI plot with overlaid HRR Hit.

The multitarget kinematic tracking problem is formulated and solved by using concepts from

probability data association (PDA) [2] and HRR classification [46]. Since the standard "Bayesian

association rule" - associate the measurement with the highest probability – may lead to ID errors, we

utilize a robust belief-probabilistic approach to capture track-ID uncertainty. Although it may produce a

7 We will refer to objects as possible targets.
sub-optimal kinematic pose estimate when making decisions, it can increase track quality by fusing the

track and ID information to ensure object-ID robustness and capture incomplete knowledge of

measurements.

4.6.1 Multiobject Tracking

One of the key links between HRR tracking and ID is the ability of a tracking filter to accurately

position the ID sensor and estimate the target pose. Pose information consists of a depression angle and

an aspect angle. We define pose as the heading of the object as stored with the HRR profile and

referenced to a fixed sensor to discern target measurement direction. The HRR profile consists of range-

bin features of known targets and can be trained off-line for an on-line comparison. Westerkamp and

Mitchell [102] have shown that the peak features in a HRR map can be used to accurately classify targets

when multiple targets are in the search space, such as in the case of a MTI plot. We will utilize tracking

information to identify which targets are associated to a set of different tracks. In order to determine

which measurements are plausible, we utilize the belief filtering approach which uses evidential belief

updates to allow for processing unknown target information.

The methodology for set refinement of track information is shown in Figure 4.9. Detection is

based on kinematic measurements to isolate the object location. Based on HRR feature returns from the

object, an algorithm can classify the object into targets of interest, such as a tank. With the classification

information, we can recognize the target type such as the tank of interest, (e.g. a T72). However,

recognition information is complicated by issues of fratricide, so a classifier needs additional information

to identify the target. Measurements from tracks and from identify-friend-foe-neutral (IFFN) sensors would

help identify the serial number of the tank to determine which T72 it is. For example, if a tracker is

tracking three objects, two of which may be the same - two T72s, the tracker would need additional

information to identify which T72 goes with which track. Figure 4.9 shows the levels of processing

where the coarse position measurements give the detection information. The HRR classification is done to

discern the object (i.e. tank from clutter). Further, the classification algorithm can recognize the target

(whether it is in the set of targets of interest). Finally, when an algorithm associates a specific target to a

track, it identifies the target.

[Figure: processing hierarchy. TRACKING: behavior (movement); IDENTIFICATION: detect target, classify object, recognize type (tank), identify (T72), extended to allegiance (friend, foe, neutral) and intent (threat, shoot, articulation).]

Figure 4.9. Multilevel Tracking and ID Approach.

After the tracking and ID information is discerned, a system could extrapolate the information

for battlefield awareness. Battlefield awareness would include allegiance of the target. With allegiances

and tracking information, a system could determine the intent of targets and list the most plausible set that

could generate a threat. Finally, in battlefield awareness, a tracker could predict the movement of the

targets, develop plausible information for convoys, and assess distances to other critical targets. While

the subject of the algorithm is tracking and ID, it could be viewed as a critical necessity for battlefield

awareness. With different levels of fusion, information could be extracted for each user of interest. It is

believed that by setting up the problem in a set theory approach, additional sensory information could be

fused such as mission scenarios and objectives, contextual information, and target terrain information.

Figure 4.10 illustrates the multi-object scenario investigated in the dissertation. It is assumed that the

MTI and HRR measurements are obtained from the same platform and that the MTI cues the HRR radar

to collect object information.

Figure 4.10. Multi-Object Tracking and ID Scenario.

4.6.2 Moving HRR Detection Measurements

Consider an aircraft carrying a single HRR sensor, as shown in Figure 4.10, able to detect targets

like ground moving tanks. Assume that in the region of interest there is a 1-D HRR profile O(a, φ), where a is the amplitude and φ is the pose, composed of L range bins, L = (a)(φ), so that there is a set of a amplitudes for each of the φ profiles. Any bin in the signature can be measured independently of the others, and the outcome of

each measurement is a variable indicating the probability peak magnitude p(a) of the bin. The probability

density of each measurement depends on whether the target is actually present or not. Further assume

that a fixed number of m range-bin measurements will be taken from an observation. Each bin in the signature is then calculated for its conditional p(a|φ) and joint p(a, φ) probabilities. The measurements are

then processed for entropy metrics. Combining these entropy metrics allows for a decision to be rendered

as to which articulation the target is in and its identification. The assumption is that the target type, e.g.

tank, is not known a priori and the entropy information can help classify the target. Articulation and

target type information will further help reference the target for finer recognition and identification

routines. Learned-observation information metrics are considered stored in memory and the ATR

algorithm compares the observed HRR signature measurements to a known database.
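As a rough numerical illustration of these entropy metrics, the sketch below (Python; the binning scheme and count values are assumptions, not the dissertation's processing chain) estimates the joint p(a, φ) from a table of amplitude counts per pose and computes the joint entropy and the mutual information between amplitude and pose.

    import numpy as np

    def entropy_metrics(counts):
        """counts[i, j] = observations of amplitude bin i at pose bin j.
        Returns joint entropy H(A, Phi) and mutual information I(A; Phi)."""
        p_joint = counts / counts.sum()              # p(a, phi)
        p_a = p_joint.sum(axis=1, keepdims=True)     # p(a)
        p_phi = p_joint.sum(axis=0, keepdims=True)   # p(phi)
        nz = p_joint > 0                             # avoid log(0)
        h_joint = -np.sum(p_joint[nz] * np.log2(p_joint[nz]))
        mi = np.sum(p_joint[nz] * np.log2(p_joint[nz] / (p_a @ p_phi)[nz]))
        return h_joint, mi

    # 4 amplitude bins x 3 pose bins of assumed training counts
    counts = np.array([[12, 3, 1], [5, 9, 2], [1, 6, 10], [0, 2, 9]], float)
    H, I = entropy_metrics(counts)
    print(f"H(A,Phi) = {H:.2f} bits, I(A;Phi) = {I:.2f} bits")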

The moving-target detection problem is to determine a minimum necessary number of peaks to

measure and the estimated number of pose comparisons for a correct target ID. The number can be

determined from a salient set of peak features that constitute the target. In addition, the association

between the stored object signatures O(A(a)) and the observation signature Y(a) needs to be compared over

the number of features a. These detection actions should provide the highest belief and isolate the target

information. After M measurements and comparisons to Y observations, a measure of mutual information

will determine the information-theoretic content. If a confidence value is achieved, a preliminary pose-

articulation is determined which allows the classification routine to extract features for initial target type

matching. After matching the target type with the pose-articulation, a fused ATR confidence is achieved.

In order to accomplish the multilevel tracking and ID, we will need to have different levels of

fusion. The fusion levels are kinematic, feature classification, and set level.

4.6.3 Multilevel Sensor Fusion for Tracking and ID

The key attribute of the belief filter algorithms is three levels of information fusion, shown in

Figure 4.11. The first level utilizes kinematic information from the measurement system. Since an MTI

system preprocesses the radar data to extract profiles, it can be used to get the target position information.

The second fusion level is the fusion of range-bin feature information from different or similar sensor

modalities. In this case, features from the HRR profile are associated with each other for target

classification. Likewise, tracking features are associated to assign measurements to tracks. The third level

is that which incorporates the set-level information. After tracking and feature classification associations

are performed, intersecting sets are fused together for target identification. Fusion at any level requires

recursive updates from past information which is shown by the arrows in Figure 4.11. The diagram

reflects the standard tracking elements of prediction, association, and updating of track information. The

decision output from the multilevel fusion system is the evidential target track and ID belief and its

associated confidence value is used as feedback information to guide the MTI to extract new profiles.

[Figure: three-level fusion architecture. Preprocessed, time-sequenced MTI signatures (HRR, multi 1-D), images (SAR, multi 2-D), and ancillary data feed a kinematic fusion node (time-space-spectrum) with prediction of the next time step; feature fusion and HRR/SAR/track association and classification modules with dynamic databases (DDB) operate at the feature level; a recursive set-intersection level combines feature decisions into a user-interface decision with confidence, and response-planning modules coordinate sensor control.]

Figure 4.11. Three Levels of Information Fusion for HRR Tracking and ID.

4.6.4 Simultaneous Tracking and ID Assumptions

Two methods are used in the belief-probabilistic tracking and set-based algorithms. The first, a

modified PDA [2] technique, which we call Measurement Tracking, searches through all the kinematic-

feature position measurements and probabilistically chooses the plausible measurements most likely to be

associated with the object track. Since only object detection exists, it is a coarse measurement. The

second method, Feature-Set Belief Classification is a procedure that combines feature measurements and

calculates, filters, and predicts object beliefs for object discrimination and track-quality enhancement. To

demonstrate the algorithms, we use HRR amplitudes and locations as the feature set, since closely spaced

moving objects require ID for correct measurement-object association [92]. The simultaneous belief-

probabilistic and set-based track and ID algorithm integrates these two methods. The interaction between

the Measurement tracking and the Feature-set classification is pose, where pose information consists of

the object articulation.

For the analysis that follows, we assume the depression angle is constant and use articulation-

pose as the interacting term between tracking and ID since both depression and articulation can be

determined from trigonometry. We assume that one object exists for each track and that the measurement

information is available at each track step. The kinematic MTI track-pose cues the HRR sensor to process

a set of beliefs in the object type and returns a belief-pose update to the tracker. For set-based belief

filtering, the region-of-interest includes both the MTI-track and HRR-ID set information for complete set-

intersection. Finally, we assume that the MTI and HRR information are independent looks at a target and

cluttered measurements.

In Chapter 5, we develop the state equations to recursively track and ID objects with the joint

belief-probabilistic data association (JBPDA). Chapter 6 formulates the set-based data association

(SBDA) technique.

CHAPTER 5 JOINT BELIEF-PROBABILISTIC DATA

ASSOCIATION TRACKING AND IDENTIFICATION

5.1 Belief Probabilistic State and Measurement Equations

Consider a set of objects8 o, o = 1, …, O, corresponding to different tracks t, where t = 1, …, T, with each

track having different trajectories. For each object, we set up the state equations corresponding to a track t. The

set of object states of a track t are assumed to evolve in time according to:

\[
\begin{bmatrix} x^t \\ \phi^t \\ \mathrm{Bel}^t \end{bmatrix}_{k+1}
= \begin{bmatrix} F & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & M \end{bmatrix}
\begin{bmatrix} x^t \\ \phi^t \\ \mathrm{Bel}^t \end{bmatrix}_k
+ \begin{bmatrix} w_x^t \\ w_\phi^t \\ U_{\mathrm{Bel}}^t \end{bmatrix}_k \tag{5.1}
\]
where $x^t = [\,x^t\ \dot{x}^t\ y^t\ \dot{y}^t\,]^T$ contains the two-dimensional position and velocity states for track t derived from the MTI hits, $\phi^t$ is the pose state, F is the state transition matrix, M is the belief-state transition matrix, $w_{x_k}^t$ and $w_{\phi_k}^t$ are zero-mean, mutually independent white Gaussian noise sequences with known covariance matrices $Q_k$, and $U_{\mathrm{Bel}_k}^t$ is the zero-mean Gaussian uncertainty with a known covariance $D_k$. Note that Bel is a state vector representing the tracked object beliefs for the n trained objects of interest derived from HRR identification. The complete state vector is $X^t = [\,x^t\ \phi^t\ \mathrm{Bel}^t\,]^T$.

The F matrix is the constant-velocity state transition matrix relating position and velocity:

\[
\begin{bmatrix} x^t \\ \dot{x}^t \\ y^t \\ \dot{y}^t \end{bmatrix}_{k+1}
= \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix}_k
\begin{bmatrix} x^t \\ \dot{x}^t \\ y^t \\ \dot{y}^t \end{bmatrix}_k. \tag{5.2}
\]

8 Objects are possible targets, while targets are identified objects.
The M matrix is the Markov transition matrix, which represents the similarity of objects. The

similarity of objects represents how the belief in an object type may be related to other objects of the same

or different type. We use the class structure to indicate objects of the same type (e.g. tanks go to tanks and

trucks go to trucks),

\[
\begin{bmatrix} \mathrm{Bel}_1^t \\ \vdots \\ \mathrm{Bel}_n^t \\ \mathrm{Bel}_{Unk}^t \end{bmatrix}_{k+1}
= \begin{bmatrix} m_{11} & \cdots & m_{1\,n+1} \\ \vdots & \ddots & \vdots \\ m_{n+1\,1} & \cdots & m_{n+1\,n+1} \end{bmatrix}_k
\begin{bmatrix} \mathrm{Bel}_1^t \\ \vdots \\ \mathrm{Bel}_n^t \\ \mathrm{Bel}_{Unk}^t \end{bmatrix}_k, \tag{5.3}
\]
where the belief states are 1, …, n for the number of trained objects of interest and we append a belief in

an unknown (Unk) class to capture all of the objects not in the trained set of objects.
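To fix ideas, a minimal numerical sketch of the state propagation in Equations 5.1 through 5.3 follows (Python with NumPy; the sampling period, number of classes, and mixing values are illustrative assumptions, not values from the dissertation). The kinematic block advances with the constant-velocity matrix F, while the belief block mixes through a symmetric Markov matrix M whose columns sum to one, so the propagated beliefs, with the unknown class appended, remain a valid distribution.

    import numpy as np

    T = 1.0                                  # assumed sampling period
    F = np.array([[1, T, 0, 0],              # constant-velocity transition (Eq. 5.2)
                  [0, 1, 0, 0],
                  [0, 0, 1, T],
                  [0, 0, 0, 1]], float)

    # Markov belief transition for n = 2 trained classes + unknown (Eq. 5.3);
    # symmetric, so rows and columns both sum to one.
    M = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]], float)

    x = np.array([0.0, 10.0, 0.0, 5.0])      # [x, xdot, y, ydot]
    bel = np.array([0.6, 0.3, 0.1])          # [Bel_1, Bel_2, Bel_Unk]

    # One prediction step of Eq. 5.1 (pose state and noise terms omitted)
    x_pred = F @ x
    bel_pred = M @ bel
    print(x_pred, bel_pred, bel_pred.sum())  # beliefs still sum to 1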

The true measurement equation for a track t is:

\[
\begin{bmatrix} z_i^t \\ \phi_{\mathrm{Bel}_i}^t \\ \mathrm{Bel}_i^t \end{bmatrix}_k
= \begin{bmatrix} f_1(x_k^t) \\ f_2(x_k^t, \phi_k^t) \\ f_3(\mathrm{Bel}_k^t) \end{bmatrix}_k
+ \begin{bmatrix} v_x^t \\ v_\phi^t \\ U_U^t \end{bmatrix}_k, \qquad i = 1, \ldots, m_k \tag{5.4}
\]
t
where i indexes the measurements from 1 to $m_k$ at each time step, $f_1(x_k^t)$ is a 4×1 kinematic measurement, $f_2(\phi_k^t)$ is a 1×1 pose measurement from the HRR profile, $f_3(\mathrm{Bel}_k^t)$ is the (n+1)×1 belief-ID measurement, $v_{x_k}^t$ and $v_{\phi_k}^t$ are zero-mean, mutually independent white Gaussian noise sequences with known covariance matrices $R_k$, and $U_{U_k}^t$ is the uncertainty of the uncertainty in the belief-ID updates from the HRR sensor/processor with a covariance matrix $B_k$. Since $U_{U_k}^t$ is both probabilistic and a belief from set information, it needs further definition, which will be discussed in Section 5.2.2. $U_{U_k}^t$ is a bounded uncertainty that captures incomplete knowledge and is a pseudo-stochastic statistic.

Since the measurements are nonlinear, the equations for the measurement vector are processed

separately. The kinematic measurements contain position information and are represented as:

\[
f_1(x_k^t) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} x_k^t. \tag{5.5}
\]

57
The object pose $\phi_k^t$ is a combination of the kinematic-position and belief-ID pose values, determined by:

\[
\phi_k^t = \alpha \underbrace{\phi_{x_k}^t}_{\text{MTI track}} + (1-\alpha) \underbrace{\phi_{\mathrm{Bel}_k}^t}_{\text{HRR ID}}, \qquad 0 \le \alpha \le 1 \tag{5.6}
\]

and rearranging,

\[
f_2(\phi_k^{ID}) = \frac{1}{1-\alpha}\,\phi_k^t - \frac{\alpha}{1-\alpha}\tan^{-1}\!\left(\frac{dy_k^t}{dx_k^t}\right) - \tan^{-1}\!\left(\frac{v_{y_k}^t}{v_{x_k}^t}\right). \tag{5.7}
\]
    

where x, y, vx, and vy are the track states. Depending on the accuracy of the sensor, α can be set proportional to the resolution of the sensors; Section 5.1.3 further describes the pose calculation. For

the dissertation, we investigate the relationship of simultaneous tracking and ID with α’s set to 0.5, 0.8,

and 1.0. If α = 1, it represents the JBPDA without belief-ID and belief-pose updates and relies only on

kinematic-pose to cue the HRR for an object.
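As a small illustration of Equation 5.6, the Python sketch below (the angle values are assumed) fuses track-pose and belief-pose on the unit circle so that angles near the 0°/360° boundary average correctly; α = 1 reproduces the kinematic-only case described above.

    import math

    def fuse_pose(phi_track_deg, phi_belief_deg, alpha):
        """Eq. 5.6: phi = alpha * track-pose + (1 - alpha) * belief-pose,
        computed on the unit circle to handle wraparound at 360 degrees."""
        t = math.radians(phi_track_deg)
        b = math.radians(phi_belief_deg)
        x = alpha * math.cos(t) + (1 - alpha) * math.cos(b)
        y = alpha * math.sin(t) + (1 - alpha) * math.sin(b)
        return math.degrees(math.atan2(y, x)) % 360.0

    for alpha in (0.5, 0.8, 1.0):            # the settings studied here
        print(alpha, round(fuse_pose(358.0, 4.0, alpha), 2))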

The belief measurement information $f_3(\mathrm{Bel}_k^t) = I \cdot \mathrm{Bel}_k^t$, derived from the classification

measurements of the HRR profile, represents the belief update from the HRR signature of each

measurement cued from the MTI scan.

False measurements are uniformly distributed in the measurement space. Tracks are assumed

initialized at an initial state estimate x(0), contain an unknown number of objects determined from the

scenario with one object per track, and have associated covariances. A plausible elliptical validation

region V, with a gate threshold, is set up at every sampling time around the predicted object

measurements and bounds believable MTI-HRR feature-set measurements. Measurements from one object

can fall in the validation region of a neighboring object, constituting persistent interference. All HRR feature

variables that carry information useful to discern the correct measurement from the incorrect ones are

assumed to be included in the measurement vector. Kinematic object-feature measurements are used in the

kinematic-state estimation of the correct object and are assumed centered at the object state. Further

information is included later when we discuss the probabilistic-belief simultaneous tracking and ID

combination. First, we discuss how objects are classified in the next section.

5.1.1 The Belief Functions

To get a belief value, the classification belief filter simulates the confirmation process people

perform by predicting hypotheses in a frame of discernment, Θ. The frame of discernment consists of a

collection of matched features, $\Theta = \cup\{f_1, \ldots, f_s\}$, where features are $f_{a_l}^o$, as described by the STaF algorithm

of Section 3.3. Only a subset of the entire combinations of features is possible. Thus, the belief set is a

modification of Shafer’s belief functions to only include a priori trained set of feature combinations. The

probabilistic fusion of extracted features is performed using Dempster’s rule. For individual peak features,

class likelihoods, a posteriori probabilities, and uncertainties from joint likelihoods are calculated (see

Section 3.3.2). These statistics are used to develop a set of beliefs and plausibilities for specific object

hypotheses.
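For reference, a generic implementation of Dempster's rule of combination is sketched below (Python; the frame of discernment and mass values are illustrative assumptions, not the trained STaF statistics). Two basic probability assignments over subsets of the frame are fused by multiplying the masses of intersecting focal elements and renormalizing by one minus the conflict.

    def dempster_combine(m1, m2):
        """Dempster's rule: fuse two mass functions whose focal elements
        are frozensets; renormalize by 1 - conflict."""
        combined, conflict = {}, 0.0
        for A, mA in m1.items():
            for B, mB in m2.items():
                inter = A & B
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + mA * mB
                else:
                    conflict += mA * mB
        if conflict >= 1.0:
            raise ValueError("total conflict; combination undefined")
        return {A: m / (1.0 - conflict) for A, m in combined.items()}

    # Frame {T72, HMMV, BTR70}; two feature-level mass assignments (assumed)
    theta = frozenset({"T72", "HMMV", "BTR70"})
    m1 = {frozenset({"T72"}): 0.6, theta: 0.4}
    m2 = {frozenset({"T72", "HMMV"}): 0.5, theta: 0.5}
    fused = dempster_combine(m1, m2)
    belief_T72 = sum(m for A, m in fused.items() if A <= frozenset({"T72"}))
    print(fused, belief_T72)                  # Bel(T72) = 0.6 here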

In order to obtain the object beliefs, we need to assess the kinematically gated signals assumed to

comprise the feature set of an object in the measurement space. To determine the object type, the frame of

discernment consists of a collection of matched features over objects, Θ = ∪{o1, ..,on}, where objects are

assigned to specific tracks. Figure 5.1 illustrates the set of data for the signal, feature, object, and

measurement set. Note, in Figure 5.1, that signals are grouped into features and features are grouped for

an object. Objects are assumed to comprise the measurement set. For the objects, we assume they are rigid

bodies, so we can classify the feature set for a single object. Beliefs in object types are computed over feature measurements across the trained set of objects. One way to represent this analysis is that $a_l \supseteq \Gamma \supseteq O \supseteq m$, where $a_l$ is the set of ID measurements in ID space, ϕ, using the feature set for object o. In

measurement space, Z, we extract the position measurements, mk, from the set O and clutter. Note that the

dotted circle defines the gated set of kinematic information from the MTI, restricted to the validated

measurement region of interest. From the processing of the set-belief and probabilistic information, we

fuse the probabilistic kinematic tracking information and the feature classification-belief information to

determine the track-ID uncertainty.

[Figure: identification space ϕ, where HRR hits form signal sets $a_l$, feature sets Γ, and feature space F with object pose, shown alongside tracking measurement space Z, where MTI hits form measurement sets and object sets (e.g. truck, BMP2, T72) in kinematic space O with track pose; feature pose and track pose link the identification and tracking spaces.]

Figure 5.1. Tracking and Identification Measurement Space.

5.1.2 Belief Processing

To process the beliefs for each measurement i = 1,…, mk, we extract the feature measurements f

which are the HRR amplitude, a, and location, l, measurements from the set ϕ. The object-type feature sets are:

$h_\Gamma(F) = p(\Gamma_r = F)$, the extracted set of feature measurements Γ from the set F;

$g_\varphi(a\, l) = p(a_q l_q = \varphi)$, the extracted set of amplitude and location measurements from the set ϕ.

To compute the belief measurement for each object associated with HRR measurement at mk, we

need to select a set of amplitudes and locations from the entire set of feature measurements. Since we

have made a rigid-body assumption for the object, we can use the HRR scan as the feature set of

interest. In this case, $\mathrm{Bel}_h(F) = p(\Gamma \subseteq F) = \beta_\Gamma(F)$ and $\mathrm{Pl}_h(F) = p(\Gamma \cap F \neq \emptyset) = \rho_\Gamma(Z)$, where $\beta_\Gamma$ and $\rho_\Gamma$ are the belief and plausibility measures of Γ, respectively. Likewise, we can construct independent

subsets, Γ, ϕ, of U (the universal set) such that $h(F_{a_l}) = p(\Gamma = F_{a_l})$ and $g(\varphi) = p(\varphi = F_{a_l})$ for all $F_{a_l} \subseteq U$. Then, the belief [109] is:

\[
\mathrm{Bel}^{to}_{F_{a_l} \mid m_k} = (h \oplus g)(F_{a_l}) = p(\Gamma \cap \varphi \mid \Gamma \cap \varphi \neq \emptyset) \tag{5.8}
\]

for all $F_{a_l} \subseteq U$. The above equation shows that the combined belief in the object location and amplitude

measurements al of the ID measurement space, is the probability of intersection of the extracted feature

set Γ and the HRR measurement ϕ. Furthermore, it represents a belief in an object, given that it is a

plausible object. To obtain the belief from the classification algorithm, we need to calculate the class a

posteriori probabilities that result from processing a set of the amplitude and location range bins of the

HRR profile.

The class a posteriori probabilities generate a belief in a specific object class, $\mathrm{Bel}^r_{o_{hyp}}(F_{a_l} \mid m_k)$, for each hypothesized object. The resulting belief vector is the set of $a_l$ peaks associated with the object

hypothesis ohyp for measurement mk from object o. Note that we assume each MTI hit i, i = 1, …, mk has

an associated HRR scan from an object o, where o might or might not be the object of interest. Since

beliefs are based on a class hypothesis across hypothesized objects, the resulting set of beliefs generate a

matrix [104], which is like a covariance matrix for all plausible objects and an unknown category

capturing all unknown objects. Each row of the matrix is associated with an object class and each

column of the matrix is associated with a particular class hypothesis. Note, since the a posteriori

probabilities sum to unity, the beliefs and unknown class sum for any given hypothesis to unity by:

\[
\mathrm{Unk}_{o_{hyp}}(F_{a_l} \mid m_k) + \sum_{o=1}^{O} \mathrm{Bel}_{o_{hyp}}(F_{a_l} \mid m_k) = 1. \tag{5.9}
\]

To combine the object-belief transition from k to k+1, we complete the updates as $\mathrm{Bel}_{k+1}^t = M \cdot \mathrm{Bel}_k^t$,

where M represents the belief transitions among classes between time steps k and k +1.

Since the object belief is calculated, we can use the conflicting information [109] to determine the

object-type plausibility for each track:

\[
\mathrm{Pl}^{to}_{F_{a_l} \mid m_k} = (h \otimes g)(F_{a_l}) = p(\Gamma \cap \varphi \mid \Gamma \cap \varphi = \emptyset). \tag{5.10}
\]

From the believability and plausibility criteria, we have the ID interval of uncertainty:

\[
U_{\mathrm{Bel}_k} = \left[\, \mathrm{Bel}^{to}_{F_{a_l} \mid m_k} \,;\; \mathrm{Pl}^{to}_{F_{a_l} \mid m_k} \,\right] \tag{5.11}
\]

This uncertainty function, based on the believability and plausibility criteria, is then used to

assess the track quality given the track association and the HRR feature classification of the object. For

each HRR profile, the target belief and uncertainty vectors are calculated for each measurement. The

generation of the beliefs directly ties the uncertainty that an observed feature is associated with an object

hypothesis. Thus, a high uncertainty occurs when it is likely that the observed object is not associated with the hypothesized object. By using uncertainty information, unknown object observations are resolved and can be used to assess incomplete knowledge. Note that the beliefs and uncertainties for each track and its objects, o = 1, …, O, form the vectors $\mathrm{Bel}_k^t$ and $U_{\mathrm{Bel}_k}^t$. For the belief state measurement $\mathrm{Bel}_k^t$, we take the diagonal of the matrix update, but we need the plausibility criterion to propagate the uncertainty interval for a given belief.

To update the belief-probabilistic uncertainty $U_{U_k}^t$, we store the belief and plausibility vectors

associated with each object. For each unknown object center-of-gravity measurement, there exists an

associated uncertainty vector that is the representation of the uncertainty in object-classification beliefs, as

shown in Figure 5.2. The uncertainty of uncertainty is a stochastic uncertainty that is based on the belief

uncertainty for a given track which is assumed Gaussian.

[Figure: for object 1 (O1) of track 1 (T1), the believability Bel(T1-O1) and plausibility Pl(T1-O1) on the interval [0, 1] bound the uncertainty of object 1; the belief itself gives the confidence of object 1.]

Figure 5.2. Object Identification Uncertainty Calculation.

In order to process the uncertainty for recursive methods, we need to append the belief set

uncertainty with a Gaussian distribution to get a stochastic variable. Few have proposed a solution to the

problem since stochastic and set theory are based on different axioms. There is considerable discussion of

Bayesian-Belief interpretations in the literature [119,120,121,122] and some efforts to incorporate

probabilistic measures into belief evidence [123,124,125,126,127]. We present a possible method to

represent the belief uncertainty probabilistically. The incomplete knowledge of the object belief is

represented as the uncertainty, or the interval between the belief and the plausibility measures. Since the

incomplete knowledge could be either attributed to the object in question or any other object, we model the

interval as a stochastic variable beyond the belief, given the mean and covariance of the uncertainty. In

this case, the mean belief state equation for object o, for track t, is $\tilde{\mathrm{Bel}}{}_k^{to} = \mathrm{Bel}_k^{to} + 0.5\, U_{z_k}^{to}$, as shown in

Figure 5.3. Additionally, we can use the uncertainty for each object as the diagonal elements of the

matrix Bk. To form the belief state measurements of Equation 5.4, the beliefs for each object are

computed and put in vector form and the uncertainty information is updated.
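A small numeric sketch of this mapping follows (Python; the belief and plausibility values are assumed), showing the interval of Equation 5.11, the mean belief Bel + 0.5 U, and the interval widths placed on the diagonal of the covariance matrix $B_k$.

    import numpy as np

    # Assumed per-object beliefs and plausibilities for one track (Eq. 5.11)
    bel = np.array([0.55, 0.20, 0.05])       # Bel for objects O1, O2, O3
    pl = np.array([0.80, 0.45, 0.30])        # Pl for objects O1, O2, O3

    u = pl - bel                             # uncertainty interval widths U
    bel_mean = bel + 0.5 * u                 # mean belief state, Bel + 0.5 U
    B_k = np.diag(u)                         # uncertainties as diagonal of B_k

    print(u, bel_mean)                       # [0.25 0.25 0.25] [0.675 0.325 0.175]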

[Figure: the interval U(O1) between Bel(O1) and Pl(O1) on [0, 1] is spread stochastically over the belief to form a new belief value, shown for (A) a high-belief case and (B) a low-belief case.]

Figure 5.3. Distribution of Set Uncertainty stochastically over the beliefs.

Note that belief functions, based on a set theoretical approach, can be used to modify the combined

track-ID innovation matrix. Thus, by capturing incomplete knowledge in object identification, the belief-

probabilistic approach does not overestimate the probability associated with exact modeling. Furthermore, the

belief approach adds robustness to object tracking where objects have similar HRR profiles. The prediction of the

belief state from k to k+1 is done as in standard filtering and hence, is recursive.

Since each object measurement may be of any classification, we have an ID belief vector estimate

on the object. The uncertainty associated with the belief values allow for recursively processing the beliefs

for each new tracking measurement. If the uncertainty is high, then the recursive filtering belief update is

weighted less and captures incomplete knowledge in tracking. By capturing incomplete knowledge, we

show a novel tracking algorithm that uses identity beliefs to associate the object measurement to a track.

We will use this information in determining the belief event in data association. In the next section, we

show how the belief-ID gives an estimate of the belief-pose.

5.1.3 Belief Pose Update

We use the feature measurements for object identification. Each object ID is associated with a

specific, known pose referenced to a fixed sensor which can be used to refine the track-pose estimate

(Equation 5.6) of the object as shown in Figure 5.4. Figure 5.5 shows how the belief-ID pose updates the track-

pose estimate in the object’s frame. Object-type beliefs are resolved for a trained set of objects with a

known pose. Since the aperture of the HRR radar is small, we assume that eight peak range bin features

can discern object types. Using only eight features with limited sampling of trained data can give a belief

with a pose estimate of the object; however, the belief-pose estimate by itself may not be robust enough for

tracking. Using track-pose information can increase the pose accuracy of the target ID. Likewise, the

target-ID pose can increase the track-pose accuracy. In this case, $\phi_{\mathrm{Bel}_k}^t$ from the classification updates the track-pose, $\phi_{x_k}^t$.

[Figure: sensor at $(x_s, y_s)$ with line-of-sight vectors to the object positions $(x_o, y_o)$ at times k-1 and k; the pose angle lies between the line-of-sight angle $\phi_{LOS}$ and the velocity-vector angle $\phi_{velocity}$.]

Figure 5.4. Pose Estimate from a Fixed Sensor.

Let the sensor coordinates $(x_s, y_s)$ be at [0, 0] and the object coordinates be $(x_o, y_o)$. To get the line-of-sight angle for the position, we have:

\[
\phi_{LOS} = \tan^{-1}\frac{y_o - y_s}{x_o - x_s} \;\Rightarrow\; \phi_{LOS} = \tan^{-1}\frac{y_p}{x_p}
\]

To get the object velocity vector (assuming one time step):

\[
\phi_{velocity} = \tan^{-1}\frac{\Delta\hat{y}}{\Delta\hat{x}} = \tan^{-1}\frac{\hat{y}_{o_k} - \hat{y}_{o_{k-1}}}{\hat{x}_{o_k} - \hat{x}_{o_{k-1}}}
\]

and thus, the pose between the line-of-sight (LOS) and the velocity vector is

\[
\phi = \phi_{LOS} - \phi_{target} = \tan^{-1}\frac{\hat{y}_p}{\hat{x}_p} - \tan^{-1}\frac{\hat{y}_{p_k} - \hat{y}_{p_{k-1}}}{\hat{x}_{p_k} - \hat{x}_{p_{k-1}}}
\]

Using Figure 5.4, we have:

\[
\phi = \phi_{LOS} - \phi_{velocity} = \tan^{-1}\frac{\hat{y}_1}{\hat{x}_1} - \tan^{-1}\frac{\hat{y}_2}{\hat{x}_2}
\]

where subscript 1 denotes the filter's position estimate and subscript 2 the filter's velocity-based position estimate. Since the object is moving in a straight line, the velocity-based estimate of position is taken as truth, while the position estimate comes from the tracker. Figure 5.5 illustrates the object's coordinate reference frame. Thus,

\[
\phi = \phi_{LOS} - \phi_{velocity} = \tan^{-1}\frac{\hat{x}_2\hat{y}_1 - \hat{x}_1\hat{y}_2}{\hat{x}_1\hat{x}_2 + \hat{y}_1\hat{y}_2}.
\]

[Figure: the object's coordinate frame with the track-pose reference, showing the pose estimate φ from (a) the velocity vector alone and (b) the velocity vector corrected by the belief pose $\phi_{\mathrm{Bel}}$ from object identification.]

Figure 5.5. Pose estimate from (a) Kinematic Information only and (b) Object Identification.

Note the final pose estimate, φ , is determined from the kinematic estimate that is corrected by

the belief-ID pose estimate. In the next section, we show the tracking equations and how association is

performed to resolve objects from clutter.

5.2 Tracking and Identification Belief Filter

The Tracking Probabilistic-Belief Filter devotes equal attention to every validated kinematic or

ID measurement and cycles through object measurements until a believable set of object IDs is refined to

associate one object per track. For an initial set of measurements, a hypothesized number of tracks and

objects of interest is assumed to comprise the entire set. Successive measurements and updates from the

combined feature and track measurements determine the set of plausible good objects, G. The

measurement filter assumes the past is summarized by an approximate sufficient statistic – track state and

belief state estimates (approximate conditional mean) and covariances for each object.

The measurement-to-track association probabilities are computed across the objects and these

probabilities are computed only for the latest set of measurements. The conditional probabilities of the

joint track-ID association events pertaining to the current time k are defined as θjotk, where θjotk is the

event that object center-of-gravity measurement j originated from object o and track t, j = 1,…, mk; o = 0,

1, …, On, where mk is the total number of measurements for each time step and On is the unknown

number of objects. Note, for purposes of tracking and ID, we define i = 1,…, mk for the entire

measurement set while j = 1,…, mk is for tracking and o = 1…, mk is for object ID.

66
A validation gate for each object bounds the believable joint measurement events, but not in the

evaluation of their probabilities. The plausible validation matrix $\Omega = [\omega_{jt}]$ is generated for each object of a given track and comprises binary elements that indicate whether measurement j lies in the validation gate of track t. The index t = 0 represents "the empty set of tracks," and the corresponding column of Ω

includes all measurements, since each measurement could have originated from clutter, false alarm, or the

true object.

For a track event, we have:

\[
\hat{\omega}_{jt}(\theta) \triangleq
\begin{cases}
1, & \text{if } \theta_{jt}^i \in \theta, \text{ where measurement } [z]_k^i \text{ originated from track } t \\
0, & \text{otherwise}
\end{cases} \tag{5.12}
\]

For a believable event, which is above a predetermined ID threshold,

\[
\hat{\omega}_{oO}(\theta) \triangleq
\begin{cases}
1, & \text{if } \theta_{oO}^i \in \theta, \text{ where measurement } [\mathrm{Bel}]_{O_k}^i \text{ is associated with object } o \\
0, & \text{otherwise}
\end{cases} \tag{5.13}
\]

Since JBPDA is tracking multiple objects, o, assuming one for each track, t, it has to determine

the belief in each object from a known data-base comparison. While these beliefs are processed over time

to discern the object, for each measurement, JBPDA must determine if the track-HRR signature is

plausible. JBPDA uses the current beliefs to update the association matrix. If the belief in the object is

above a threshold, JBPDA declares the measurement i, to be plausible for the object. Note, for plausibility,

the threshold is lower than an ID declaration as will be discussed later in the data association matrix.

Since we have assessed the continuous-kinematic information and the discrete-classification

event, we can now assess the intersection of kinematic and ID information for simultaneous object

tracking and ID. Note, ID goes beyond object detection, recognition, and classification, where we define

ID as the classification of an object-type for a given track to associate an object classification to a track.

For instance, two objects of the same class still need to be associated with a specific track. We need to

address feasible events for either a validated kinematic measurement or a validated ID belief. A

kinematic-belief joint association event consists of the values in Ω corresponding to the associations in

$\theta_{jot}$,

\[
\hat{\omega}_{jot}(\theta) \triangleq
\begin{cases}
1, & \text{if } \theta_{jot}^i \in \theta, \text{ where measurement } [z]_k^i \text{ originated from track } t \text{ with a } [\mathrm{Bel}]_{o_k}^i \text{ for a given } O_{ot} \\
0, & \text{otherwise}
\end{cases} \tag{5.14}
\]

where

\[
\hat{\omega}_{jot}(\theta) = \hat{\omega}_{jt}(\theta) \oplus \hat{\omega}_{oO}(\theta). \tag{5.15}
\]

Note, we define the indices as jot since O is the number of objects which is equal to the number

of tracks.

These joint events will be assessed with “β” weights [2] to determine the extent of belief in the

associations. To process the believability of track associations, augmented with the ID information, we set

up a matrix formulation. For example, we have a set of kinematic measurements $z_i^t$ with a $\mathrm{Bel}^t$ and put

them into the event association matrix illustrated below. In Figure 5.6, the upper left of a box represents

the track information where a “1” indicates the kinematic measurement lies within a gated position

measurement. The lower right represents the belief in an object type of any class except the unknown class

where a believable object receives a “1”. Columns are for tracks and rows for measurements.

[Figure: example joint association matrix with columns for tracks T1/Bel1 and T2/Bel2 and rows for measurements z1 through z5; each cell holds a kinematic indicator (upper left) and a belief-ID indicator (lower right).]

Figure 5.6. Tracking and Classification Joint Association – circles indicate an “OR”.

In the case of joint association, JBPDA processes event matrices with an “OR” function which

allows for plausible events from either the track or the classification. Using an "OR"

function, if either the track or the classification belief indicates a possible event, JBPDA puts a 1 in the

event matrix indicating it is a possible track-ID measurement event. To determine the plausibility of

events, JBPDA uses the validation region for measurements for the track region and uses a threshold, or

classification gate, to determine the match for a target-type classification associated with a given track.

As an example of the "OR" function, we illustrate the process in Figure 5.7.

[Figure: the four kinematic/belief-ID indicator combinations; kinematic 0 with belief-ID 0 rejects the track/ID event, while kinematic 0 with belief-ID 1, kinematic 1 with belief-ID 0, and kinematic 1 with belief-ID 1 all keep it.]

Figure 5.7. Believable Events for the association matrix.

Note, JBPDA only rejects non-believable measurements that lie outside the validation kinematic gate.
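A compact sketch of this "OR" construction follows (Python; the gate results, belief values, and threshold are assumed inputs): an event-matrix cell is set to 1 when either the kinematic gate or the classification belief validates the measurement-track pair.

    import numpy as np

    def joint_event_matrix(kin_gate, bel, bel_threshold=0.3):
        """kin_gate[j, t] = 1 if measurement j is in track t's kinematic
        gate; bel[j, t] = belief that measurement j matches track t's
        object type. The joint event is the elementwise OR of the two."""
        id_gate = (bel >= bel_threshold).astype(int)
        return np.maximum(kin_gate, id_gate)

    # 5 measurements x 2 tracks of assumed gating and belief values
    kin_gate = np.array([[1, 0], [0, 1], [0, 1], [1, 1], [1, 0]])
    bel = np.array([[0.6, 0.1], [0.1, 0.5], [0.2, 0.4],
                    [0.4, 0.4], [0.5, 0.1]])
    print(joint_event_matrix(kin_gate, bel))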

For the determination of the weights assigned to these associations, we need to set up the state

and probability values. To get the possibility values, we assume a believable track-classification

association event has:

i) a single object-type measurement from a source:

\[
\sum_{o=0}^{O_n} \hat{\omega}_{jot}(\theta_{jot}^i) = 1 \quad \forall\, j, \tag{5.16}
\]

ii) and at most one object-type measurement belief originating from an object for a given track:

\[
\delta_t(\theta) = \sum_{j=1}^{m_k} \hat{\omega}_{jot}(\theta_{jot}^i) \le 1. \tag{5.17}
\]

The generation of event matrices $\hat{\Omega}$ for each track, corresponding to believable events, can be done by scanning Ω and picking one unit per row and one unit per column for the estimated set of tracks, except for t = 0. In the case where JBPDA generates event matrices for an estimated number of tracks with different object types, JBPDA needs to assess the combination of feature measurements from the global set of

feature measurements to infer the correct number of tracked objects that comprise the set. The binary

variable $\delta_t(\theta_{jot_k})$ is called the track detection indicator [2] since it indicates whether a measurement is associated with the object o and the track t in event $\theta_{jot_k}$, i.e., whether it has been detected.

The measurement association indicator


\[
\tau_j(\theta_{jot_k}) = \sum_{j=1}^{m_k} \hat{\omega}_{jot}(\theta_{jot_k}) \tag{5.18}
\]

indicates that measurement j is associated with the track t in event $\theta_{jot_k}$.

In order to determine which targets are correct, JBPDA needs to process separate event matrices

for all the believable objects that can possibly be associated with a track. Since initially the number is the full set, JBPDA needs to form event matrices for the entire set of objects and perform track maintenance to determine the correct number of objects, as shown in Figure 5.8. After the believability of

some measurements are returned, JBPDA can eliminate low-belief objects from the set of plausible

objects. The refined set of objects, G, is the number of good objects, is assessed for tracking which is

shown in the Figure 5.8. Figure 5.8a illustrates the kinematic and object-ID measurements. Figure 5.8b

shows the event matrices for G good objects. After assessing the number of good objects, where G < n,

JBPDA only needs to process event matrices for the validated set of objects. This refined set of good

objects will converge to a single correct object-ID after successive measurements of the tracking and ID

scenario.

[Figure 5.8: (a) kinematic and object-ID measurements, with believable and non-believable object sets for each track; (b) Kin/ID validation and data association event matrices for each of the G good objects, followed by track fusion to produce the updated state x̂^t_{k+1|k+1}.]

Figure 5.8. Track Maintenance.

After assessing the event matrices for a set of good objects for each track, JBPDA can obtain the number of false measurements in event θ_{jot,k} as:

Φ(θ_{jot,k}) = ∑_{j=1}^{m_k} [1 - τ_j(θ_{jot,k})] .        (5.19)

For the belief-probabilistic state and covariance, we have

X̂^{ot}_k = E{X^{ot}_k | Z^k}        (5.20)

p{X^{ot} | Z^k} = ∑_{i=1}^{m_k} p{X^{ot} | θ_i(k), Z^k} P{θ_i(k) | Z^k}        (5.21)

If only MTI position measurements result, we could use joint probabilistic data association for

each track. However, if the constraints are that each measurement can only come from one track and one

object type exists for a single track, we have to use a joint belief-probabilistic measurement assignment.

The joint track and ID association event belief-probabilities, using Bayes' formula, are:

P{θ^{ot} | Z^k} = P{θ^{ot} | Z_k, Z^{k-1}} = (1/ψ) p[Z_k | θ^{ot}_k, m_k, Z^{k-1}] P{θ^{ot}_k | m_k}        (5.22)

where ψ is the normalization constant.

We have assumed independent events to make the mathematics simpler. Moreover, if we assess

the nonlinear coupling between the MTI and HRR, we can assume that the MTI cues the HRR sensor such

that the measurement update from the HRR is an independent look at the object. Since the MTI is a coarse

processor and the HRR is at a finer resolution, each sensor provides an independent and different type of information.

The number of measurement-to-track assignment events, θ_{jot,k}, is the number of objects to which a measurement is assigned under the same detection event, [m_k - Φ]. The track indicators, δ_t(θ_{jot,k}), are used to select the probabilities of detecting and not detecting belief-track events under consideration. Since the kinematic information has been estimated from the measurements, the detected information can be used to classify objects. With each successive set of measurements, an estimate of the track-pose, φ^t_k, is determined from the position and velocity information from the past set of measurements relative to a fixed reference for the object measurements. φ^t_{x_k} is the track-pose estimate that centers the pose match between the HRR trained data set and the HRR measurement. Each measurement, after classification, has a different belief-pose, φ^t_{Bel_k}, of an observation to a trained data set. The highest object belief from the
trained HRR classification set is used to update the track-pose. If no belief is significantly greater than the

other target beliefs, the average pose is used. The track-uncertainty in each object for a track is updated

with its associated belief-uncertainty (see Section 5.2.2). Over time, if the belief-uncertainty is reduced,

the highest object-belief becomes the object-pose.

The likelihood function of the measurements on the right-hand side of Equation 5.22 is:

p[Z_k | θ_{jot,k}, m_k, Z^{k-1}] = ∏_{j=1}^{m_k} p[z_{jo,k} | θ_{jot,k}, Z^{k-1}]        (5.23)

where m_k is the number of measurements in the union of the validation regions at time k. The product form of the above equation follows from the assumption that the states of the targets conditioned on the past observations are mutually independent.

The conditional pdf of a measurement, given its origin, is:

p[z_{jo,k} | θ_{jot,k}, Z^{k-1}] = { f_t[z_{jo,k}]  if τ_j[θ_{jot,k}] = 1 ;  V^{-1}  if τ_j[θ_{jot,k}] = 0 } ,        (5.24)

where f_t[z_{jo,k}] = N[z_{jo,k}; ẑ^{tj}_{k|k-1}, S^{tj}_k] and ẑ^{tj}_{k|k-1} is the predicted measurement for target t_j, with associated innovation covariance S^{tj}_k.

Measurements not associated with a target are assumed uniformly distributed in the surveillance region of volume V. Using the above two equations, the pdf of the likelihood of measurements can be written as follows:

p[Z_k | θ_{jot}(k), m_k, Z^{k-1}] = V^{-Φ} ∏_{j=1}^{m_k} {f_t(k)[z_j(k)]}^{τ_j} .        (5.25)

In the above equation, V^{-1} is raised to the power Φ(θ_{jot,k}), the total number of false measurements in event θ_{jot,k}, and the indicators τ_j(θ_{jot,k}) select the single measurement according to their associations in event θ_{jot,k}.

The Prior Probability of a Joint Track-ID Association Event

The prior (to time k) probability of an event θ_{jot,k}, the last term in the joint belief-probabilistic association equation, is obtained next. Denote by δ(θ_{jot,k}) the vector of target detection indicators corresponding to event θ_{jot,k}. Note that, given θ_{jot,k}, the vector δ(θ_{jot,k}) is completely defined and so is the number, Φ, of false measurements given above. Therefore:

P{θ_{jot,k} | m_k} = P{θ_{jot,k}, δ(θ_{jot,k}), Φ(θ_{jot,k}) | m_k} .        (5.26)

The above joint probability can be written as

P{θ_{jot,k} | m_k} = P{θ_{jot,k} | δ(θ_{jot,k}), Φ(θ_{jot,k}), m_k} P{δ(θ_{jot,k}), Φ(θ_{jot,k}) | m_k}        (5.27)

The first term on the right-hand side of the above equation is obtained from the following relations [2]:

1) The event θ_{jot,k} for a set of detected objects consists of [m_k - Φ] objects.

2) The number of measurement-to-object assignment events, θ_{jot,k}, for a set of detected objects is given by the number of permutations of the m_k measurements taken [m_k - Φ] at a time, the number of objects to which a measurement is assigned under the same detection event.

Therefore, assuming each such event a priori equally likely, one has

P{θ_{jot,k} | δ(θ_{jot,k}), Φ(θ_{jot,k}), m_k} = ( _{m_k}P_{m_k-Φ} )^{-1} = [ m_k! / Φ! ]^{-1}        (5.28)

The last term in the equation above is, assuming δ and Φ independent,

P{δ(θ_{jot,k}), Φ(θ_{jot,k}) | m_k} = ∏_t (P^o_D)^{δ_t} (1 - P^o_D)^{1-δ_t} µ_F(Φ)        (5.29)

where P^o_D is the detection probability of object o of track t and µ_F(Φ) is the prior probability mass function of the number of false measurements (the clutter model). The indicators δ_t(θ) have been used in the above equation to select the probabilities of detection and non-detection events according to the event θ_{jot}(k) under consideration.

Combining the last three equations yields the prior probability of a joint association event θ_{jot}(k) as

P{θ_{jot,k} | m_k} = (Φ! / m_k!) µ_F(Φ) ∏_t (P^o_D)^{δ_t} (1 - P^o_D)^{1-δ_t}        (5.30)

The Posterior Probability of Joint Track-ID Association Event

Combining previous equations yields the posterior probability of joint association event θ_{jot,k} as:

P{θ_{jot,k} | Z^k} = (1/c) (Φ! / m_k!) µ_F(Φ) V^{-Φ} ∏_{j=1}^{m_k} {f_t(k)[z_j(k)]}^{τ_j} ∏_t (P^o_D)^{δ_t} (1 - P^o_D)^{1-δ_t}        (5.31)

where Φ, δ_t, and τ_j are all functions of the event θ_{jot,k} under consideration.
k

The above equation still needs the specification of the probability mass function (pmf) of the number of false measurements µ_F(Φ). Using a parametric JPDA with the Poisson pmf µ_F(Φ) = e^{-λV} (λV)^Φ / Φ! requires the spatial density λ = Φ(k)/V(k) of false measurements [2].

Using the last two relations leads to cancellation of V^Φ and Φ!. Furthermore, each term contains e^{-λV} and m_k!, which also cancel since they appear in the denominator c of the posterior probability equation, which is the sum of the numerators.

Thus, the joint association probabilities of the parametric system are

P{θ_{jot,k} | Z^k} = (λ^Φ / c) ∏_{j=1}^{m_k} {f_t[z_{j,k}]}^{τ_j} ∏_t (P^o_D)^{δ_t} (1 - P^o_D)^{1-δ_t}        (5.32)

where c is the normalization constant.

The decoupled state estimation uses the marginal association probabilities, which are found from the joint probabilities by summing over all the joint events in which the marginal track and classification events result. We use the beta weights [2] as:

β^t_{jo,k} ≜ P{θ_{jot,k} | Z^k} = ∑_θ P{θ_{jot,k} | Z^k} ω̂_{jo}(θ_{jot,k}) .        (5.33)

To illustrate the process, Figure 5.9 shows the flow of information for the JBPDA, from X̂^t_{k-1|k-1} to X̂^t_{k|k} using the above β^t_k weights.

[Figure 5.9: the JBPDA information flow. The filtered state, pose, and ID beliefs are propagated; kinematic measurements Z_MTI are gated; believable track event matrices are selected; joint track/ID belief filters estimate track detection δ_t and ID belief for each track from the HRR measurements Z_HRR(a,l); and the β weights combine the measurement associations to update the state, pose, and ID beliefs.]
Figure 5.9. Information Flow for JBPDA.

5.2.1 Track and ID State Estimation

JBPDA decomposes the object state estimation with respect to the location of each object of the latest

set of validated feature-set and kinematic-set measurements. The features have been used to obtain the

classification beliefs in the object types, so we can set up a simultaneous tracking and ID recursion for

each object in the set. ID is the classification of each object for a given track of data. For each object

measurement, we use the total probability theorem to get the conditional mean of the state at time k, written as:

X̂^t_{k|k} = ∑_{i=0}^{m^o_k} X̂^{ti}_{k|k} β^{ti}_k ,        (5.34)

where X̂^{ti}_{k|k} is the updated state conditioned on the event that the i-th validated object measurement is correct for track t.

In order to combine measurements, we need to update the covariance for the track-ID system. The measurement information has previously been defined as z_k, but we need to set up the H_k matrix for the propagation of measurement information. Since the measurement information is non-linear, we employ the discrete extended Kalman filter [128]. The H^{ot}_k matrix for each time step k, object o, and track t is:

 Hxx 0 0  t
o
Hk t

= Hφx Hφφ 0  , (5.35)
 
 0 0 HBel Bel  k

 10 00 00 00  ∂f2
=
1 
where Hxx = 
0 0 1 0  φφ ∂φ
, H = ,H = I, and for Hφx we need to get the Jacobian for
 - α Bel Bel
0 0 0 0
1

each time step k:

 ∂f2 ∂f2 ∂f2 ∂f2   α   yk 


^ v^ yk ^
-x -v^ xk
k
Hφx = -  ∂x ∂v  =  2  (5.36)
 k xk ∂yk ∂vyk  1 - α  xk + yk v xk + vyk xk + yk
2 ^ 2 ^2 2 ^2 ^2 2
k ^ ^ v^xk + v^ yk 

We can set up the covariance matrix as a function of the uncertainties from the state information of kinematic, pose, and beliefs. For the covariance propagation, we can use the belief-probabilistic uncertainty information U^{to}_{Bel_k} = [ Bel^{to}_{F_{al}|m_k} ; Pl^{to}_{F_{al}|m_k} ] = U^{ti}U_k, where we diagonalize the uncertainty into a covariance matrix B_k. The covariance propagation is:

P^t_{k|k-1} = F^t_{k-1} P^t_{k-1} (F^t_{k-1})^T + Q̄^t_{k-1}, where Q̄_k = [ Q_k  0 ; 0  B_k ]        (5.37)

for each track t.

We can obtain the innovation covariance S_k with the associated R_k and measured D_k by:

S^t_k = H^{ot}_k P^t_{k|k-1} (H^{ot}_k)^T + R̄^t_k, where R̄_k = [ R_k  0 ; 0  D_k ]        (5.38)

Since Sk is the innovation covariance update, we can use Sk to validate measurements based on the

uncertainty with the associated track and ID beliefs.

Validation:

At k, two measurements are available for object o for a given track t: z^T_{k-1} and z^T_k, from which position, velocity, pose, and ID features can be extracted from the belief track vectors. Validation, based on track and ID information, is performed to determine which track-belief measurements fall into the kinematic region of interest. Validation can be described as

(z^{lt}_k - ẑ^t_{k|k-1})^T [S^t_k]^{-1} (z^{lt}_k - ẑ^t_{k|k-1}) ≤ γ   for l = 1 … m^o_k        (5.39)

where γ is a validation threshold obtained from a χ² table for 17 DOF, S_k stands for the largest among the predicted track belief covariances, i.e., det(S_k) ≥ det(S^t_k) for t = 1, 2, ..., n, where n is the number of states, and ẑ^s_{k|k-1} is a combined predicted track belief given by E{z_k | {β^s}_o = 1, Z^{k-1}}, where s is the set of object beliefs for a track.
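As a minimal Python sketch of this gate, assuming numpy/scipy and treating the 17-dimensional augmented measurement as a flat vector (function and argument names are ours, not the dissertation's):

import numpy as np
from scipy.stats import chi2

def validate(z, z_pred, S, dof=17, gate_prob=0.99):
    gamma = chi2.ppf(gate_prob, df=dof)       # chi-square table lookup for gamma
    nu = z - z_pred                           # innovation
    d2 = float(nu @ np.linalg.solve(S, nu))   # squared Mahalanobis distance
    return d2 <= gamma                        # inside the validation region?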

Data association for β^{ti}_l:

Data association, performed for each belief object-track in the kinematic region, is similar to that in PDA and the details can be found in [2]. The association probabilities for the l validated object measurements are

β^t_l = e^t_l / ( b + ∑_{l=1}^{m^o_k} e^t_l ),   l = 1, 2, …, m^o_k        (5.40)

β^t_0 = b / ( b + ∑_{l=1}^{m^o_k} e^t_l ),        (5.41)

where e^t_l = P_G^{-1} N(0, S^t_k)        (5.42)

b = m^o_k (1 - P_D P_G) [P_D P_G V_k]^{-1}        (5.43)

where m^o_k is the number of validated object measurements, P_G is the probability that augmented belief track measurements fall into the validation region, and P_D is a detection probability. The volume of the validation gate is

V_k = C_d γ^{d/2} |S_k|^{1/2} ,        (5.44)

where C_d is the volume of the unit hypersphere of dimension d, and d = 17 is the dimension of the augmented belief-track measurement [4 kinematic states, 1 pose, 11 object belief states, 1 unknown belief state].
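A minimal Python sketch of (5.40)-(5.43), under the assumption that the validated innovations and the innovation covariance are available as numpy arrays; P_D, P_G, and V_k follow the definitions above, while the function name is illustrative:

import numpy as np

def pda_weights(nus, S, PD, PG, Vk):
    Sinv = np.linalg.inv(S)
    norm = 1.0 / np.sqrt(np.linalg.det(2.0 * np.pi * S))
    # e_l: gated Gaussian likelihood of each validated innovation, cf. (5.42)
    e = np.array([norm * np.exp(-0.5 * nu @ Sinv @ nu) for nu in nus]) / PG
    # b: weight that none of the validated measurements is correct, cf. (5.43)
    b = len(nus) * (1.0 - PD * PG) / (PD * PG * Vk)
    denom = b + e.sum()
    return e / denom, b / denom               # beta_l (5.40), beta_0 (5.41)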

Kinematic belief-probabilistic update:

The object belief-probabilistic track update is performed as a full rate system to combine the state,

innovation, and covariances.

X̂^t_{k|k} = X̂^t_{k|k-1} + W^t_k ∑_{l=1}^{m^o_k} β^t_{lk} ν^t_{lk}        (5.45)

and

P^t_{k|k} = β^t_0 P^t_{k|k-1} + (1 - β^t_0) P*_{k|k} + W^t_k [ ∑_{l=1}^{m^o_k} β^t_{lk} ν^t_{lk} (ν^t_{lk})^T - ν^t_k (ν^t_k)^T ] (W^t_k)^T        (5.46)

where P*_{k|k} = [ I - W^t_k H^{ot}_k ] P^t_{k|k-1} and ν^t_k = ∑_{l=1}^{m^o_k} β^t_{lk} ν^t_{lk}        (5.47)

and W^t_k = P^t_{k|k-1} [H^{ot}_k]^T (S^t_k)^{-1}        (5.48)

where H^{ot}_k is the measurement matrix that is calculated for each object pose, φ, and estimated position of track t.

5.2.2 Track and ID Fusion

Since the association events for each hypothesized object are computed, the combined information will be used to assess the object identity. The update to the global belief-probabilistic track-ID information for the entire system is performed by track fusion over the objects in a track. For each object, we have a pose match and a belief in the object. Fusing the information will create a global state update in the pose, position, and object beliefs. The update to the global track-ID covariance for each track t, from the hypothesized set of good objects g, g = 1, …, G, is:

[P^t_{k|k}]^{-1} = ∑_{g=1}^{G} [P^{gt}_{k|k}]^{-1} .        (5.49)

Similarly, the global state estimate can be obtained as [129]:

X̂^t_{k|k} = P^t_{k|k} • ( ∑_{g=1}^{G} [P^{gt}_{k|k}]^{-1} • X̂^{gt}_{k|k} ) .        (5.50)

The fusion method is illustrated below where the processes inside the boxes are displayed in Figure 5.10.

[Figure 5.10: X̂^t_{k-1|k-1} (state, pose, and object beliefs) feeds parallel track filters, each carrying a hypothesis in one object; Z_MTI and Z_HRR(a,l) drive the filters, and track fusion combines the hypotheses into the global update X̂^t_{k|k}.]

Figure 5.10. Fusion of Objects in JBPDA.
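A minimal Python sketch of (5.49)-(5.50), assuming each of the G good-object hypotheses supplies a state estimate and covariance (names illustrative):

import numpy as np

def fuse_tracks(states, covs):
    infos = [np.linalg.inv(P) for P in covs]          # information matrices
    P_fused = np.linalg.inv(sum(infos))               # (5.49)
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, states))   # (5.50)
    return x_fused, P_fused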

5.2.3 Track Initiation by ID

As an example of how the number of tracks is determined, we use the above information to

estimate the number of tracks for t = 1, …, Thyp where if the hypothesized number of tracks equals the

true number of tracks, Thyp = T. Figure 5.11 illustrates track initiation.

[Figure 5.11: an association matrix with columns T1/bel1, T2/bel2, T3/bel3 and rows z1 through z5; gated measurements and belief updates are shown for each hypothesized track. A measurement with low track probability but high belief-ID (e.g., z4 with {O6, O7}) helps to initiate a new track, so the hypothesized set {t1, t2} grows to the estimated set {t1, t2, t3}.]

Figure 5.11. Number of Tracks determined from Association Matrix Bel IDs.

In Figure 5.11, we see the target ID helps to confirm how many tracks there are. Note that in

column four, row two, tracking does not indicate an event, but the object ID does. There is reason to

suspect a new track may exist. In the case of a new track being initiated after k = 1, the probability of a

new track decreases, since history has hypothesized a set of tracks. After each measurement update, the

object classification helps to weight the measurement for each track. In this scenario, we have two cases.

In case 1, as illustrated above, the high belief-ID falls outside the kinematic gate and initiates a new track.

In the second case, we might have two objects with high Belief-ID inside the kinematic gate. A new track

is not initiated because it is assumed that one of the objects would eventually fall outside the kinematic

gate and initiate a new track at a later time. After specific object classes are assigned to the tracks, the

object is identified. If all objects are of the same type, the analysis would be similar to JBPDA without the belief-ID and belief-pose updates, with all position measurements coming from the same source. However,

spurious measurements would not have the correct ID even though they fall within the track validation

gate. Thus the spurious measurements would not be track-ID plausible.

5.3 Summary of Chapter 5

The purpose of this chapter was to introduce the Joint Belief-Probabilistic Data Association (JBPDA)

which combines track and ID information. The method included identification information to resolve the

association event matrix. To obtain an object belief, we used a modification of the STaF algorithm to associate an object ID with an HRR profile. The pose update and the choice of measurements were

obtained from the object ID and beta weights, respectively. To propagate uncertainty, an uncertainty

calculus was derived to combine a belief and probabilistic uncertainty. Finally, the propagation of

information was similar to the data association tracking methods which included both ID and tracking

information. In the next Chapter, we show the SBDA approach which is related to the JBPDA, but with a

set established for the kinematic information, an “AND” function in the event matrix, mutual information

for object classification, and evidential belief used for object identification.

CHAPTER 6 SET-BASED SIMULTANEOUS TRACKING AND ID

This Chapter overviews the set-theory tracking formulation and applies a novel recursion

technique, using the belief filter for the simultaneous tracking and identification. The set-based data

association (SBDA) approach uses a kinematic set of validated position hits, an ID set of validated target

beliefs, and performs an “AND” function to combine kinematic and ID sets. Further, to increase the

robustness of tracking and ID, mutual information is used to perform set fusion over object classifications

for a range of poses. Finally, an evidential belief for target ID is used to fuse belief sets over time. Before

the algorithm is presented, Figure 6.1 illustrates the approach and can be compared to the JBPDA of

Figure 5.9.

[Figure 6.1: at each time step k, the track/ID set combination ANDs the track set (MTI hits) with the object set in the event matrix, while the observation/training set combination matches the HRR observation set against the HRR training set by mutual information (differential entropy) and STaF beliefs; the β weights are propagated, position sets are combined from k to k+1, and the FUSE algorithm performs the belief/belief combination over time.]

Figure 6.1. Overview of the Set-Based Data Association.

Figure 6.1 shows that the set-theory approach fuses four sets of information. The first is that of

the trained feature set and the observation set. Mutual information can be used to match a set of HRR

profiles for each object over a range of poses. The result is a differential entropy measure. Additionally,

we can use the STaF beliefs to capture the uncertainty of the system for all objects. The second set is that

of tracks and objects, which is an “AND” function of the event matrix, (Section 5.2). The track/ID and

observation/training set information is used to weight the beta weights. The third set is the propagation of

track information over time using the event matrix. In this case, we match MTI position hits over time

through propagation and association. The final set is the fusion of beliefs from k to k + 1 using the FUSE

algorithm. The difference between the JBPDA and the SBDA is in the evidential fusion of set information

where beliefs and uncertainties of objects are accrued over time, the AND function, and mutual

information for object classification over a range of poses. The similarity of JBPDA and SBDA is the

propagation of track information from time k to k +1. The chapter organization begins with Sections 6.1,

6.2, and 6.3, which illustrate the track state, the state estimation, and the track set, respectively. Section 6.4

will introduce the mutual information and the AND function for determining the position measurement

from which to update track pose. Sections 6.6 and 6.7 derive the disjoint and joint uncertainty

propagation for set fusion using the FUSE algorithm. Finally, Section 6.8 will propagate the state

information similarly to the JBPDA with a modification to the "β" weights.

6.1 Kinematic Tracking Belief Filter

MTI radar tracking assumes that, after receiving the energy return from the radar, an approximate coarse position of the target results. Since a finite number of range bins are collected, the

center bin is assumed to be the position of the target. Additionally, the radar collection has an associated

depression and azimuth angle to the target. After resolving the direction of movement, the relative pose

of the target is indicated for a belief classification to be performed. Figure 6.2 shows how track

information gives pose estimates to orient the target for range-bin feature extraction. Confidence values are

associated with the target classifications derived from the fusion of HRR features, discussed in Section 3.2.

In the SBDA, confidences are achieved in two methods. The first is in determining the mutual

information content of the target classification over a range of poses and amplitude features. The second

is the confidence achieved through the “β” weights determined from the AND of the track and ID events.

[Figure 6.2: the SBDA tracking model. Target tracking and feature extraction feed per-track joint belief track/ID filters; mutual information over a range of poses and the AND of track detection δ_t and ID belief produce the β weights; FUSE accrues beliefs over time; track and ID information are fused to add new tracks or prune the number of tracks, and the resolution information refines the kinematic gate.]

Figure 6.2. SBDA Tracking Model.

The track updates with the fused classifications are used to perform the set-level track and ID.

By fusing track and ID confidences in a recursive manner, a global measure of targets and their tracks can

be output to the user or used to add new tracks or prune ghost tracks. Finally, the evidential accrual of beliefs

is used to temporally select the true ID from the AND function for each object hypothesis and the

resolutional information from the kinematic set is gathered to refine the kinematic gate (f(Sk )) to validate

MTI position hits.

The fusion levels in time and resolution are shown in Figure 6.2. As time moves to

the right the HRR range-bin features are processed from the track information. The set-level intersection

of tracking and target beliefs is fused to obtain confidence measures for target-track combinations.

The SBDA is an intelligent method which devotes attention to every believable measurement and

cycles through kinematic features until a target position and pose is reached using mutual information.

The filter assumes the past is summarized by an approximate sufficient statistic - state estimates

(approximate conditional mean) and covariances for each target. Each measurement consists of

kinematic and feature-ID measurements. The measurement information is sequenced and batched as

depicted in Figure 6.2, and the kinematic state, x, and ID feature, f, variables are separated. The target

state and true measurement equations are typically evaluated with known covariance matrices. Cluttered

measurements are uniformly distributed in the measurement space. Tracks are assumed initialized at an

initial state estimate x(0) for a plausible number of targets determined from the recursive update.

The measurement-to-target association probabilities are computed across the targets and these

probabilities are computed only for the latest set of measurements. The conditional probabilities of the

joint-target association events θ are computed at each measurement. A plausible elliptical validation region

V, with a gate threshold, is set up at every sampling time around the predicted measurement and is used to

select track-pose and target ID from the mutual information of the classification routine over a set of trained

poses. Measurements from one target can fall in the validation region of the neighboring target and constitute persistent interference.

All feature variables that carry information useful for discerning the correct measurement from the

incorrect ones are assumed to be included in the measurement vector. The belief filtering approach differs from

conventional algorithms in how kinematic measurements are used in the estimation of the kinematic state of the

correct target. The top grouping of Figure 6.3 shows the kinematic level, similar to the Probabilistic Data

Association Filter (PDAF) [2], with only a single kinematic feature to track. The belief filter, in the SBDA,

utilizes many features and can associate grouped features to object poses over time as shown in the correct target

tracks at the bottom of Figure 6.3.

[Figure 6.3: SBDA tracking with ID. The measurement space Z_i = {x, f1, f2, ..., fn, t} evolves over time; a single-feature track (no pose, no ID) in the ID-feature space is contrasted with the integrated feature track in the track space, where pose φ and ID features resolve closely spaced measurements over time and resolution.]

Figure 6.3. SBDA Tracking with ID.

Note, from Figure 6.3, if only the kinematic feature information is used, a data association error

could result from closely spaced measurements in a time constrained decision making process. However,

ID feature information, fal, can be used to associate the correct object type with the correct track, where a

is the feature amplitude and l is the feature location. Additionally, a belief-pose update from the MI of

object classification over a range of poses can help refine the pose estimate. Thus, simultaneously associating

object kinematic and classification features results in higher belief of true measurement-to-object value

and minimizes the target validation region, since solid objects inherently constrain distances between

features. The SBDA outputs a track-pose estimate, such as aspect angle, for each track given the sensor’s

azimuth and depression angle. By using pose, SBDA can associate the position of the object with the

aspect angle of the HRR sensor for feature target classification that gives a measure of the target belief and

plausibility. Additionally, the mutual information of ID/pose information updates a robust ID-pose

estimate to the tracker. However, the main advantage of mutual information is that the correct

measurement is determined in the ID event matrix.

The plausible object validation matrix is composed of binary elements that indicate if a measurement lies in the kinematic validation gate of a target. A joint association event consists of validated

associations of the kinematic and object-ID. By using an AND function over track and ID information,

the β weights reflect the combination of track and ID information. Note that if fewer measurements are

validated, it is likely that belief-pose will be similar to the track pose. Thus, the belief-pose update might

not be as large as in the JBPDA case. After selecting the position of choice by using an AND of track and

ID information, the kinematic state xjt is updated and the pose information aspect angle is assessed as

φ^t_j = tan^{-1}[(y^t_j(k+1) - y^t_j(k)) / (x^t_j(k+1) - x^t_j(k))]. The algorithm continues to track objects until the pose

measurement is updated for a fixed time step ∆k.

Set-level fusion includes not only rejection, but also the selection of plausible objects for each

track. Additionally, by accruing evidence of feature space, through classification, and time, through

tracking, evidence can be fused for object ID. The determination of the efficient object identity is the result

of set fusion from the track and identity updates. Once a plausible ID is confirmed for a set of objects, the

fused information updates the track information and recalculates the confidence measures. Recursively,

object sets are reduced and plausible tracks are continued, while ghost tracks are eliminated. After a

belief accumulates, fewer hypothesized object-event matrices will be investigated and the true ID will

result.

6.2 Fused Track and Identification State Estimation

Assuming the objects conditioned on the past observations are mutually independent, the

decoupled state estimation (filtering) of the marginal association probabilities, which are obtained from

the joint feature probabilities of track and object-ID belief, is obtained by summing over all joint events in

which the marginal event of interest occurs. The conditional probability of the event for the continuous

kinematic and discrete belief ID is related by “beta” [2] weights assigned to the event associations that are

validated by the kinematic and ID algorithms:

β_k(θ) ≜ P{θ_k | Z^k} • b^{o_{hyp}}_k(o_r | a_q l_q)_φ .        (6.1)

At this point, we discuss the differences between the Joint Belief-Probabilistic Data Association

(JBPDA) and the Set-based Data Association (SBDA). The key difference between the algorithms is that

the JBPDA uses the kinematic track information and only uses the set processing from the HRR

classification routine. Since states are updated for each time step k, the beliefs and kinematic states vary

over time. The SBDA uses evidential reasoning and thus, the beliefs in target types and the track

information is accrued over measurements. The second difference is that the JBPDA uses an “OR”

function for track and ID event associations such that if the ID system says the target is plausible, it is

retained for possible selection by the tracker. Likewise, the ID system processes kinematic measurements

that fall within the validated region of the tracker. The SBDA uses an ”AND” function between the

kinematic and ID sets to assess the joint association. Thus, only validated ID and validated kinematic

measurements are used. While the SBDA might not be as robust as the JBPDA, it would increase track

accuracy from evidence of object-type and track history. The third difference is processing time. The

JBPDA takes longer to cycle through all measurements in the validated region, while the SBDA just

processes MTI and HRR hits that are simultaneously validated.

6.3 Belief Filtering Cumulative Track and ID Recursive Belief Probability Measure

From the processing of the set information, SBDA fuses the sets of kinematic tracking

information and the sets for feature classification information (defined in Section 5.2). Figure 6.4 shows

the four sets of interest: signal, feature, object, and measurement. Note that the dotted circle defines the

kinematic and classification set of information, restricted to the measurement region of interest, and thus,

all measurement information intersects.

[Figure 6.4: the set-theory approach. Identification: signal space ϕ (HRR hits, signal set a_l) and feature space F (feature set Γ). Tracking: object space O (object set ξ, e.g., BMP2, T72, Truck) and measurement space Z (MTI hits, measurement set Σ). Mutual information links belief-pose and track-pose through the event matrix over {objects, pose}.]

Figure 6.4. Set Theory Approach to Tracking and Identification.

For the track and ID example, assume that we define the object beliefs for the sets of information:

b^{o_{hyp}}_k(o_r | a_q l_q)_φ ≜ p(Σ ⊆ ξ | Γ = ϕ)        (6.2)

where Σ is the measurement set, ξ is the object set which is composed of the measurements for each

object, Γ is the HRR feature set, and ϕ is the amplitude and location values. We can form sets of

information, h and g, for the kinematic measurements, z(x), object, o, HRR range-bin features, z(f), and

signals, a (shown in Figure 6.4):

The sets of kinematic information are:

hΣ (Z) = p(Σj = Z) - extracted set of kinematic measurements Σ from the set Z

gξ (o) = p(ξj = o) - extracted set of object measurements ξ from the set O

where O is the object kinematic information that completely intersects the Z measurements as was defined

by bounding the set of all possible kinematic measurements.

The sets for the object classification are the same as the JBPDA:

hΓ (F) = p(Γj = F) - extracted set of feature measurements Γ from the set F

gϕ (a l) = p(aj lj = ϕ) - extracted set of amplitude and location measurements from the set ϕ

where F is the feature matrix associated with the HRR target measurements. We assume that the MTI

indicator bounds the region such that kinematic set and the HRR feature set completely intersect.

6.3.1 Tracking Set Combinations

The object measurements are the kinematic measurements being extracted.

h_Σ(Z_O) = p(Σ_j = Z_O) and g_ξ(Z_O) = p(ξ_j = Z_O)        (6.3)

where O is the object kinematic information from the measurement set Z.

The quantity

Bel_{Z_o} = (h ⊕ g)(Z_o) ≜ (1/(1 - K)) ∑_{Σ∩ξ = Z_o} h(Σ) g(ξ)        (6.4)

uses Dempster's rule of combination, where K ≜ ∑_{Σ∩ξ = ∅} h(Σ) g(ξ) is called the kinematic conflict between the evidence h and the evidence g. In this case, Bel_h(Z) = p(Σ ⊆ Z) = β_Σ(Z) and Pl_h(Z) = p(Σ ∩ Z ≠ 0) = ρ_Σ(Z)

where β_Σ and ρ_Σ are the belief and plausibility measures of Σ, respectively. Likewise, we can construct independent random subsets, Σ, ξ, of U (the universal set) such that h(Z_o) = p(Σ = Z_o) and g(ξ) = p(ξ = Z_o) for all Z_o ⊆ U. Then, it is easy to show that

Bel_{Z_o} = (h ⊕ g)(Z_o) = p(Σ ∩ ξ | Σ ∩ ξ ≠ 0)        (6.5)

for all Zo ⊆ U. The above equation shows that the combined belief in the object measurements o of the

kinematic measurement space Z, is the probability of intersection of the extracted objects, ξ, and the

kinematic measurements, Σ. To determine which kinematic measurements belong in the kinematic set, Figure 6.5 shows that when a position measurement is within the kinematic validation gate (white circle), it is a valid measurement. As in the set-theory approach, more than one measurement may fall inside the gate.

[Figure 6.5: the set space for kinematic information. Tracks t1(O9) and t2(O6) with measurements z1 through z7; the validation row for track 2 is (0 1 1 1 0 0 0) with certainty C = 0.435, where certainty = 1 - the shaded kinematic-uncertainty area.]

Figure 6.5. Set space for Kinematic Information.
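As a generic sketch of the Dempster combination used in (6.4)-(6.5), assuming mass functions represented as Python dictionaries from frozenset hypotheses to masses (an illustration of the rule, not the dissertation's implementation):

def dempster_combine(h, g):
    combined, K = {}, 0.0
    for A, mA in h.items():
        for B, mB in g.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mA * mB
            else:
                K += mA * mB                  # conflict mass (the kinematic conflict)
    # renormalize by (1 - K), as in Dempster's rule
    return {A: m / (1.0 - K) for A, m in combined.items()}, K

# Example over hypothetical validated measurements:
h = {frozenset({"z2", "z3"}): 0.6, frozenset({"z2", "z3", "z4"}): 0.4}
g = {frozenset({"z4"}): 0.7, frozenset({"z2"}): 0.3}
bel, K = dempster_combine(h, g)               # K = 0.42 here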

To assess the kinematic uncertainty, we investigated a series of possibilities. One possible

method is to assume validated measurements are “certain” and non-validated measurements are

“certainly” not the kinematic information of the track. However, some of the non-validated position

measurements may be the kinematic information of the object for the track. Thus, the uncertainty is

geometrically related to the validation region. If the validated region is small, then the measurements in

the region are more certain relative to the propagated position of the tracker. If the kinematic gate is big,

the measurements inside are less certain, but validated. Thus, the selection of the kinematic gate defines

the certainty of the measurement. The area that is between the validated region and that surrounds all

measurements (shaded area) becomes the inverse of the uncertainty. The shaded area is the ellipse of a

measurements minus the ellipse of the validated measurements. If the uncertainty region is big, the

validated region has more certainty. If the uncertainty region is small, then the validated region is more uncertain. Thus, at one extreme, if the validated region contains only the propagated position, the certainty is 1, since the validated measurement is the same as the predicted measurement. At the other extreme, if the

position measurements are all outside the validation region, then the uncertainty is high, since no

measurements are validated. To represent the track uncertainty, the number of position measurements

outside the validation gate and the size of the validation gate are used.
U_T = [ ∑_{i=1}^{k} m^{⊄V}_i / m_k ] (V_All - V_gate),        (6.6)

where m^{⊄V}_i are measurements that lie outside the validation gate, m_k is the total number of measurements, and (V_All - V_gate) is the size of the region outside the gate.

varies between [0,1]. For example, if all measurements lie outside a small validation gate, the uncertainty

is high. If all measurements lie within a small validation gate, the uncertainty is zero. If the gate

surrounds all the measurements, the uncertainty is zero since none of the measurements lie outside the

gate.
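A minimal Python sketch of (6.6), assuming squared Mahalanobis distances for all m_k position measurements, the gate threshold γ, and the areas of the all-measurement and validation ellipses normalized so that U_T stays in [0, 1]:

import numpy as np

def track_uncertainty(d2, gamma, V_all, V_gate):
    frac_outside = np.mean(np.asarray(d2) > gamma)   # measurements outside the gate
    return frac_outside * (V_all - V_gate)           # scaled by the region outside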

6.3.2 Identification Set Combinations

We use the feature measurements for classification, as defined in Chapter 5, and the kinematic

measurements for identification as shown in Figure 6.4. The feature measurements F are the HRR

amplitude, a, and location, l, measurements being extracted from the set ϕ.

hΓ (Fla) = p(Γj = Fla) and gϕ (Fla) = p(ϕj = Fla)

where al is the object HRR amplitude and location information from the feature identification

measurement set F. The quantity

Bel^{to}_{F_{al}|m_k} = (h ⊕ g)(F_{la}) ≜ (1/(1 - L)) ∑_{Γ∩ϕ = F_{la}} h(Γ) g(ϕ) ,        (6.7)

uses Dempster’s rule of combination, where L =∆ ∑ h(Γ) g(ϕ) is called the classification conflict
Γ ∩ϕ = ∅

between the evidence m and evidence n. In this case, Belh(F) = p(Γ ⊆ F) = βΓ(F) and Plh(F) = p(Γ ∩ F ≠

0) = ρΓ(Z) where βΓ and ρΓ are the belief and plausibility measures of Γ, respectively. Likewise, we can

construct independent random subsets, Γ, ϕ, of U (the universal set) such that h(Fla ) = p(Γ =Fla ) and

g(ϕ) = p(ϕ = Fla ) for all Fla ⊆ U . Then, it is easy to show that

Bel_{F_{la}} = (h ⊕ g)(F_{la}) = p(Γ ∩ ϕ | Γ ∩ ϕ ≠ 0)        (6.8)

for all Fla ⊆ U. The above equation shows that the combined belief in the object location and amplitude

HRR measurements al of the feature measurement space F, is the probability of intersection of the

extracted feature set Γ and the HRR measurement ϕ.

Note, since the a posteriori probabilities sum to unity, the beliefs and the unknown class sum to unity for any given hypothesis:

Unk^{o_{hyp}}_{F_{la}|m_k} + ∑_{o=1}^{O_r} Bel^{o_{hyp}}_{F_{al}|m_k} = 1 .        (6.9)

To combine the object-belief transition from k to k+1, we complete the updates as Bel^t_{k+1} = M • Bel^t_k, where M represents the belief transitions among classes between time steps k and k+1.

Since the object belief is calculated, we can use the conflicting information [109] to determine the plausibility of the object-type for each track:

Pl^{to}_{F_{al}|m_k} = (h ⊗ g)(F_{al}) = p(Γ ∩ ϕ | Γ ∩ ϕ = 0).        (6.10)

From the believability and plausibility criteria, we have the ID interval of uncertainty:

U^{to}_{Bel_k} = [ Bel^{to}_{F_{al}|m_k} ; Pl^{to}_{F_{al}|m_k} ]        (6.11)

In order to assess the belief-set combinations, we use the mutual information to get a more robust classification over a range of poses, as shown in Figure 6.6.

[Figure 6.6: the kinematic gate with the line-of-sight (LOS) vector and the velocity vector; HRR pose/ID hypotheses are assessed over a ±10° range about the estimated pose φ.]

Figure 6.6. Using MI to assess a Range of Pose Estimates for HRR ID.

6.4 Mutual Information for Combining Sets of Information

Mutual information is used to assess a pose range of HRR trained data sets to the observed HRR

profile. The object detection problem is to determine a minimum necessary number of features [130,131]

to measure. The number can be determined from the peak amplitudes that constitute the object. In

addition, the association between the stored object HRR signatures O(A(a)) and the observation signature

Y(a) needs to be compared for the same relative size. These detection actions should provide the highest

belief and isolate the object information. After m measurements and comparisons to Y observations, a

measure of mutual information will determine the information-theoretic content. If a belief value is

achieved, a preliminary pose-articulation, or pose of an object, is determined which allows the

classification routine to extract features for initial object type matching. After matching the object type

with the object pose, a fused belief-ID confidence is achieved by way of relative entropy.

The primary feature extracted is mutual information on object pose-articulation. The motivation

for mutual information is that 1) it utilizes the measured probability density function of HRR signature

bins, 2) it can easily be adapted for learning, such as in the case of Hidden Markov models to remove

uncertainties in the problem space, and 3) object-pose information can be used for image registration and

the classification process. In the next section, we explore the use of relative entropy which is a more

general representation of mutual information. Relative entropy better represents a comparison between

the object observation and the stored set of HRR features. The mathematical models for object detection,

object-type classification, and object pose determination, are presented.

6.4.1 Mutual Information Model for Moving-Signature Detection

The classification of a set of objects can be achieved by maximization of mutual information,

which has been described by Viola and Wells [132]. The goal is to obtain a learned estimate of the

association, A, that associates the object-measured pose-articulation, O = f(a), and the detected object

observation Y = f(a), by maximizing their mutual information over association estimate:

Â = max_A [ I(O(a), Y(A(a))) ]        (6.12)

where a is a variable of peak amplitudes that ranges over the HRR signature. For the implementation, we

loaded in the HRR signature bins and stored them in a 1 x 101 vector and compared it to that of the stored

vector of information where the observed and trained profiles were referenced to the same center location

bin. Furthermore, we compared a set of HRR signatures over a range of predicted poses. By comparing a range of poses, we tried to reduce the uncertainty attributed to pose selection and increase the mutual

information of the observation to the trained set, to increase the accuracy of the belief-pose estimate.

Mutual information, defined using entropy, is

I(O(a); Y(A(a))) = h(O(a)) + h(Y(A(a))) - h(O(a), Y(A(a)))        (6.13)

where h(·) is the differential entropy of a continuous random variable, defined as:

h(a) = - ∫ p_a(a) log(p_a(a)) da .        (6.14)

Given the variable measurements in a signature, information on a referenced amplitude a and pose φ can be used as a feature of pose-articulation, or independently as object amplitude. The joint entropy of the two variables a and φ is

h(a, φ) = - ∫∫ p_{a,φ}(a, φ) log(p_{a,φ}(a, φ)) da dφ .        (6.15)

Mutual information can also be expressed in terms of conditional entropy as:

I(O(a); Y(A(a))) = h(Y(A(a))) - h(Y(A(a)) | O(a))        (6.16)

Conditional entropy, h(φ|a), is a measure of uncertainty, variability, or complexity. In this case, we use

the uncertainty representation for the inverse of the confidence.

Information, in the association problem, is divided into three characteristic functions:

1) The entropy of the object HRR profile, which is independent of A,

2) Entropy of the signature which the object is associated with, and

3) The negative joint entropy of the observation Y with the Object O.

A large negative value exists when the object and the observation are functionally related by

mutual information. Basically, the algorithm learns associations where the observation Y explains the

object O pose-articulation above a desired threshold. Hence, (6.15) and (6.16) are learned associations for

complexity reduction. Furthermore, since the observation was gathered from the track pose, the

comparison to the database is an association to a belief-pose to track-pose as well as object HRR

observation to trained HRR object data set.

Viola [132] used a stochastic gradient descent method to seek a local maximum of the mutual

information criterion. The method employs histograms to approximate entropies and their derivatives.

For this dissertation, we utilize the histograms to approximate the entropies, as shown in Figure 6.7.

Additionally, we use 5 profiles of HRR poses of the observed object and the trained object. The process

gives the hypothesis over the pose measurements and hypothesized objects.


Figure 6.7. Histogram of Probabilities of Amplitudes from HRR profile.
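A minimal Python sketch of the histogram approximation, assuming two aligned HRR amplitude vectors (e.g., 1 x 101); it evaluates the mutual information of (6.13) directly from the estimated joint and marginal pmfs:

import numpy as np

def mutual_information(obs, trained, bins=32):
    joint, _, _ = np.histogram2d(obs, trained, bins=bins)
    p_ab = joint / joint.sum()                 # joint pmf of amplitude pairs
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal pmfs
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0                              # zero cells contribute nothing
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))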

6.4.2 Relative Entropy Information Metric for Moving-HRR Signature Detection

The object classification problem can be formulated as a multiple-hypothesis testing problem [133,134]. In order to perform the hypothesis test, we need to alter the mutual information metric. Mutual information is a special case of a more general quantity called relative entropy, D(p||q), which is the distance between two probability mass functions p(a) and q(a) [135]. The relative entropy is defined as:

D(p||q) = ∑_a p(a) log [ p(a) / q(a) ]        (6.17)

The relative entropy can serve as a metric since it is always non-negative and if p(a) = q(a), it is

zero. The relative entropy can also be viewed as the exponent in the probability of error in a hypothesis

test between the distributions p(a) and q(a). Thus, the deviation of the probability corresponds to the

distance between the match of probability functions.
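A minimal Python sketch of (6.17), assuming p and q are nonnegative arrays normalized to sum to one; the small eps guards against a zero trained-bin mass:

import numpy as np

def relative_entropy(p, q, eps=1e-12):
    p, q = np.asarray(p, float), np.asarray(q, float)
    nz = p > 0                                 # terms with p(a) = 0 contribute zero
    return float(np.sum(p[nz] * np.log(p[nz] / (q[nz] + eps))))

As the text notes, the value is zero exactly when the observed and trained pmfs match.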

The relative entropy metric is used as a relation of probabilities associated with the object

observation and a trained observation signature set. For each of the simulated scenarios, the signature of

the object was captured, centered, and compared to the trained data set for a set of articulation and object

type. Since

I(O(a); Y(A(a))) = ∑_{a} ∑_{A∈Α} p(O(a), Y(A(a))) log [ p(O(a), Y(A(a))) / (p(O(a)) p(Y(A(a)))) ]        (6.18)

Then, for our signature analysis, we have

I(O(a); Y(A(a))) = D( p(O(a), Y(A(a))) || p(O(a)) p(Y(A(a))) )        (6.19)

which is a relative entropy metric between the object and the search of the learned data set to match a pose

angle with an object type.

The mathematical algorithm for measurement processing is similar to a system with independent

hypotheses [133]. Each individual hypothesis test, denoted Hf, is referenced to the Yth signature set

observation and simply states “the observation contains an object articulation feature”. H hypotheses are

postulated, one for each feature f = 1,…, F, of which we concentrate on articulation classification in this

analysis. In I -1 signatures, Hf is false; and in one signature Hf is true. Let f denote the stage of the

detection, where f = 0,1,…, F.

At every stage f > 0, a sensor takes an observation, centers the signature, makes bin

measurements, and compares the bin amplitudes in an observation signature I through the use of entropy

metrics. By convention, let the measurement outcome y(t) = 1 denote a perfect articulation object

correlation and y(t) = 0 denote no detection, Pd. Measurements, which are independent from stage to

stage, have a probability density that is conditioned on the presence or absence of the object and depends

on the probabilities of false alarm Pfa and missed detection Pmd.

Let I(t) = {(i(s), y(s)), s = 0,…,k} be the total information available at stage k, consisting of (i, y)

measurement pairs, i(s) being the sample bin feature and y(s) the realized measurement at each epoch

through stage k for the feature f. Now let Bel(f) = [Belk(t)] = [f1(t), f2(t), … ,fK(t)]T denote the vector of

conditional probability of object estimates for the combination of signature bins for time t. The summed

Belk(t) is the total conditional probability that Hk is true given the accumulated measurements in signature

bins k through stage K, i.e. Belk(t) = P(Hk | I(t)). Denote Bel(0) = [fk(0)] as the vector of initial

probabilities. Assuming that object hypotheses are independent across signatures, values measured in bin

k affect that signature's pose hypothesis and no other. The independence assumption allows multiple object

hypotheses to be true at the same time.

Focusing on bin features f, two cases present themselves corresponding to whether the

measurement of feature f +1 is useful for a classification or not. Bayes' Rule governs the assimilation of

the measurements, where Belj(t) is our estimate for the conditional probability of feature f before the

measurement in f is processed:

Detection:

Bel_o(f + 1) = P(H_o(f) | I(f + 1)) = P(H_o(f) | (i(f+1) = j, y(f+1) = detection), I(f))        (6.20)

= P(object in O | detection of o, I(f))

= [ P(detection of o | object in O) • P(object in O | I(f)) ] / [ P(detection of o | object in O) • P(object in O | I(f)) + P(detection of o | no object in O) • P(no object in O | I(f)) ]

By analogy with the above equation, Bayes' update of Bel_j(f) for the case when the sensor does not report an object is:

No detection:

Belo (f +1) = P(target in o | no detection of o, I(f)) (6.21)

Note in general that the sum of Belk(f) values across all I signatures is not unity.
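A minimal Python sketch of this assimilation, assuming a scalar belief bel = P(H | I) and hypothetical sensor statistics Pd (probability of detection given the object is present) and Pfa (probability of a detection report given it is absent):

def bayes_update(bel, detected, Pd=0.9, Pfa=0.1):
    if detected:                               # detection case, cf. (6.20)
        num, alt = Pd * bel, Pfa * (1.0 - bel)
    else:                                      # no-detection case, cf. (6.21)
        num, alt = (1.0 - Pd) * bel, (1.0 - Pfa) * (1.0 - bel)
    return num / (num + alt)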

6.4.3 Articulation and Object Type Classification Methods

Direct detection is an uninformed method which devotes equal attention to every bin in the

signature. The procedure is to choose a starting signature I* and advance through the signature in the

same bin order each time, taking measurements per signature, processing it to update Belk(f) for that

signature, and then advancing to the next signature in the predetermined bin sequence. When the

signature is completed, the pattern is repeated, starting over with the signature I*. A run is completed

when Y observations have been compared and processed for the set of HRR signatures. In order to ensure

equal numbers of observations in each signature, O is chosen to be a multiple of F so only complete

signatures of object measurements occur. Direct detection implements more simply than the other

detection methods because its cyclic predetermined detection pattern obviates decisions about how to

advance through the signature.

Another detection method, the belief-association rule, attempts to shorten the time required to

determine the object pose by following an “informed” policy from the tracking ID algorithm. The

association rule’s detection policy is to compare measured information only to the most plausible set of

signatures and features, that is, the signature with the highest belief for an object, as was shown from the

event matrix output, since the STaF algorithm trained the feature sets. Since all signatures, containing a

known entropy, are equally likely at inception, the detection procedure begins by choosing a signature I*

within the pose range of the trained set, comparing the relative entropy to the observation to update

Belk(f) using a belief fusion method. If I* is a non-object signature and the measurement does not indicate a

plausible object, Belk(f) immediately falls below the Belk(f) of every other signature through a conflict

function, thereby allowing the classification to advance to the signatures that now have the largest Belk(f).

If multiple signatures have equally large Belk(f)’s, as they will at the beginning of a run, random choice is

used to break the ties. If I* is a non-object signature and the measurement indicates object (a false alarm

occurs), Belk(f) will increase leaving it somewhat larger than the other Belk(f). As more measurements

are added in the signatures I*, Belk(f) will eventually fall below the other Belk(f) and thereby allow

other signatures to increase the probability of mutual information. Analogous arguments apply when I* is an object signature. At any time k, the comparisons of relative entropy can be used to classify the object.

If I is the object signature, the relative entropy is small and the correct target ID is used. Hence, fewer

object hypothesis matrices are used.

After mutual information is determined by the relative entropy criterion, the belief states are

updated and the maximum belief, if greater than those of all the other objects, updates the pose. Furthermore,

the relative entropy for combining set information over space is combined with the uncertainty over time

to weight the measurement information, which is used in the innovation, and hence affects the β weights.

6.5 Set-Based Event Matrix

From the kinematic set and the classification-ID set obtained from the mutual information and STaF algorithms, the event matrix can be determined. The difference between the JBPDA and the SBDA is that the event matrix is an "AND" function between the ID and track event matrices. Figure 6.8 shows how

the event matrix is processed.

[Figure 6.8: the same event association matrix layout as Figure 5.6 (columns T1/Bel1 and T2/Bel2, rows z1 through z5), with entries now formed by the "AND" of the kinematic-gate and belief-ID indicators.]

Figure 6.8. Tracking and Classification Joint Association –circles indicate an “AND”.

From the event information, we can see that the objects are associated to a track immediately

after completing the event matrix. To assess the quality of the assignment, we utilize mutual information

at each step through the relative entropy or differential entropy criterion. Figure 6.9 illustrates the AND

approach.

Kinematic = 0 (Reject), Belief ID = 0 (Reject)  ->  Track/ID = 0 (Reject)
Kinematic = 0 (Reject), Belief ID = 1 (Keep)    ->  Track/ID = 0 (Reject)
Kinematic = 1 (Keep),   Belief ID = 0 (Reject)  ->  Track/ID = 0 (Reject)
Kinematic = 1 (Keep),   Belief ID = 1 (Keep)    ->  Track/ID = 1 (Keep)

Figure 6.9. Believable Events for the Association matrix.

Note, SBDA only accepts believable measurements that lie inside the validation kinematic gate.
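The corresponding Python sketch is the Chapter 5 "OR" fragment with logical AND substituted, again over hypothetical binary gate and belief arrays:

import numpy as np

def sbda_event_matrix(gate, belief):
    # SBDA "AND": keep only pairs validated by BOTH the kinematic gate
    # and the belief-ID gate.
    return (gate.astype(bool) & belief.astype(bool)).astype(int)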

For the determination of the weights assigned to these associations, we need to set up the state

and probability values. A believable track-classification association event has

i) a single object-type measurement from a source:

∑_{o=0}^{O_n} ω̂_{jot}(θ^i_{jot}) = 1   ∀ j ,        (6.22)

ii) and at most one object-type measurement belief originating from an object for a given track:

δ_t(θ) = ∑_{j=1}^{m_k} ω̂_{jot}(θ^i_{jot}) ≤ 1        (6.23)

The generation of event matrices, Ω̂, for each track, corresponding to believable events can be

done by scanning Ω and picking one unit/row and one unit/column for the estimated set of tracks except

for t = 0. In the case that SBDA has generated event matrices for an estimated number of tracks with

different object-types, SBDA needs to assess the combination of feature measurements from the global set

of feature measurements to infer the correct number of tracked objects that comprise the set. The binary

variable δt( θjotk) is called the track detection indicator [2] since it indicates whether a measurement is

associated with the object o and the track t in event θjotk, i.e. whether it has been detected. Once the

detection has been determined, SBDA assesses the uncertainty information.

6.6 Tracking and ID Set Combinations

Given the measurements above, the global intersection of the tracking and identification uncertainty information is the global combined belief:

$\mathrm{Bel}F_{la;Z_o} = (h \oplus g)(F_{la}) \cap (h \oplus g)(Z_o)$

$\mathrm{Bel}F_{la;Z_o} = p(\Gamma \cap \varphi \mid \Gamma \cap \varphi \neq 0) \cdot p(\Sigma \cap \xi \mid \Sigma \cap \xi \neq 0)$   (6.24)

Since all of the sets of information are available, SBDA considers the global belief function:

$\mathrm{Bel}F_{la;Z_o} = (h \oplus g)(F_{la})\,(h \oplus g)(Z_o)$

which is defined as:

$\mathrm{Bel}F_{la;Z_o} = \left[\dfrac{1}{1-K}\sum_{\Sigma \cap \xi = Z_o} h(\Sigma)\,g(\xi)\right]\left[\dfrac{1}{1-L}\sum_{\Gamma \cap \varphi = F_{la}} h(\Gamma)\,g(\varphi)\right]$   (6.25)

Note, (1 – L)(1 – K) captures the interval of certainty.
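As an illustration of the combination in Eq. (6.25), the following MATLAB sketch fuses two mass assignments over singleton hypotheses with Dempster's rule; the mass values are illustrative only, and the focal elements are assumed to be singletons.

    % Dempster combination of two normalized mass vectors over singleton
    % object hypotheses (illustrative values).
    h = [0.6 0.3 0.1];              % masses from the location features
    g = [0.5 0.2 0.3];              % masses from the object-type evidence

    % Conflict K: mass assigned to pairs of different (empty-intersection)
    % hypotheses; for singletons, K = 1 - sum_i h(i)g(i).
    K = 1 - sum(h .* g);

    % Combined, renormalized masses, as in Eq. (6.25): agreement / (1 - K).
    m_comb = (h .* g) / (1 - K);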

Since SBDA is concerned with the belief in the target from the amplitude and location measurements:

$b_k^{hyp}(o_r \mid a_q l_q)_{\varphi_i} \triangleq p(\Sigma \subseteq \xi \mid \Gamma = \varphi) = h_{\varphi}(a_q l_q) \oplus g_{\xi}(o) = \mathrm{Bel}F_{la;Z_o} = P(\varphi_j \mid \xi_i)$   (6.26)

where $\varphi_j$ is the feature amplitude-location pair and $\xi_i$ is the kinematic track update, β, for object $o_i$.

The weights for the track-ID event association matrix are:

$\beta_{jt}(k) \triangleq P\{\varphi_o^{hyp} \mid \xi^k\} \cdot P\{\theta \mid Z^k\} = P\{\varphi_j^{o\,hyp} \mid Z_t^k = i\}$   (6.27)

and since the global belief is calculated, SBDA can use the conflicting information to determine the plausibility:

$\mathrm{Pl}F_{la;Z_o} = (h \otimes g)(F_{la}) \cap (h \otimes g)(Z_o) = p(\Gamma \cap \varphi \mid \Gamma \cap \varphi = 0) \cdot p(\Sigma \cap \xi \mid \Sigma \cap \xi = 0)$   (6.28)

From the believability and plausibility criteria, SBDA has the global interval of uncertainty for the combined track and ID information:

$U_Z^{o}(k) = \left[\mathrm{Bel}F_{la;Z_o}\,;\ \mathrm{Pl}F_{la;Z_o}\right]$   (6.29)

6.7 Track and ID Recursive Confidence and Uncertainty Measures

6.7.1 Temporal-Spatial Information Fusion for Disjoint Data

From the analysis of the tracking information and using the recursive methods of Hong, we have a spatial-temporal information fusion for disjoint uncertainty:

$U_{Z_{cum}}^{O}(k) = \dfrac{U_{Z_{cum}}^{O}(k-1)\, U_{Z}^{O}(k)}{1 - \kappa_{Z}^{O}(k)}$   (6.30)

where $\kappa_{Z}^{O}(k) \triangleq \sum_{\Sigma \cap \xi = \varnothing} m(\Sigma)_{k-1}\, n(\xi)_{k-1}\, m(\Sigma)_{k}\, n(\xi)_{k}$ for object O of ξ from measurement Z. Also, note that SBDA accumulates the uncertainty from the previous measurement and the current measurement to get $Z_{cum}$.
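A minimal sketch of this recursion is shown below, assuming the per-scan uncertainty widths and conflict values have already been computed; all numbers are illustrative.

    % Recursive disjoint-uncertainty accumulation of Eq. (6.30).
    U_meas   = [0.8 0.6 0.5 0.4];   % per-scan uncertainty-interval widths
    conflict = [0.1 0.2 0.1 0.3];   % per-scan conflict kappa_Z(k)

    U_cum = 1.0;                    % vacuous uncertainty before any data
    for k = 1:numel(U_meas)
        U_cum = (U_cum * U_meas(k)) / (1 - conflict(k));
    end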

Likewise, we show for the object analysis that the update for each set of features $a_l$ is:

$U_{F_{cum}}^{i;j}(k) = \dfrac{U_{F}^{i}(k)\, U_{F}^{j}(k)}{1 - \sum_{j=1}^{n}\sum_{i=1,\, i \neq j}^{n} b_k(a_i l_i)\, b_k(a_j l_j)}$ for target O of ξ from measurement Z,   (6.31)

so the recursive calculation of the uncertainty for each object O of the measurement al, which is propagated with the object, is:

$U_{F_{cum}}^{i;j}(k) = \dfrac{U_{F}^{i}(k)\, U_{F}^{j}(k)\, U_{F^{-1}}^{i}(k-1)\, U_{F^{-1}}^{j}(k-1)}{1 - \sum_{j=1}^{n}\sum_{i=1,\, i \neq j}^{n} b_k(a_i l_i)\, b_k(a_j l_j)\, b_{k-1}(a_i l_i)\, b_{k-1}(a_j l_j)}$   (6.32)

where $F^{-1}$ represents the feature set of the previous measurement.

So the uncertainty associated with object feature i from hypothesis O and the measurement Z is:

$U_{O_{hyp}}^{j}(k) = \dfrac{U_{O}^{la}(k)\, U_{Z}^{O}(k)}{1 - K_{Z}^{O}(k) - L_{F}^{la}(k) + K_{Z}^{O}(k)\, L_{F}^{la}(k)}$   (6.33)

and recursively this can be updated for each object O:

$U_{O_{hyp}}^{j_{cum}}(k) = \dfrac{U_{O_{hyp}}^{j_{cum}}(k-1)\, U_{O_{hyp}}^{j}(k)}{1 - M_{O_{hyp}}^{j}(k)}$   (6.34)

where $M_{O_{hyp}}^{j}(k)$ captures the summation of the target and identification conflicting values. Moreover, SBDA can get a global confidence function:

$U_{O_{hyp}}^{j_{cum}}(k) = 1 - C_{O_{hyp}}^{j}(k)$

$C_{O_{hyp}}^{j_{cum}}(k) = 1 - \dfrac{U_{O_{hyp}}^{j_{cum}}(k-1)\, U_{O_{hyp}}^{j}(k)}{1 - M_{O_{hyp}}^{j}(k)}$   (6.35)

This global confidence function, based on the believability and plausibility criteria, is then used to assess the track quality given the measurement-track association confidence and the HRR feature classification of the object. Since the value is recursive, it can be used in evidential accrual to alter the weights of the innovation matrix for object-state kinematic updates. If the confidence is high and the uncertainty is low, then the recursive filtering update is weighted less, which captures the incomplete knowledge in tracking. By capturing incomplete knowledge, we show a novel tracking algorithm that uses identity beliefs to alter the estimation as well as the prediction of the new object location. To update the certainty and uncertainty models, SBDA stores the values in vectors associated with each object (i.e., $U_{O_{hyp}}^{O}(k)$ and $C_{O_{hyp}}^{O}(k)$). For each unknown object center-of-gravity measurement, there exists an associated certainty and uncertainty vector that represents the object-classification beliefs, as shown in Chapter 5. The uncertainty of uncertainty is a stochastic uncertainty that is based on the belief uncertainty. Now, SBDA weights the beta contributions to reflect the confidence and uncertainty information.

6.7.2 Temporal-Spatial Information Fusion for Joint Data

The "FUSE" algorithm is used for joint data calculations. For the mass probability updates, we have the table of information as shown for n sensors (n = 2 for HRR and MTI), but we can have any number of position hits for the sensors at time k, defined spatially as i = 1, …, mk.

Table 6.1. FUSE Algorithm Table (all masses carry the sensor superscript i).

                m1k           m2k           …    mMk           Uk
  m1,k-1   m1,k-1·m1k   m1,k-1·m2k   …    m1,k-1·mMk   m1,k-1·Uk
  m2,k-1   m2,k-1·m1k   m2,k-1·m2k   …    m2,k-1·mMk   m2,k-1·Uk
  :             :             :                 :             :
  mM,k-1   mM,k-1·m1k   mM,k-1·m2k   …    mM,k-1·mMk   mM,k-1·Uk
  Uk-1     Uk-1·m1k     Uk-1·m2k     …    Uk-1·mMk     Uk-1·Uk

When the elements in the measurement data are joint, the formulas from the previous section (Section 6.7.1) do not apply since the grouping of terms in the table is case dependent. In order to perform the FUSE function, a more general approach is taken which combines the cumulative information $m_{k-1}$ with the sensory information provided by the MTI and HRR at the kth moment. To determine the redundancy of the data, SBDA uses the FUSE algorithm [70] developed in MATLAB. Assume SBDA is given MTI and HRR information to be fused and each sensor measurement contains a vector of the form $m_{Mk}^{i}$ for i = 1, 2 states and evidence values containing:

$\mathrm{data}_i = \begin{bmatrix} \mathrm{state}_i^1 & \mathrm{value}_i^1 \\ \mathrm{state}_i^2 & \mathrm{value}_i^2 \\ \mathrm{state}_i^3 & \mathrm{value}_i^3 \\ \vdots & \vdots \\ \mathrm{state}_i^m & \mathrm{value}_i^m \end{bmatrix}, \quad i = 1, 2$

The fusion of the two data vectors, $\mathrm{data}_i$, i = 1, 2, can be done by calling the FUSE algorithm (Appendix B):

(new_data, new_length) = FUSE(truncation, data1, length1, data2, length2)

where data1 and data2 are the data states and length1 and length2 are the numbers of states, respectively. The truncation is the threshold determining which small evidence values should be discarded. The routine can be applied recursively to obtain the useful information from the joint sets of information, as shown in Figure 6.10. The sets of information are: MTI kinematic position, HRR classification and beliefs, and the MI pose-beliefs.
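A hypothetical usage sketch of the FUSE call follows; the state indices and evidence masses are illustrative, not measured values.

    % Hypothetical use of the FUSE routine (Appendix B) for one track update.
    % Each data vector holds [state, evidence-mass] rows; masses below the
    % truncation threshold are discarded inside FUSE.
    truncation = 0.01;
    data_mti   = [1 0.70; 2 0.20; 3 0.10];   % MTI kinematic-position evidence
    data_hrr   = [1 0.55; 2 0.35; 3 0.10];   % HRR classification-belief evidence

    [fused, n_fused] = FUSE(truncation, data_mti, size(data_mti,1), ...
                            data_hrr, size(data_hrr,1));

    % Applied recursively (Figure 6.10): fold in the MI pose-beliefs next.
    data_mi = [1 0.60; 3 0.40];
    [fused, n_fused] = FUSE(truncation, fused, n_fused, data_mi, size(data_mi,1));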

[Figure: recursive pairwise fusion of the information sets — MTI(1) and MTI(2) fuse to MTI(1,2), then with MTI(3), and so on through MTI(1,N); the HRR and MI sets follow the same pattern]

Figure 6.10. Calculating Joint Information.

The relation for spatial-temporal information fusion for joint uncertainty is:

$U_{Z_{cum}}^{O}(k) = \dfrac{G\big(\Sigma\,(U_{Z_{cum}}^{O}(k-1)\, U_{Z}^{O}(k)),\ j \in m_k\big)}{1 - \kappa_{Z}^{O}(k)}$   (6.36)

where $\kappa_{Z}^{O}(k) \triangleq \sum_{\Sigma \cap \xi = \varnothing} m(\Sigma)_{k-1}\, m(\Sigma)_{k}\, n(\xi)_{k}\, n(\xi)_{k+1}$ for object O of ξ from measurement Z.

6.8 Track Update and Propagation

We have defined the pose, feature, and object-trained sets for fusion; now we restate the propagation of the track set of information for the recursive updates to the belief filter. As before, we determine the covariance propagation as:

$P_{k|k-1}^{t} = F_{k-1}^{t}\, P_{k-1}^{t}\, (F_{k-1}^{t})^{T} + \bar{Q}_{k-1}^{t}, \quad \text{where } \bar{Q}_k = \begin{bmatrix} Q_k & 0 \\ 0 & B_k \end{bmatrix}$   (6.37)

for each track t.

We can obtain the innovation covariance $S_k$ with the associated $R_k$ and measured $D_k$ by:

$S_k^{t} = H_k^{ot}\, P_{k|k-1}^{t}\, (H_k^{ot})^{T} + \bar{R}_k, \quad \text{where } \bar{R}_k = \begin{bmatrix} R_k & 0 \\ 0 & D_k \end{bmatrix}$   (6.38)

Since $S_k$ is the innovation covariance update, we can use $S_k$ to validate measurements based on the uncertainty with the associated track and ID beliefs.

Validation:

At k, two measurements are available for object o for a given track t: $z_{k-1}^{T}$ and $z_{k}^{T}$, from which position, velocity, pose, and ID features can be extracted from the belief track vectors. Validation, based on track and ID information, is performed to determine which track-belief measurements fall into the kinematic region of interest. Validation can be described as

$(z_k^{lt} - \hat{z}_{k|k-1}^{t})^{T}\, [S_k^{t}]^{-1}\, (z_k^{lt} - \hat{z}_{k|k-1}^{t}) \le \gamma, \quad l = 1, \ldots, m_k^{o}$   (6.39)

where γ is a validation threshold obtained from a χ² table for 17 degrees of freedom, and $S_k$ stands for the largest among the predicted track belief covariances, i.e., $\det(S_k) \ge \det(S_k^{t})$ for t = 1, 2, …, n, where n is the number of states; $\hat{z}_{k|k-1}$ is a combined predicted track belief given by $E\{z_k \mid \{\beta_s\}_{o=1}^{s}, Z^{k-1}\}$, where s is the set of object beliefs for a track.
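The gate test of Eq. (6.39) reduces to a Mahalanobis-distance comparison; a two-state MATLAB sketch is given below (the dissertation's augmented vector has d = 17), with hypothetical measurement values.

    % Chi-square validation gate of Eq. (6.39); 2-state example for brevity.
    z     = [105; 9950];           % candidate measurement (hypothetical)
    z_hat = [100; 10000];          % predicted belief-track measurement
    S     = diag([50^2, 50^2]);    % innovation covariance
    gamma = 5.99;                  % chi2inv(0.95, 2); approx. 27.59 for d = 17

    d2 = (z - z_hat)' * (S \ (z - z_hat));   % normalized innovation squared
    validated = (d2 <= gamma);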

Data association for $\beta_l^{t}$:

Data association, performed for each belief object-track in the kinematic region, is similar to that in PDA and the details can be found in [2]. The association probabilities for l validated object measurements are

$\beta_l^{t} = \dfrac{e_l^{t}}{b + \sum_{l=1}^{m_k^{o}} e_l^{t}}, \quad l = 1, 2, \ldots, m_k^{o}$   (6.40)

$\beta_0^{t} = \dfrac{b}{b + \sum_{l=1}^{m_k^{o}} e_l^{t}}$   (6.41)

where $e_l^{t} = P_G^{-1}\, N(0, S_k^{t})$   (6.42)

$b = m_k^{o}\,(1 - P_D P_G)\,[P_D P_G V_k]^{-1}$   (6.43)

where $m_k^{o}$ is the number of validated object measurements, $P_G$ is the probability that augmented belief-track measurements fall into the validation region, and $P_D$ is a detection probability. The volume of the validation gate is

$V_k = C_d\, \gamma^{d/2}\, |S_k|^{1/2}$   (6.44)

where $C_d$ is the volume of the unit hypersphere of dimension d, and d = 17 is the dimension of the augmented belief-track measurement [4 kinematic states, 1 pose, 11 object belief states, 1 unknown-belief state].
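A short sketch of Eqs. (6.40)-(6.43) follows; the likelihoods, gate volume, and probabilities are illustrative placeholders.

    % PDA-style association weights of Eqs. (6.40)-(6.43) (illustrative values).
    PD = 0.9; PG = 0.95;             % detection and gating probabilities
    Vk = 1.2e4;                      % validation-gate volume from Eq. (6.44)
    e  = [0.8e-4, 2.5e-4, 0.4e-4];   % likelihoods of m = 3 validated hits

    b     = numel(e) * (1 - PD*PG) / (PD*PG*Vk);   % Eq. (6.43)
    beta  = e / (b + sum(e));        % beta_l, l = 1..m   (Eq. 6.40)
    beta0 = b / (b + sum(e));        % beta_0, no-measurement weight (Eq. 6.41)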

Kinematic belief-probabilistic update:

The object belief-probabilistic track update is performed as a full-rate system to combine the state, innovation, and covariances:

$\nu(k) = \sum_{i=1}^{m(k)} \dfrac{C_i(k)}{D(O_i \parallel Y)}\, \nu_i(k)$   (6.45)

where C is the belief-derived confidence measure that captures uncertainty (1 – U) in the tracking and D captures the relative entropy between the ID information. If the relative entropy is low, the value is high, and if (1 – U) is high, β is weighted more. Note, this approach is similar to a Kalman filter with the updating of an innovation weight. As opposed to basing the weight on the residual or covariance information, it is based on the interval of uncertainty and the mutual information match. The propagation equations for the state and covariance information are:


$\hat{X}_{k|k}^{t} = \hat{X}_{k-1|k-1}^{t} + W_k^{t} \sum_{l=1}^{m_k^{o}} \beta_{lk}^{t}\, \nu_{lk}^{t}$   (6.46)

and

$P_{k|k}^{t} = \beta_0^{t}\, P_{k|k-1}^{t} + (1 - \beta_0^{t})\, P_{k|k}^{*t} + W_k^{t}\left[\sum_{l=1}^{m_k^{o}} \beta_{lk}^{t}\, \nu_{lk}^{t}\, [\nu_{lk}^{t}]^{T} - \nu_k^{t}\, [\nu_k^{t}]^{T}\right](W_k^{t})^{T}$   (6.47)

where $P_{k|k}^{*t} = [I - W_k^{t}\, H_k^{ot}]\, P_{k|k-1}^{t}$ and $\nu_k^{t} = \sum_{l=1}^{m_k^{o}} \beta_{lk}^{t}\, \nu_{lk}^{t}$   (6.48)

and $W_k^{t} = P_{k|k-1}^{t}\, [H_k^{ot}]^{T}\, (S_k^{t})^{-1}$   (6.49)

where $H_k^{ot}$ is the measurement matrix that is calculated for each object pose, φ, and estimated position of track t. Thus, we have a complete tracking and ID algorithm which utilizes set-theoretic information from the HRR tracker and classifier.
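To summarize the update step, a small MATLAB sketch of Eqs. (6.46)-(6.49) for a single track is given below; the two-state model, innovations, and weights are hypothetical.

    % Belief-weighted state update of Eqs. (6.46)-(6.49), single-track sketch.
    X    = [100; 10];               % prior state estimate X_{k-1|k-1}
    P    = diag([25, 4]);           % prior covariance
    H    = [1 0];                   % measurement matrix (position only)
    R    = 50;                      % measurement noise
    S    = H*P*H' + R;              % innovation covariance (Eq. 6.38)
    W    = P*H' / S;                % gain W_k (Eq. 6.49)

    nu   = [3; -5; 8];              % innovations of the validated measurements
    beta = [0.5; 0.2; 0.2];         % association weights (beta0 = 0.1 implied)
    nu_c = sum(beta .* nu);         % combined innovation (Eq. 6.48)

    X    = X + W * nu_c;            % state update (Eq. 6.46)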

6.9 Summary of Chapter 6

The purpose of this chapter was to introduce the set-based data association (SBDA) algorithm, which combines track and ID information in an evidential approach. The evidential approach was used to estimate object beliefs. To obtain a target belief, we used mutual information to associate an object ID over a range of poses. The pose update and the choice of measurements were obtained from the object ID and beta weights, respectively. The beta weight reflects the validated kinematic ID information. To propagate uncertainty, a disjoint value was determined as well as a joint value. Finally, the propagation of information was similar to the JBPDA with a modification to the innovation using the relative entropy metric. For both the tracking and the ID systems, we derived the mass probability belief updates for the kinematic and ID sets. We then fused these set beliefs into a recursive simultaneous tracking and ID belief function. Chapter 7 will present results for both the JBPDA and the SBDA algorithms.

CHAPTER 7 BELIEF FILTER TRACKING AND ID RESULTS

Results include the implementation of the JBPDA and the SBDA from Chapters 5 and 6, respectively. To investigate the tracking and ID methods, an interface was designed to allow for HRR tracking belief filter parameter selection. As shown in Figure 7.1, the interface, written in MATLAB, facilitates a thorough investigation of SAR and HRR tracking and ID simulations for a variety of tracking scenarios with a varied number of targets and target types. The interface consists of three parts. The top part presents tracking and ID parameter selections, while the middle part displays the results. The bottom part includes the ID results with the probability of correct ID, the probability of false alarm, and the certainty values at the lower right. The track results and the certainty or confidence values, assessed at each track-update time interval, are displayed to the screen for the desired target of a specific track.

Figure 7.1. HRR Belief Filtering Interface.

7.1 The MSTAR Data Set

The HRR data set used in the analysis is derived from the publicly releasable Moving and Stationary Target Acquisition and Recognition (MSTAR) program database. The MSTAR data consists of synthetic aperture radar (SAR) data at X-band with 1 x 1 foot resolution. Images were recorded at 15° and 17° depression angles with full 360° aspect coverage, at approximately one-degree spacing in azimuth. The methodology used to convert the SAR imagery to HRR is discussed in [96]. While the MSTAR data consists of all-aspect data, the pose resolution is ±3° with 157 of 360 aspect profiles collected for each target. The truth targets are from the 17° depression data set. For each track pose estimate, the closest target aspects between -5° < φ < 5° are called from the 15° depression data set to represent object-ID measurements. HRR profiles consist of 101 range-bin magnitudes averaged over the center 8 profiles of the SAR image for each measured pose estimate. The 11 targets contained in this data set that were used in the investigation are shown in Figure 7.2.

[Figure: target images grouped by class — Armored Personnel Carriers: BTR70, BMP2, M2, M113; Main Battle Tanks: T72, M1; Military Transport Vehicles: HMMWV, M35, M548; Self-Propelled Guns: M109, M110]

Figure 7.2. Moving and Stationary Target Acquisition and Recognition (MSTAR) Data Set.

Six targets were trained on the feature locations of the HRR profiles and five were left unknown, as shown in Table 7.1. After the features were extracted and trained, an online scenario was used to assess the tracking and ID methods. The truth consisted of three target tracks of {1, 5, 8} and {3, 6, 9} for case 1 and case 2, respectively. The cluttered measurements for each track consisted of five positional points with an associated unknown target type. The cluttered object-ID measurements are from the unknown HRR profile targets.

Table 7.1. MSTAR Target Designations.

Known Targets Unknown Targets


bmp2_apc 1 2 btr70_transport
m110_gun 5 4 m109_gun
m2_apc 8 7 m1_tank
hmmwv_jeep 3 10 m548_tank
m113_transport 6 11 t72_tank
m35_truck 9

At the start of the algorithm, track measurements are computed for each time step and assigned a target type, as shown in the left column above. The track data had a truth variance of 50 m. In addition, up to 9 (average 5) cluttered measurements were added to the truth data for each track with a variance of 50 m. For tracking and ID, the algorithm receives measurements at each time step and determines the belief and track pose estimate for each object and track hypothesis. Each track pose is then used to extract an HRR profile for object classification, which is compared to the true, trained HRR target profile. A classification certainty is computed as to the belief in the object for each track, along with an uncertainty for each object hypothesis. From the classification and tracking algorithm, the set fuser assesses the accumulated beliefs and the associated uncertainty intervals to determine a plausible set of targets hypothesized for each track. By selecting the most plausible targets, the tracking algorithm is updated as to the hypothesized target number. If more tracks are needed, the algorithm starts new tracks in addition to the ones that were previously run. If fewer tracks are needed, the algorithm prunes tracks, based on the object type, to the ones that were most plausible. The resulting position measurements are used to update the tracker. At the next time step, another measurement is collected, until a finite number of tracks and targets results. One of the common ways to assess the performance of the system over time is an error analysis. We use an average position and velocity error metric for tracking performance; for target ID, we note the average target belief over all time steps and compare it to the truth target type, as sketched below.
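A minimal sketch of these error metrics follows; the estimate and truth arrays are hypothetical stand-ins for the Monte Carlo position histories.

    % Average position square error over Monte Carlo runs (sketch).
    x_tru   = repmat(2000:15:2150, 10, 1);        % 10 runs x 11 steps (truth)
    x_est   = x_tru + 7*randn(size(x_tru));       % estimates with ~7 m error
    pos_err = mean(mean((x_est - x_tru).^2, 2));  % average position SE
    % The velocity metric is formed the same way from the velocity histories.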

The classification feature measurements and the kinematic measurements for identification are shown in Figure 7.3. Since the aperture of the HRR radar is small, we assume that eight range-bin features approximate the radar aperture.

[Figure: identification signal space ϕ and feature space F (signal set al, feature set Γ) mapped, through pose φ from a fixed sensor, to the tracking measurement space Z (measurement set Σ)]

Figure 7.3. Extracted Feature Sets – Kinematic Σ, Feature Γ, and Signal ϕ.

Two dynamic tracking methods are compared, the joint belief-probability data association filter (JBPDA) and the set-based data association filter (SBDA), both of which use the measured HRR and kinematic position features. The JBPDA uses an OR function for data association, while the SBDA uses an AND function. The method for evaluating performance is a Monte Carlo simulation, and the performance metric is the normalized probability of state error. The Monte Carlo simulation consisted of 10 runs of each track and ID set. It is assumed that the features in question are the HRR profiles from the MSTAR-derived HRR profile data set. Finally, the object classification, tracking detection, and simultaneous tracking and ID routines are implemented in MATLAB®, and the HRR measurements were from the 15° MSTAR data set classified based on the 17° MSTAR data set.

7.2 JBPDA Results - OR

Three dynamic tracking and ID methods are compared for two cases: A) Case 1, non-maneuvering crossing, and B) Case 2, maneuvering. In the analysis, we choose the JBPDA without an ID pose update as a baseline because it represents a Bayesian-probabilistic tracking method. The second method is the belief-probabilistic filtering method, which uses the ID-belief pose updates. The third method is the JBPDA with ID, but without a pose update; this case highlights the ID effects on the tracker. Three tank HRR signatures were used in the simulation, and clutter consisted of five other objects for each track. For the purposes of the simulation, we set $U_{Bel_k}$ equal to $(1/10)w_k^t$, which reflects the resolutions of the sensors. The M matrix was determined statistically, and the constants used for HRR classification can be found in [104]. For implementation, we assume a belief in three kinematic objects and use belief-ID and belief-pose updates to confirm the belief in the correct number of objects.

7.2.1 Case 1: Non-maneuvers with Multiobject Crossing

The true trajectory consists of three objects that 1) start with positions X = {(2000, 11000), (2000, 10950), (2000, 10000)} and velocities of (+15 x, -5 y) m/s, (+15 x, -5 y) m/s, and (+15 x, +5 y) m/s, respectively; 2) cross each other at a distance of 100 meters; and 3) finish with velocities of (+15 x, -5 y) m/s, (+15 x, 0 y) m/s, and (+15 x, +5 y) m/s, respectively. Since the MTI system is a coarse tracker based on position measurements, we simulate the truth measurements as xmeas(k) = xtruth(k) ± rand(50m), where rand is a random number generator.
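A sketch of this measurement generation is shown below; reading rand(50m) as zero-mean noise with a 50 m standard deviation is an assumption, and the clutter count is drawn to average about five.

    % Truth-plus-clutter measurement generation (assumed form).
    sigma   = 50;
    x_truth = [2000; 11000];
    x_meas  = x_truth + sigma*randn(2,1);     % noisy truth measurement

    n_clut  = randi([1 9]);                   % up to 9 clutter hits, ~5 average
    clutter = repmat(x_truth, 1, n_clut) + sigma*randn(2, n_clut);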

We plot the positions of the three objects with α = 1.0 for the JBPDA without belief and belief-pose ID updates, as well as the case of the JBPDA with ID but no pose update, as shown in Figure 7.4. Figure 7.5 shows the JBPDA tracking and ID with α = 0.8 and α = 0.5. For α = 1.0, the HRR profile was extracted from the position measurement without a belief-ID pose update. An α = 0.8 represents the case where the weighted pose and state estimate is 0.8 from the tracker and 0.2 from the ID. For α = 0.8, the belief pose updated the track pose with a weighting of 0.25. An α = 0.5 represents an equal weighting of the tracker and the identifier for pose updates.

[Figure: two panels of JBPDA position tracks vs. truth for objects O1, O2, O3 (targets 1, 5, 8; clutter = 5; var = 50 m), without ID (left) and with ID but no pose update (right)]

Figure 7.4. JBPDA Positions for Case 1, α = 1.0 with and without ID.

[Figure: two panels of JBPDA position tracks vs. truth for objects O1, O2, O3 with ID and belief-pose updates at α = 0.8 (left) and α = 0.5 (right)]

Figure 7.5. JBPDA Positions for Case 1, with α = 0.8 and α = 0.5.

As detailed in Figure 7.4, the JBPDA without belief and belief-pose ID updates has trouble discerning targets in clutter. We attribute this to the fact that the JBPDA tracker picks the measurement closest to its predicted value based on the current track movement. Hence, it is not expected to capture the correct measurement in clutter, especially in the case of multiple targets, because it has no way to discern between validated measurements. For the case of ID with no pose update, we see a significant gain in track accuracy. For α = 0.8, the ID not only updates the track pose, but discerns the correct target in clutter. The α = 0.5 case is more varied since it has an equal pose update, where the belief-pose update comes from a coarser data set than the position measurements.

7.2.2 Case 2: Maneuvers

Case 2 highlights the JBPDA for maneuvering objects. The objects 1) start with positions X = {(2000, 10700), (2000, 9800), (2000, 9300)} and velocities of (+15 x, 0 y) m/s; 2) maneuver around each other at a distance of 100 meters; and 3) finish with velocities of (+10 x, +5 y) m/s, (+10 x, +0 y) m/s, and (+10 x, +5 y) m/s. As in case 1, we simulate the truth measurements as xmeas(k) = xtruth(k) ± rand(50m), where rand is a random number generator. For clarification, the 100th measurement is labeled for each truth track.

The three methods investigated are: 1) JBPDA without belief and belief-pose update, 2) JBPDA with ID and no belief-pose update, and 3) JBPDA with ID and belief-pose update. Figure 7.6 shows the case of α = 1.0 for the JBPDA with and without ID, and Figure 7.7 illustrates the results for α = 0.8 and α = 0.5. Similar to Case 1, an update of pose helps to keep the tracker updated with the correct target.

[Figure: two panels of JBPDA position tracks vs. truth for objects O1, O2, O3 (targets 3, 6, 9; clutter = 5; var = 50 m), α = 1.0 with ID (left) and without a pose update (right)]

Figure 7.6. JBPDA Position tracks for Case 2, α = 1.0 with and without ID.

As detailed in Figure 7.6, the JBPDA without belief and belief-pose ID updates has trouble discerning targets in clutter. We attribute this to the fact that the JBPDA tracker picks the measurement closest to its predicted value based on the current track movement. Hence, it is not expected to capture target maneuvers, especially in the case of multiple targets, because it has no way to discern between validated measurements. In Figure 7.6(b), we see that, with ID, the JBPDA captures target maneuvers. For α = 0.8, the ID not only updates the track pose, but discerns the correct target in clutter. The use of pose information helps to catch target maneuvers more quickly than tracking and ID without a pose update.

[Figure: two panels of JBPDA position tracks vs. truth for objects O1, O2, O3 with ID and belief-pose updates at α = 0.8 (left) and α = 0.5 (right)]

Figure 7.7. JBPDA Position tracks for Case 2, α = 0.8 and α = 0.5.

The tabulated results are presented in Section 7.4. Plotted in Figure 7.8 is an example of the changes in state of a few of the object beliefs and plausibilities using the JBPDA system.

[Figure: Object 1 beliefs (left) and plausibilities (right) plotted against track number, 0-200]

Figure 7.8. JBPDA Beliefs and Plausibilities.

7.2.3 No Maneuvers – Bad starting position - Belief-Probabilistic Tracker

We explored how well the algorithm would do if the initial position was incorrect. As we see from Figure 7.9, the pose update from the belief system overcorrected when the initial pose estimate was incorrect. Figure 7.9(a) shows only object 2 with an incorrect starting position, while Figure 7.9(b) shows objects 2 and 3 with incorrect initial assignments. Comparing object 2 in both figures shows that the JBPDA is robust to incorrect track initiation.

[Figure: two panels of JBPDA position tracks vs. truth with incorrect track initiation, α = 0.8 with object 2 misinitialized (left) and α = 0.5 with objects 2 and 3 misinitialized (right); clutter = 10, var = 50 m]

Figure 7.9. JBPDA Position tracks for Incorrect Track Initiation with α = 0.8 and α = 0.5.

7.2.4 Tracking Unknown Number of Targets

The belief filter track and ID method is evaluated with a Monte Carlo simulation, and the performance metric is the final accumulated target ID belief. As detailed by the true trajectory in Figure 7.10(a), the targets 1) start with a position and velocity, 2) pass by each other at a close distance, and 3) finish with a specified pose. Figures 7.10(b) and 7.10(c) show that assuming an incorrect number of targets leads to an incorrect target tracking result. However, if the algorithm determines that the number of targets is not correct, then the filter has the option to alter the number of targets.

[Figure: four panels of position tracks vs. truth (α = 0.8, clutter = 5, var = 50 m): (a) truth target tracks, (b) tracking assuming three targets, (c) tracking assuming two targets with a missed track split, (d) tracking two then three targets, catching the track split]

Figure 7.10. Assessment of an Unknown Number of Tracks with One Target per Track.

In Figure 7.10(d), one of many situations run, the algorithm correctly identifies the target track. In this case, the algorithm employed parsimony⁹. In Figure 7.10(d), there is also a case where two targets split; the algorithm is able to catch the split. The reversal, however, is an interesting case, and further coordination of the tracking and classification resolved the problem.

The plots for Figure 7.10 are: (a) truth target tracks, (b) 3 targets, (c) 2 targets, and (d) a start with two targets and then the addition of a third target. At the end of the run, we can see from Figure 7.10(c) that the correct targets are identified. However, at the beginning of the simulation, the tracker and identifier had no information from which to identify the target, so all targets remained plausible. Below is the normalized confusion matrix for the end of the run:

⁹ From Occam's razor – a rule that entities should not be multiplied unnecessarily, which is interpreted as requiring that the simplest of competing theories be preferred to the more complex, or that explanations of unknown phenomena be sought first in terms of known quantities.

Table 7.2. Confusion Matrix for Three Targets.

Target Pose Confusion Matrix
No.  True   Pose (Actual)  Pose (Belief)  BMP    BTR    HMV    M109   M110   M113   M1     M2     M35    M548   T72
1    BMP    24             18             0.965  0.415  0.625  0.268  0.681  0.748  0.562  0.727  0.746  0.629  0.448
2    BTR    337            341            0.544  0.992  0.682  0.563  0.581  0.612  0.369  0.741  0.371  0.760  0.115
3    M109   328            320            0.662  0.410  0.758  0.784  0.617  0.757  0.510  0.550  0.481  0.294  0.581

7.3 SBDA Results - AND

The SBDA is similar to the JBPDA except that 1) an AND function is used in data association, 2) an evidential belief is used for target ID, and 3) mutual information is used to assess multiple pose-aspect comparisons of the trained data set with the measurement, assessing the target identity simultaneously with the object type. Basically, using the evidential target ID, the mutual information is assessed to get the best pose estimate for the target ID for use in the AND function.

7.3.1 Results of Mutual Information Set Association

The dynamic pose detection and target-belief classification methods were compared for the mutual-information content of relative entropy. Performance is evaluated at each time step with a Monte Carlo simulation using relative entropy, DE(o), as the metric. Better-performing methods will exhibit lower DE(o) values for equal numbers of measurements, i.e., O(a) = Y(A(a)), since the relative entropy metric is determined after calculating the absolute value, and the value closest to zero is the minimum.

Relative entropy DE(o) is minimum at time k whenever the probability for an object-articulation in the HRR signature, O(A(a)), is the same as the probability in the observed or predicted HRR object set, Y(a). Defining the signature with the correct target articulation as signature I(k_obj), the symbolism that prescribes the probability DE(o) of experiencing a designation error is:

$DE = D\big(\arg\min_{k} \pi_k(O) = k_{obj}\big)$   (7.50)

DE(o) is a global error metric that looks at the entire set of detection-observation pairs to produce a single declaration decision of object classification of target articulation and type.
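A sketch of the per-profile relative-entropy score is given below; the profiles are random stand-ins for a measured HRR profile and a trained signature, normalized to probability vectors.

    % Relative entropy D(p||q) between a measured HRR profile p and a trained
    % signature q at the estimated pose (eps guards against log of zero).
    p = abs(randn(101,1)); p = p / sum(p);    % measured 101-bin profile (example)
    q = abs(randn(101,1)); q = q / sum(q);    % trained signature (example)

    D_pq = sum(p .* log2((p + eps) ./ (q + eps)));
    % The declared (target, pose) pair is the one minimizing |D_pq|.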

The standard test problem is to find a set of targets with a given articulation based on the pose estimate from the tracker. Assuming that 1) a single sensor is available to detect the target, 2) articulation ranges are available from the predicted values of the tracker (e.g., ±5°), and initially 3) each signature is equally likely to contain the correct articulation, a Monte Carlo run is conducted for o = 11 test profiles with 101 range-bin measurements, k, and a trained set of 11 targets by 360 signatures. During a run, the simultaneous tracker and ID algorithm chooses the object where measurements occur, guided by the detection policy. Each study consists of a sufficient number of runs for the information metrics, shown in Table 7.3, and allows the DE(O) values to stabilize.

Table 7.3. Orientation information for (Target Az = 115°).

H_x    5.1824  – Horizontal entropy
H_y    4.5670  – Vertical entropy
H_xy   9.5341  – Joint entropy
H_x|y  4.9671  – Conditional entropy
H_y|x  4.3518  – Conditional entropy
I_xy   0.2153  – Mutual information
D_p|q  0.0271  – Relative entropy
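The Table 7.3 metrics follow from a joint distribution of the measured and trained data; a sketch under an assumed 8 x 8 joint histogram is given below.

    % Information metrics of Table 7.3 from a joint histogram P(x,y) (example).
    P   = rand(8); P = P / sum(P(:));        % example joint distribution
    Px  = sum(P,2); Py = sum(P,1);           % marginals

    Hx  = -sum(Px .* log2(Px + eps));        % horizontal entropy  H_x
    Hy  = -sum(Py .* log2(Py + eps));        % vertical entropy    H_y
    Hxy = -sum(P(:) .* log2(P(:) + eps));    % joint entropy       H_xy
    Ixy = Hx + Hy - Hxy;                     % mutual information  I_xy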

Below is a sample of information-theoretic features from a test target that can be used to

determine a target’s orientation.

[Figure: three-dimensional surface of mutual information over pose (110°-120°) and target number (0-15), titled "Average Entropy Analysis for Pose and Target"]

Figure 7.11. Information Metric used to determine the Articulation and Target Type: (a) the algorithm presentation and (b) a smoothed result of the information.
Figure 7.11 presents the three-dimensional relative entropy surface for the case of a single observation target over a set of azimuth angles for different targets. In Figure 7.11, DE(o) is the dependent variable, with o the independent variable of the object number and φ the value of the pose azimuth angle. Note that the angles range ±5° around the truth target. The results show that relative entropy information can be used for referencing and classifying the target.

To better illustrate the result, Figure 7.12(a) shows the relative entropy averaged over all angles for the range of the target. Figure 7.12(b) presents a two-dimensional relative entropy curve for the case of a single observation target over a set of azimuth angles for different targets. Also shown are the error bounds of testing and a vertical bar showing the truth target, target 6, tested at a value of φ = 115°.

[Figure: (a) average target relative entropy vs. target type (1-11) and (b) relative entropy vs. pose (111°-120°) with error bounds and a vertical bar at the truth target]

Figure 7.12. Average Information Metric used to determine the Articulation and Target Type: (a) target types and (b) articulation values.

Figure 7.13 presents the filtered relative entropy curves for all targets, o = 1 to 11. In Figure 7.13, DE(o) is the dependent variable and o = f(pose) the independent variable of time, over which the articulation coverage is processed. Figure 7.13(b) is plotted at a higher resolution to show the values at o = 11. Note that DE(o) decreases for all targets.

[Figure: accumulated entropy value vs. time f(pose), 1-11, for all targets, shown at two vertical scales]

Figure 7.13. (a) Relative Entropy Beliefs plotted over time and (b) an enhanced view to show o = 11.

Figure 7.14 presents the average for all HRR signature targets over time. Note that, by looking at the target type, clustering occurs among the classes (tank, truck, and transport). The results can be used for target-type removal, set refinement, and object selection (for use in the AND function). Thus, the entropy values can be used to eliminate non-plausible targets from the target set. Furthermore, the target belief drops for those targets with high DE(t) values. If targets were removed from the plausible set, a multiple-look HRR scenario could increase the target-type belief.
look HRR scenario could increase the target type belief.

-3
x 10 Ave Accumulated Entropy (Target Type) over time f(pose) for targets
5

t72-tank
4.5
bmp2-tank

4 m1-tank

hmv-jeep
3.5
m2 - tank
Entropy Value 3 m-35 - tank

2.5
btr70-transport
m109-gun
2
m110-gun

1.5 m548-transport

1 m113-transport

0.5

0
1 2 3 4 5 6 7 8 9 10 11
Time f(pose)

Figure 7.14. Classification of target types using Relative Entropy.

Note from the runs that there is clutter from two sources: similar IDs in classification and the incorrect measurement. Table 7.4 lists the target clutter that would confuse the classification routine.

Table 7.4. Clutter Information for Case 1 and Case 2

Known Targets Case 1 Clutter ID


bmp2_apc 1 t72_tank (11) m1_tank (7)
m110_gun 5 m109_gun (4) btr70_transport (2) m35_truck (9)
m2_apc 8 btr70_transport (2) m35_truck (9)

Known Targets Case 2 Clutter ID


hmmwv_jeep 3 ----
m113_transport 6 m548_tank (10)
m35_truck 9 m109_gun (4) btr70_transport (2) m110_gun (5)

7.3.2 Evidential Accumulation

When the FUSE algorithm is used to propagate the mutual-information classification-belief over time, we can assess the contribution of the SBDA to target identification.

[Figure: Object 2 belief Bel(Ti) vs. measurement number (0-180) with a target declaration for target 6, and Object 2 plausibility Pl(Ti) vs. measurement number (0-20) showing clutter rejection of targets 1, 3, 4, 8, and 11 (α = 0.5, clutter = 5, var = 50 m)]

Figure 7.15. Discrimination of Objects – Reducing the Plausibility of Objects.

All targets are initially plausible, but once a target's HRR ID falls below the threshold, it is eliminated, and evidence accrues for the remaining targets. Additionally, the belief in the object increases with repeated correct IDs.
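A sketch of the elimination step follows; the plausibility values, belief masses, and threshold are illustrative, not the dissertation's operating values.

    % Plausibility-based target elimination during evidential accrual (sketch).
    Pl        = [0.95 0.40 0.88 0.05 0.72];   % current plausibilities, 5 targets
    threshold = 0.10;
    active    = Pl >= threshold;              % drop targets below the threshold

    % Renormalize remaining belief so evidence accrues on the survivors.
    Bel          = [0.60 0.10 0.15 0.01 0.14];
    Bel(~active) = 0;
    Bel          = Bel / sum(Bel);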

Example results were presented for the mutual information (similarity) and the evidence accumulation (temporal) used to determine the object type for the AND function. Now we can present the tracking results to highlight the spatial robustness of pose comparisons for the set-based approach, which enhances ID in the track and ID algorithm.

7.3.3 Case 1: Non-maneuvers with Multiobject Crossing

The true trajectory consists of three objects that 1) start with positions X = {(2000, 11000), (2000, 10950), (2000, 10000)} and velocities of (+15 x, -5 y) m/s, (+15 x, -5 y) m/s, and (+15 x, +5 y) m/s, respectively; 2) cross each other at a distance of 100 meters; and 3) finish with velocities of (+15 x, -5 y) m/s, (+15 x, 0 y) m/s, and (+15 x, +5 y) m/s, respectively. The truth measurements are xmeas(k) = xtruth(k) ± rand(50m), where rand is a random number generator.

We plot the SBDA object positions for 1) α = 1.0, SBDA without a belief-pose ID update, as shown in Figure 7.16, and 2) α = 0.8 and 3) α = 0.5, as shown in Figure 7.17. For α = 1.0, the HRR profile was extracted from the position measurement with ID but without a belief-ID pose update; this case demonstrates how mutual information can be used to select the correct position measurement from the cluttered measurements using the target ID. For α = 0.5, the belief pose updated the track pose with equal weighting.

[Figure: SBDA position tracks vs. truth for objects O1, O2, O3 (targets 1, 5, 8; clutter = 5; var = 50 m), α = 1.0 with ID]

Figure 7.16. SBDA Positions for Case 1, α = 1.0 and ID.

[Figure: two panels of SBDA position tracks vs. truth for objects O1, O2, O3 with belief-pose updates at α = 0.8 (left) and α = 0.5 (right)]

Figure 7.17. SBDA Positions for Case 1, α = 0.8 and α = 0.5.

From Figure 7.16 and Figure 7.17, we see that little additional gain is achieved by the pose update, as opposed to the inclusion of ID, in the tracking scenario.

7.3.4 Case 2: Maneuvers

The objects 1) start with positions X = {(2000, 10700), (2000, 9800), (2000, 9300)} and velocities of (+15 x, 0 y) m/s; 2) maneuver around each other at a distance of 100 meters; and 3) finish with velocities of (+10 x, +5 y) m/s, (+10 x, +0 y) m/s, and (+10 x, +5 y) m/s.

Similar to Case 1, three methods were used: 1) α = 1.0, SBDA without a belief-pose ID update; 2) α = 0.8 with a belief-pose ID update; and 3) α = 0.5, where the pose is equally determined from the tracker and the ID information. Figure 7.18 and Figure 7.19 present the results. Note that the pose update has only a minimal effect since the mutual-information, set-based approach obtains the best position measurement from the tracker and classification systems, which results in similar pose updates. Finally, we see that the SBDA detects object maneuvers.

[Figure: SBDA position tracks vs. truth for objects O1, O2, O3 (targets 3, 6, 9; clutter = 5; var = 50 m), α = 1.0]

Figure 7.18. SBDA Position tracks for Case 2, α = 1.0.

[Figure: two panels of SBDA position tracks vs. truth for objects O1, O2, O3 at α = 0.8 (left) and α = 0.5 (right)]

Figure 7.19. SBDA Position tracks for Case 2, α = 0.8 and α = 0.5.

Now that we have presented the performance of both the tracking and ID algorithms, we will present error metrics for both algorithms in Section 7.4. First, we present a ROC analysis of the SBDA.

7.3.5 Receiver Operator Characteristic (ROC) Analysis

For the ID system, a receiver operating characteristic (ROC) curve demonstrates the algorithm performance. Since the belief filter is an evidence-accumulation algorithm, as the number of HRR looks, coupled by pose, increases, the belief in the object increases. The measure by which the algorithm was assessed for object ID is presented as an entropy metric. The benefit of using the entropy metric in the analysis of the object articulation and the target identity is that more training profiles can be used to ID the object. Presented is an example of one of the many runs used to demonstrate the robust behavior of the SBDA tracking and ID algorithm. Figure 7.20 shows the probability of correct classification given that a target has been detected or declared, Pcc|d, and the fitted curves. Three curves are plotted for the number of looks, or tracked observations, of the HRR target signature. The HRR signature was gathered over a pose range and the mutual information determined for the target after 1, 5, and 10 articulation profiles.

Figure 7.20. Belief Assessment of Pcc|d for Target Robustness.

From these results, we see that as the number of looks increases, the belief in the correct classification increases. Since we are interested in the robustness of the algorithm, we also plot the performance for a misclassification, Pmis-unknown|d.

Figure 7.21. Belief Assessment of Pmis-unknown|d for Target Robustness.

From Figure 7.21(a), we see that the misclassification rates appear about equal. However, after plotting the performance curves for the system in Figure 7.21(b), we see that there is an asymptotic relationship such that the misclassification decreases as the probability of declaration increases. Such a case occurs when the tracker continually follows the object and is able to "lock" onto the target, giving repeated looks for tracking and identification. Thus, there is an advantage to simultaneously tracking and identifying an object, since mutual information is assessed over the track and ID pose estimates. Finally, we can show the general belief relations for both Pcc|d and Pmis-unknown|d.

Figure 7.22. Belief Assessment of Pmis-unknown|d and Pcc|d for Target Robustness.

From the analysis presented in this section, we see that by coupling the tracking and ID information, we increase the classification of the HRR signature for an object, that is, an object assigned to a specific class. To further demonstrate the advantage of the SBDA, we showed the case where three targets cross so closely that it is difficult for the tracker to pull the object out of the clutter (Figure 7.16 versus Figure 7.4). However, the ID updates, and not the pose update, are the main advantage for data association.

7.4 Tabulation of Object IDs for JBPDA and SBDA

In the tables below, we summarize the error statistics and the object identities averaged over the 10 Monte Carlo runs. In Tables 7.5 and 7.6, the bold numbers indicate the least error.

Table 7.5. Average Normalized Position Square Errors.

SQ Pos. Error
Case – JBPDA Track 1 Track 2 Track 3
Case 1: α = 1.0 – No ID / No Pose Update 160.814 155.8964 42.9449
Case 1: α = 1.0 – ID / No Pose Update 45.1725 45.2778 42.5067
Case 1: α = 0.8 – ID / Pose Update 52.7661 53.7672 40.0641
Case 1: α = 0.5 – ID / Pose Update 67.7930 64.4626 46.3878
Case 2: α = 1.0 – No ID / No Pose Update 142.4803 44.0260 127.9010
Case 2: α = 1.0 – ID / No Pose Update 33.0237 41.7192 32.0785
Case 2: α = 0.8 - ID / Pose Update 33.1504 41.7576 31.6651
Case 2: α = 0.5 – ID / Pose Update 53.6054 60.3795 43.5456

SQ Pos. Error
Case – SBDA Track 1 Track 2 Track 3
Case 1: α = 1.0 – ID / No Pose Update 32.1802 31.8726 34.9025
Case 1: α = 0.8 – ID / Pose Update 32.1122 32.0152 34.8515
Case 1: α = 0.5 – ID / Pose Update 33.6571 32.4696 35.6313
Case 2: α = 1.0 – ID / No Pose Update 32.8116 39.5319 29.9639
Case 2: α = 0.8 - ID / Pose Update 37.3472 41.5757 31.1185
Case 2: α = 0.5 – ID / Pose Update 34.8999 39.6870 29.4086

Table 7.6. Average Normalized Velocity Square Errors.

SQ Vel. Error
Case – JBPDA Track 1 Track 2 Track 3
Case 1: α = 1.0 – No ID / No Pose Update 6.7286 7.2709 6.6747
Case 1: α = 1.0 – ID / No Pose Update 2.0479 1.8477 2.7139
Case 1: α = 0.8 - ID / Pose Update 3.2708 2.4414 2.7217
Case 1: α = 0.5 – ID / Pose Update 4.5669 3.7064 3.9726
Case 2: α = 1.0 – No ID / No Pose Update 9.6358 3.7190 9.6512
Case 2: α = 1.0 – ID / No Pose Update 2.7829 3.0748 2.3868
Case 2: α = 0.8 - ID / Pose Update 2.7614 3.1925 2.4817
Case 2: α = 0.5 – ID / Pose Update 5.1233 4.7438 3.3169

SQ Vel. Error
Case – SBDA Track 1 Track 2 Track 3
Case 1: α = 1.0 – ID / No Pose Update 0.8256 0.8950 1.2062
Case 1: α = 0.8 - ID / Pose Update 0.8305 0.8978 1.2022
Case 1: α = 0.5 – ID / Pose Update 1.2078 1.2275 1.5070
Case 2: α = 1.0 – ID / No Pose Update 1.5695 1.9936 1.2251
Case 2: α = 0.8 - ID / Pose Update 1.6498 2.1503 1.2994
Case 2: α = 0.5 – ID / Pose Update 1.5749 2.0430 1.2380

Table 7.7 through Table 7.9 present the target beliefs for the suspected objects for the two cases run with a varying pose classification contribution. For the object IDs, the bold numbers represent the target identification belief, and the results can be used to discern the target type for each object. The object belief can be used to ID the object for a specific track, and the certainty represents how confident we are in the measurement. Typically, an object designation as a specific target was not declared until the confidence achieved a certain threshold. The time at which the decision could be rendered was near 20 time steps. To show how belief filtering can be used to ID an object, the average belief in an object is presented for the entire run of 200 time steps.

Note, for the no-pose-update case, the incorrect target ID was rendered, since the track pose cues the ID algorithm, and those results are omitted. In the case that the algorithm gets confused by incorrectly associating a position measurement with a track, the algorithm would match the same ID to the track since the pose updates are the same. In each of the two cases for the JBPDA, an incorrect object ID occurred, which is a result of sequentially processing the ID after the tracking algorithm has been updated.

The SBDA case is an evidential approach, where the object ID is accrued over measurements. Thus, it is expected to do better than the JBPDA case, in which object-ID beliefs are propagated over time with measurement updates at each time step. The evidential accrual case increases the object-ID belief with each repeated measurement for the object and is displayed with values that are twice those of the JBPDA case.

Table 7.7. Belief, Plausibility, and Certainty for Track 1.
Track 1 Target Belief O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11
JBPDA BMP BTR HMV M109 M110 M113 M1 M2 M35 M548 T72
Case 1: α = 1.0 – No Pose Update 0.492 0.000 0.000 0.000 0.000 0.000 0.000 0.003 0.018 0.000 0.000
Case 1: α = 0.8 – Pose Update 0.401 0.000 0.000 0.000 0.000 0.000 0.000 0.003 0.341 0.000 0.000
Case 1: α = 0.5 – Pose Update 0.321 0.000 0.000 0.000 0.000 0.000 0.000 0.003 0.186 0.000 0.000
SBDA
Case 1: α = 1.0 – No Pose Update 0.980 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Case 1: α = 0.8 – Pose Update 0.979 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Case 1: α = 0.5 – Pose Update 0.981 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
JBPDA
Case 2: α = 1.0 – No Pose Update 0.017 0.000 0.393 0.000 0.000 0.000 0.000 0.003 0.000 0.000 0.000
Case 2: α = 0.8 - Pose Update 0.002 0.000 0.341 0.000 0.000 0.000 0.000 0.004 0.010 0.000 0.000
Case 2: α = 0.5 – Pose Update 0.002 0.000 0.401 0.000 0.000 0.000 0.000 0.004 0.007 0.000 0.000
SBDA
Case 2: α = 1.0 – No Pose Update 0.000 0.000 0.984 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Case 2: α = 0.8 - Pose Update 0.000 0.000 0.980 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Case 2: α = 0.5 – Pose Update 0.000 0.000 0.980 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000

Track 1 Target Plausibility O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11


JBPDA BMP BTR HMV M109 M110 M113 M1 M2 M35 M548 T72
Case 1: α = 1.0 – No Pose Update 0.968 0.916 0.924 0.917 0.929 0.928 0.915 0.925 0.925 0.925 0.917
Case 1: α = 0.8 – Pose Update 0.980 0.916 0.924 0.917 0.900 0.912 0.907 0.920 0.940 0.922 0.907
Case 1: α = 0.5 – Pose Update 0.971 0.911 0.928 0.911 0.912 0.923 0.911 0.920 0.921 0.923 0.910
SBDA
Case 1: α = 1.0 – No Pose Update 0.995 0.006 0.006 0.006 0.015 0.006 0.015 0.006 0.006 0.006 0.006
Case 1: α = 0.8 – Pose Update 0.995 0.006 0.006 0.006 0.015 0.006 0.015 0.006 0.006 0.006 0.006
Case 1: α = 0.5 – Pose Update 0.995 0.007 0.007 0.007 0.013 0.008 0.012 0.006 0.007 0.006 0.006
JBPDA
Case 2: α = 1.0 – No Pose Update 0.918 0.911 0.959 0.911 0.912 0.934 0.911 0.920 0.933 0.920 0.912
Case 2: α = 0.8 - Pose Update 0.925 0.907 0.970 0.905 0.916 0.937 0.905 0.925 0.935 0.918 0.907
Case 2: α = 0.5 – Pose Update 0.925 0.907 0.974 0.905 0.916 0.937 0.905 0.925 0.935 0.918 0.907
SBDA
Case 2: α = 1.0 – No Pose Update 0.006 0.006 0.995 0.006 0.011 0.006 0.011 0.006 0.006 0.006 0.006
Case 2: α = 0.8 - Pose Update 0.006 0.006 0.997 0.006 0.014 0.006 0.014 0.006 0.006 0.006 0.006
Case 2: α = 0.5 – Pose Update 0.006 0.006 0.997 0.006 0.014 0.006 0.014 0.006 0.006 0.006 0.006

Track 1 Target Certainty O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11


JBPDA BMP BTR HMV M109 M110 M113 M1 M2 M35 M548 T72
Case 1: α = 1.0 – No Pose Update 0.519 0.088 0.083 0.079 0.088 0.075 0.081 0.088 0.086 0.088 0.079
Case 1: α = 0.8 – Pose Update 0.421 0.088 0.083 0.090 0.088 0.075 0.081 0.088 0.191 0.081 0.079
Case 1: α = 0.5 – Pose Update 0.570 0.086 0.102 0.087 0.072 0.081 0.088 0.079 0.120 0.079 0.083
SBDA
Case 1: α = 1.0 – No Pose Update 0.980 0.989 0.989 0.989 0.980 0.989 0.980 0.989 0.989 0.989 0.989
Case 1: α = 0.8 – Pose Update 0.980 0.989 0.989 0.989 0.980 0.989 0.980 0.989 0.989 0.989 0.989
Case 1: α = 0.5 – Pose Update 0.981 0.988 0.988 0.988 0.983 0.988 0.983 0.989 0.988 0.989 0.989
JBPDA
Case 2: α = 1.0 – No Pose Update 0.094 0.087 0.430 0.086 0.084 0.162 0.085 0.081 0.148 0.087 0.084
Case 2: α = 0.8 - Pose Update 0.072 0.088 0.382 0.090 0.084 0.078 0.090 0.077 0.072 0.085 0.088
Case 2: α = 0.5 – Pose Update 0.072 0.088 0.431 0.090 0.110 0.078 0.090 0.077 0.072 0.085 0.088
SBDA
Case 2: α = 1.0 – No Pose Update 0.989 0.989 0.984 0.989 0.984 0.989 0.984 0.989 0.989 0.989 0.989
Case 2: α = 0.8 - Pose Update 0.989 0.989 0.980 0.988 0.981 0.989 0.980 0.989 0.989 0.989 0.989
Case 2: α = 0.5 – Pose Update 0.989 0.989 0.980 0.988 0.981 0.989 0.980 0.989 0.989 0.989 0.989
Table 7.8. Belief, Plausibility, and Certainty for Track 2.
Track 2 Target Belief O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11
JBPDA BMP BTR HMV M109 M110 M113 M1 M2 M35 M548 T72
Case 1: α = 1.0 – No Pose Update 0.043 0.000 0.003 0.000 0.275 0.002 0.000 0.000 0.003 0.001 0.000

Case 1: α = 0.8 – Pose Update 0.030 0.000 0.003 0.000 0.412 0.002 0.000 0.002 0.003 0.001 0.000
Case 1: α = 0.5 – Pose Update 0.036 0.000 0.008 0.000 0.372 0.001 0.000 0.001 0.002 0.002 0.000
SBDA
Case 1: α = 1.0 – No Pose Update 0.000 0.000 0.000 0.000 0.978 0.000 0.000 0.000 0.000 0.000 0.000
Case 1: α = 0.8 – Pose Update 0.000 0.000 0.000 0.000 0.978 0.000 0.000 0.000 0.000 0.000 0.000
Case 1: α = 0.5 – Pose Update 0.000 0.000 0.000 0.000 0.979 0.000 0.000 0.000 0.000 0.000 0.000
JBPDA
Case 2: α = 1.0 – No Pose Update 0.005 0.000 0.000 0.000 0.000 0.630 0.000 0.003 0.000 0.000 0.000
Case 2: α = 0.8 - Pose Update 0.007 0.000 0.000 0.000 0.000 0.320 0.000 0.003 0.310 0.002 0.000
Case 2: α = 0.5 – Pose Update 0.007 0.000 0.000 0.000 0.000 0.519 0.000 0.003 0.000 0.002 0.000
SBDA
Case 2: α = 1.0 – No Pose Update 0.000 0.000 0.000 0.000 0.000 0.984 0.000 0.000 0.000 0.000 0.000
Case 2: α = 0.8 - Pose Update 0.000 0.000 0.000 0.000 0.000 0.977 0.000 0.000 0.000 0.000 0.000
Case 2: α = 0.5 – Pose Update 0.000 0.000 0.000 0.000 0.000 0.977 0.000 0.000 0.000 0.000 0.000

Track 2 Target Plausibility O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11


JBPDA BMP BTR HMV M109 M110 M113 M1 M2 M35 M548 T72
Case 1: α = 1.0 – No Pose Update 0.933 0.913 0.925 0.910 0.945 0.920 0.910 0.919 0.943 0.927 0.913
Case 1: α = 0.8 – Pose Update 0.933 0.913 0.950 0.920 0.970 0.950 0.910 0.910 0.943 0.927 0.903
Case 1: α = 0.5 – Pose Update 0.937 0.915 0.941 0.906 0.942 0.921 0.908 0.920 0.960 0.936 0.913
SBDA
Case 1: α = 1.0 – No Pose Update 0.007 0.007 0.007 0.007 0.995 0.007 0.016 0.007 0.006 0.006 0.006
Case 1: α = 0.8 – Pose Update 0.007 0.007 0.007 0.007 0.995 0.007 0.016 0.007 0.006 0.006 0.006
Case 1: α = 0.5 – Pose Update 0.007 0.007 0.007 0.007 0.995 0.008 0.015 0.006 0.006 0.006 0.006
JBPDA
Case 2: α = 1.0 – No Pose Update 0.916 0.914 0.919 0.913 0.914 0.985 0.913 0.916 0.922 0.919 0.914
Case 2: α = 0.8 - Pose Update 0.911 0.913 0.925 0.907 0.910 0.923 0.907 0.917 0.924 0.920 0.908
Case 2: α = 0.5 – Pose Update 0.918 0.913 0.925 0.907 0.910 0.933 0.907 0.913 0.974 0.920 0.908
SBDA
Case 2: α = 1.0 – No Pose Update 0.007 0.007 0.007 0.007 0.010 0.995 0.010 0.006 0.006 0.006 0.006
Case 2: α = 0.8 - Pose Update 0.008 0.008 0.008 0.008 0.016 0.996 0.014 0.006 0.006 0.006 0.006
Case 2: α = 0.5 – Pose Update 0.008 0.008 0.008 0.008 0.016 0.996 0.014 0.006 0.006 0.006 0.006

Track 2 Target Certainty O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11


JBPDA BMP BTR HMV M109 M110 M113 M1 M2 M35 M548 T72
Case 1: α = 1.0 – No Pose Update 0.105 0.083 0.097 0.087 0.325 0.105 0.085 0.081 0.138 0.096 0.082
Case 1: α = 0.8 – Pose Update 0.080 0.083 0.051 0.087 0.431 0.080 0.085 0.088 0.080 0.096 0.082
Case 1: α = 0.5 – Pose Update 0.061 0.087 0.074 0.089 0.281 0.062 0.082 0.092 0.059 0.064 0.081
SBDA
Case 1: α = 1.0 – No Pose Update 0.988 0.988 0.988 0.988 0.978 0.988 0.979 0.988 0.989 0.989 0.989
Case 1: α = 0.8 – Pose Update 0.988 0.988 0.988 0.988 0.978 0.988 0.979 0.988 0.989 0.989 0.989
Case 1: α = 0.5 – Pose Update 0.988 0.988 0.988 0.988 0.980 0.988 0.980 0.988 0.989 0.989 0.989
JBPDA
Case 2: α = 1.0 – No Pose Update 0.084 0.085 0.088 0.083 0.084 0.640 0.082 0.090 0.107 0.087 0.081
Case 2: α = 0.8 - Pose Update 0.084 0.086 0.083 0.088 0.088 0.353 0.090 0.093 0.098 0.080 0.089
Case 2: α = 0.5 – Pose Update 0.084 0.086 0.083 0.088 0.088 0.531 0.090 0.093 0.030 0.080 0.089
SBDA
Case 2: α = 1.0 – No Pose Update 0.988 0.988 0.988 0.988 0.985 0.985 0.985 0.989 0.989 0.989 0.989
Case 2: α = 0.8 - Pose Update 0.987 0.987 0.987 0.987 0.987 0.977 0.981 0.989 0.989 0.989 0.989
Case 2: α = 0.5 – Pose Update 0.987 0.987 0.987 0.987 0.987 0.977 0.981 0.989 0.989 0.989 0.989
Table 7.9. Belief, Plausibility, and Certainty for Track 3.
Track 3 Target Belief O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11
JBPDA BMP BTR HMV M109 M110 M113 M1 M2 M35 M548 T72
Case 1: α = 1.0 – No Pose Update 0.029 0.001 0.034 0.000 0.003 0.005 0.000 0.472 0.044 0.008 0.000
Case 1: α = 0.8 – Pose Update 0.029 0.001 0.034 0.001 0.003 0.005 0.003 0.471 0.091 0.008 0.000
Case 1: α = 0.5 – Pose Update 0.010 0.002 0.022 0.002 0.002 0.277 0.000 0.174 0.014 0.002 0.000
SBDA
Case 1: α = 1.0 – No Pose Update 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.985 0.000 0.000 0.000

Case 1: α = 0.8 – Pose Update 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.980 0.000 0.000 0.000
Case 1: α = 0.5 – Pose Update 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.981 0.000 0.000 0.000
JBPDA
Case 2: α = 1.0 – No Pose Update 0.043 0.002 0.009 0.000 0.000 0.002 0.000 0.030 0.427 0.031 0.001
Case 2: α = 0.8 - Pose Update 0.009 0.000 0.003 0.000 0.000 0.001 0.001 0.019 0.520 0.002 0.002
Case 2: α = 0.5 – Pose Update 0.009 0.000 0.003 0.000 0.000 0.001 0.001 0.019 0.386 0.002 0.002
SBDA
Case 2: α = 1.0 – No Pose Update 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.969 0.000 0.000
Case 2: α = 0.8 - Pose Update 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.983 0.000 0.000
Case 2: α = 0.5 – Pose Update 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.983 0.000 0.000

Track 3 Target Plausibility O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11


JBPDA BMP BTR HMV M109 M110 M113 M1 M2 M35 M548 T72
Case 1: α = 1.0 – No Pose Update 0.929 0.916 0.927 0.914 0.915 0.915 0.913 0.967 0.931 0.926 0.915
Case 1: α = 0.8 – Pose Update 0.929 0.916 0.927 0.914 0.915 0.915 0.913 0.968 0.931 0.927 0.915
Case 1: α = 0.5 – Pose Update 0.926 0.918 0.934 0.910 0.914 0.948 0.908 0.951 0.937 0.926 0.914
SBDA
Case 1: α = 1.0 – No Pose Update 0.006 0.006 0.006 0.008 0.006 0.008 0.015 0.991 0.006 0.006 0.006
Case 1: α = 0.8 – Pose Update 0.006 0.006 0.006 0.008 0.006 0.008 0.015 0.995 0.006 0.006 0.006
Case 1: α = 0.5 – Pose Update 0.007 0.007 0.007 0.007 0.013 0.008 0.013 0.995 0.006 0.006 0.006
JBPDA
Case 2: α = 1.0 – No Pose Update 0.940 0.909 0.924 0.906 0.906 0.919 0.914 0.917 0.967 0.934 0.906
Case 2: α = 0.8 - Pose Update 0.948 0.916 0.938 0.905 0.911 0.928 0.915 0.950 0.971 0.948 0.908
Case 2: α = 0.5 – Pose Update 0.948 0.916 0.938 0.905 0.911 0.928 0.915 0.950 0.971 0.948 0.908
SBDA
Case 2: α = 1.0 – No Pose Update 0.008 0.008 0.008 0.008 0.025 0.006 0.023 0.006 0.995 0.006 0.006
Case 2: α = 0.8 - Pose Update 0.007 0.007 0.007 0.007 0.010 0.006 0.010 0.008 0.994 0.006 0.006
Case 2: α = 0.5 – Pose Update 0.007 0.007 0.007 0.007 0.010 0.006 0.010 0.008 0.994 0.006 0.006

Track 3 Target Certainty O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11


JBPDA BMP BTR HMV M109 M110 M113 M1 M2 M35 M548 T72
Case 1: α = 1.0 – No Pose Update 0.095 0.080 0.102 0.082 0.083 0.080 0.082 0.499 0.108 0.077 0.080
Case 1: α = 0.8 – Pose Update 0.095 0.080 0.102 0.082 0.083 0.080 0.082 0.520 0.108 0.077 0.080
Case 1: α = 0.5 – Pose Update 0.079 0.079 0.084 0.086 0.083 0.010 0.088 0.216 0.076 0.072 0.081
SBDA
Case 1: α = 1.0 – No Pose Update 0.989 0.989 0.989 0.989 0.981 0.989 0.981 0.980 0.989 0.989 0.989
Case 1: α = 0.8 – Pose Update 0.989 0.989 0.989 0.989 0.980 0.989 0.981 0.980 0.989 0.989 0.989
Case 1: α = 0.5 – Pose Update 0.988 0.988 0.988 0.988 0.983 0.987 0.983 0.981 0.989 0.989 0.989
JBPDA
Case 2: α = 1.0 – No Pose Update 0.098 0.088 0.080 0.090 0.090 0.078 0.082 0.108 0.454 0.093 0.090
Case 2: α = 0.8 - Pose Update 0.055 0.080 0.059 0.090 0.084 0.068 0.080 0.064 0.580 0.049 0.088
Case 2: α = 0.5 – Pose Update 0.055 0.080 0.059 0.090 0.084 0.068 0.080 0.064 0.410 0.049 0.088
SBDA
Case 2: α = 1.0 – No Pose Update 0.988 0.988 0.988 0.988 0.970 0.989 0.973 0.989 0.969 0.989 0.989
Case 2: α = 0.8 - Pose Update 0.988 0.988 0.988 0.988 0.985 0.989 0.985 0.987 0.983 0.989 0.989
Case 2: α = 0.5 – Pose Update 0.988 0.988 0.988 0.988 0.985 0.989 0.985 0.987 0.983 0.989 0.989

7.5 Baseline Case

To reference the baseline Joint Probabilistic Data Association Filter (JPDAF) [2] approach, the plots below show that the JPDAF performs well with constant-heading objects.

[Three-panel figure: position tracks (solid) vs. truth (dotted) for Objects 1-3 (O1, O2, O3) with clutter density 5 and measurement variance 50; left: JPDAF; center: JBPDA (no ID / no pose update, α = 1.0); right: JBPDA (ID / no pose update, α = 1.0).]

Figure 7.23. JPDAF and JBPDA for Constant Heading Objects.

[Two-panel figure: JBPDA position tracks (solid) vs. truth (dotted) for Objects 1-3 with clutter density 5 and measurement variance 50; left: α = 0.8; right: α = 0.5.]

Figure 7.24. JBPDA for Constant Heading Objects, α = 0.8 and α = 0.5.

[Three-panel figure: SBDA position tracks (solid) vs. truth (dotted) for Objects 1-3 with clutter density 5 and measurement variance 50; panels: α = 1.0, α = 0.8, and α = 0.5.]

Figure 7.25. SBDA for Constant Heading Objects, α = 1.0, α = 0.8, and α = 0.5.

As the plots show, the JBPDA is similar in performance to the standard tracking algorithm, the JPDAF, when the objects move with constant headings. The results also show that using the pose update for constant-heading systems might introduce errors in the tracking of objects. Tables 7.10 and 7.11 show that the errors are approximately the same for the standard JPDAF as for the JBPDA and SBDA developed in the dissertation.

Table 7.10. Average Normalized Position Squared Errors for Constant Heading.

SQ Pos. Error
Case – JBPDA Track 1 Track 2 Track 3
Case 3: JPDAF – No ID / No Pose Update 46.0840 46.9884 31.3716
Case 3: α = 1.0 – No ID / No Pose Update 42.7600 48.4068 31.6147
Case 3: α = 1.0 – ID / No Pose Update 30.3449 32.6880 26.9010
Case 3: α = 0.8 – ID / Pose Update 28.7443 25.7366 27.1253
Case 3: α = 0.5 – ID / Pose Update 40.1355 34.6788 32.8302
Case – SBDA Track 1 Track 2 Track 3
Case 3: α = 1.0 – ID / No Pose Update 30.5447 33.5493 26.7431
Case 3: α = 0.8 – ID / Pose Update 31.7335 32.6143 28.5653
Case 3: α = 0.5 – ID / Pose Update 33.0325 32.6528 29.3414

Table 7.11. Average Normalized Velocity Squared Errors for Constant Heading.

SQ Vel. Error
Case – JBPDA Track 1 Track 2 Track 3
Case 3: JPDAF – No ID / No Pose Update 2.0817 2.1155 1.5347
Case 3: α = 1.0 – No ID / No Pose Update 2.8104 3.3473 1.5459
Case 3: α = 1.0 – ID / No Pose Update 0.8734 1.0254 0.8700
Case 3: α = 0.8 - ID / Pose Update 1.8890 1.5308 1.9287
Case 3: α = 0.5 – ID / Pose Update 4.6500 2.6945 3.1157
Case – SBDA Track 1 Track 2 Track 3
Case 3: α = 1.0 – ID / No Pose Update 0.8800 1.0196 0.9026
Case 3: α = 0.8 - ID / Pose Update 0.9611 1.0125 0.9801
Case 3: α = 0.5 – ID / Pose Update 0.9443 1.0241 0.9336

7.6 Summary of the Results Section

Chapter 7 presented results from the simulations of the JBPDA and the SBDA. Two scenarios were run to examine the benefits and limitations of the algorithms. The difference between the tracking and ID methods was the calculation of the event matrix; however, the output of each was the selection of the measurement (the MTI position hit and the HRR object ID). Since the end result of the data association would be the same, given that the algorithms choose the correct measurement, the only practical difference was that the completely set-based approach accumulated evidence in the target beliefs and was faster to implement. For comparison to the standard data-association tracking algorithm, we showed that the SBDA and JBPDA performed as well as the JPDAF for the constant-heading case.

Additionally, the pose update was investigated. It was shown that the pose update from the tracker helps in the maneuvering case, albeit by a small amount. The significant tracker gain was in using object identification to discern the position measurement from clutter.

The results are as follows:

1) Pose update from ID helps in tracking.
2) Resolution of tracking and ID should be optimized for sensor resolution.
3) The full-state approach with no ID update performs worse than tracking with ID.
4) ID significantly improves tracking.
5) Pose updates help focus HRR collection for correct target ID.
6) The SBDA is less robust than the JBPDA, but if the system knows what target it is looking for, the SBDA performs better than the JBPDA.

8.0 DISCUSSION

The results show that measurement tracking alone (JBPDA without belief-ID and belief-pose updates) incorrectly associates some of the measurement data of the other objects with that of the tracked object. The JBPDA feature-identification tracking algorithm, which uses amplitude and range-bin features with measurement and object uncertainty, detects the HRR profile of an object and assigns the correct measurement to the objects in clutter. In the case of the JBPDA filter, using the ID information weights the position measurements by the belief ID. Since the belief update further refines the kinematic probability of the position measurements, the filter is expected to do better because it exploits more information. It is noted that the discrimination of the objects was cluttered with HRR profiles near the pose angle of the object, which was deemed a more challenging problem than spurious partial HRR measurements. The belief filter was able to ID an object to discern the measurement from clutter and estimate a pose to update the track state. However, the optimal weighting of track-pose to belief-pose was found to be 0.8 to 0.2, respectively.
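As a minimal sketch of this weighting (illustrative variable names, not the dissertation's code), the fused pose can be formed as a convex combination of the track-pose and the belief-pose, blended along the shorter angular arc so that the 0°/360° wraparound discussed below does not corrupt the update:

% Convex pose fusion; alpha is the track-pose weight (0.8 found optimal above)
alpha       = 0.8;
pose_track  = 2.0;     % pose estimate from the kinematic track state (deg)
pose_belief = 358.0;   % pose estimate from the HRR belief-ID match (deg)
dpose      = mod(pose_belief - pose_track + 180, 360) - 180;  % wrap to (-180, 180]
pose_fused = mod(pose_track + (1 - alpha)*dpose, 360);        % 2 + 0.2*(-4) = 1.2 deg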

For the SBDA, the “AND” function was used instead of the “OR” function for the event matrix. While the AND is less robust, it captured the same information as the OR function, and the results were similar due to the choice of events; however, a significant gain was achieved in accumulating evidence about the object type.
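A small sketch of the distinction (hypothetical gate vectors over three validated measurements): the OR form accepts a measurement if either the kinematic gate or the ID gate validates it, while the AND form requires both, admitting fewer candidate events:

kin_gate  = [1 1 0];             % measurements inside the kinematic validation gate
id_gate   = [1 0 1];             % measurements whose HRR belief-ID matches the track
omega_or  = kin_gate | id_gate;  % robust event row:    [1 1 1]
omega_and = kin_gate & id_gate;  % selective event row: [1 0 0]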

We ran many situations with different maneuvers, targets, and target types, and similar results were obtained. In the case that the targets had a direct track, with no heading changes, the belief filter with pose update did worse due to the coarse nature of the HRR target set. The HRR target set was collected for pose angles of ±3°; however, if a signature was corrupted, it was removed from the data set. Thus, in some cases the testing pose angle was more than 6° from the estimated pose. For example, if the direct track was constant at 000° and the estimated pose was 002°, the belief matching update spanned roughly 358° to 008°. The belief tracker consequently overcorrected at each step, producing oscillations in the track, as opposed to the JBPDA without the ID pose update, which maintained a constant track.

Although MHT is considered the optimal tracking filter, many implementations rely on some form of hypothesis pruning, such as gating, clustering, and N-scanback approximations [136]. These pruning techniques help to reduce the computational complexity of the algorithm; however, the computations required to generate the hypotheses, and the subsequent ones to prune them, still take a considerable amount of time. The JBPDA and SBDA take advantage of additional sensors, or two simultaneous scans from two sensors, to essentially prune hypotheses before they are initiated. Compared to other tracking algorithms, the JBPDA and SBDA require more computation than the JPDAF [2], but should require less than MHT algorithms. Like all pruning techniques, the JBPDA and SBDA are suboptimal, but they take advantage of another radar processing mode to track objects. The JBPDA and SBDA seek to go beyond standard tracking techniques to not only track an object but also identify it.

Finally, the belief filter goes beyond Bayesian-type methods to capture incomplete knowledge. A tracking and ID filter can be developed based on purely Bayesian methods [42]; however, since the Bayesian ID information is normalized over only the available information, incorrect target classifications result [102]: the Bayesian method chooses only the most likely classification from the given measurements, which can produce incorrect pose updates. We added robustness to the algorithm by hypothesizing over plausible objects. Like other algorithms that seek to alter the Kalman filter innovation [137], we use identification information to aid tracking and tracking information to cue HRR classification. Object-maneuver detection is done by the combined track-ID state-innovation gating. Additionally, the position measurements from the MTI are confirmed with ID information updates, which is more than preprocessing the measurement [138], since the MTI and HRR information is fused in real time.
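To make the normalization point concrete (a standard contrast, not an equation taken from the dissertation): a Bayesian classifier must spread all of its posterior mass over the n modeled target types,

\[
P(O_i \mid z) \;=\; \frac{P(z \mid O_i)\,P(O_i)}{\sum_{j=1}^{n} P(z \mid O_j)\,P(O_j)},
\qquad \sum_{i=1}^{n} P(O_i \mid z) = 1,
\]

whereas a basic probability assignment may reserve mass for the full frame of discernment,

\[
\sum_{A \subseteq \Theta} m(A) = 1 \quad \text{with} \quad m(\Theta) > 0 \ \text{allowed},
\]

so a measurement from an unmodeled target still forces the Bayesian posterior onto some library type, while the belief filter can let the ignorance mass m(Θ) absorb it.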

9.0 CONCLUSIONS

The dissertation presented the joint belief-probabilistic data association (JBPDA) and the set-based data association (SBDA) approaches for simultaneous multitarget tracking and identification. The techniques demonstrate promise for cluttered tracking problems where strict position-measurement data association might not be effective for multitarget tracking. The JBPDA is more robust than the SBDA, and than the JBPDA without belief-ID and belief-pose updates, for identifying closely spaced maneuvering targets; however, it requires multiple sensors, is limited by the resolution of the sensors and the training set, and requires more computations. The dissertation demonstrated the use of the JBPDA using simulated MTI hits and real HRR data, and found weights of 0.8 for the tracker and 0.2 for the identifier to be the solution that minimized position error. The SBDA is less robust than the JBPDA, but its use of evidence accumulation resulted in a lower tracking-state error than the JBPDA.

In the implementation of the JBPDA and the SBDA, the correct object was found for each track. Additionally, target maneuvers were captured, which is an improvement over standard data-association techniques. Significant gains were achieved with identification; however, pose-update gains were small due to the resolution of the data set.

9.1 Summary of Work

The dissertation overviewed the development of a multiresolutional data-association technique to track and identify targets. We overcame the data-association problems inherent in kinematic tracking algorithms by associating ID measurement information with tracking information. The results from a simulated MTI tracker and real HRR target IDs demonstrated that the JBPDA and SBDA are robust, stable, and implementable.

9.2 Contributions of Dissertation

The contributions of the work apply to three research thrusts. The first is information fusion. In the fusion of sensory information, similar sensors are assumed to process similar information; however, as sensors and algorithms vary, the fusion of information becomes more complex. For the dissertation work, the uncertainty calculus, as derived from the belief and probability axioms, has been a debated topic for numerous years. We presented one possible way to combine uncertainty information, which facilitated a recursive algorithm for updating beliefs in a tracking scenario. For the Uncertainty in Artificial Intelligence (UAI) community, the combination of information warrants attention because the stability of the algorithm was verified through Monte Carlo simulations and could be useful for other techniques.

The second contribution is in data-association tracking. Fusing tracking and identification information enhanced algorithms that typically process only kinematic measurements. Additionally, although theory has been developed to support set-theory tracking, no algorithms have thoroughly defined how the information would be processed, and no set-theory tracking results had previously been presented with real data; the work thus supports the novelty and usefulness of the algorithms.

The third major contribution is that of HRR tracking. Only a few people have worked in the field of HRR tracking, and while the initial work of others follows the formulation of the dissertation, the contribution of the dissertation goes beyond Bayesian techniques to track and ID HRR signatures in a robust manner. All of the other contributors to the research area have assumed that the most probable target match is the HRR ID. By using the set-theory approach and the propagation of beliefs, we have demonstrated that maintaining a belief in a set of targets is more robust to classification errors. We have demonstrated that the STaF algorithm can be used in a tracking scenario and have made some minor changes to facilitate a recursive tracking and ID algorithm.

Finally, the research complements the work of others in the field, and nothing in the dissertation discredits traditional algorithms; the traditional algorithms, given their sets of assumptions, remain valid in the dissertation work. We merely relaxed assumptions to design a more robust way to track and ID targets from HRR signatures of a moving target in clutter.

9.3 Future Work

As with every research project, many assumptions were made to implement the algorithm. Further investigations will be needed with higher-resolution data and further relaxations of the scenario. Below is a brief set of ideas to further the work:

- Combine belief updates with IMM, multiresolutional, and MHT algorithms
- Run more scenarios with a variety of target maneuvers
- Increase the target and clutter density
- Incorporate group tracking in the scenario
- Increase the pose accuracy of the target training set
- Incorporate other sensors such as GPS, IFFN, and ESM
- Demonstrate how a human can use the confidence information
- Use HRR and SAR in the move-stop-move scenario

Acknowledgements

The work was supported by the Air Force Research Laboratory. The author would like to thank the Air Force Research Laboratory (AFRL) System-Oriented HRR Automatic Recognition Program (SHARP) and the Defense Advanced Research Projects Agency (DARPA) TRUMPETS program for their help in collecting and processing the HRR data.

REFERENCES

[1] D.B. Reid, ‘An algorithm for tracking multiple targets’, IEEE transactions on Automatic Control,
Vol. 24, pp. 282-286, 1979.
[2] Y. Bar-Shalom and X. Li, Multitarget-Multisensor Tracking: Principles and Techniques, YBS,
New York, 1995.
[3] L. Hong and T. Scaggs, ‘Real-time optimal filtering for stochastic systems with multiresolutional
measurements,’ Systems and Control Letters, Vol. 21, No. 5, pp. 381-387, 1993.
[4] L. Hong, W. Wang, M. Logan, and T. Donohue, ‘Multiplatform multisensor fusion with adaptive
rate data communication,’ IEEE Trans. On Aerospace and Electronic Systems, Vol. 33, No. 1, pp.
274-281, 1997.
[5] L. Hong, ‘Multiresolutional Distributed filtering,’ IEEE Transactions on Automatic Control, Vol.
39, No. 4, pp. 853-856, 1994.
[6] L. Hong, ‘Multiresolutional multiple-model target tracking,’ IEEE Transactions on Aerospace and
Electronic Systems, Vol. 30, No. 2, pp. 518-524, April 1994.
[7] L. Hong, ‘Multirate Interacting Multiple Model Filtering for Target Tracking Using Multirate
Models,’ to appear in IEEE Trans. On Automatic Control.
[8] Z. Ding and L. Hong, ‘Decoupling probabilistic data association algorithm for multiplatform
multisensor tracking,’ Optical Engineering, ISSN 0091-3286, Vol. 37, No. 2, Feb. 1998.
[9] P. Maybeck, Stochastic Models, Estimation and Control, Academic Press, New York, 1979.
[10] T. C. Wang and P.K. Varshney, ‘A Tracking Algorithm for Maneuvering Targets,’ IEEE Trans.
AES, Vol. 29, No. 3, July 1993.
[11] L. Hong, N. Chou, S. Cong, and D. Wicker, ‘Interacting Multipattern Data Association (IMPDA)
for dim Target Tracking’, to appear in Signal Processing.
[12] F. Daum, ‘Book Review of Multitarget-Multisensor Tracking: Principles and Techniques,’ IEEE
AES Systems Mag., pp. 39 – 42, July, 1996.
[13] O. E. Drummond, ‘Multiple Sensor Tracking with Multiple Frame, Probabilistic Data Association,’
Signal and Data Processing of Small Targets 1995, SPIE, Vol. 2561, 1995, pp. 322-336.
[14] S. S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems, Artech House
Publisher, Boston, 1999.
[15] Z. Ding and L. Hong, ‘Decoupling probabilistic data association algorithm for multiplatform
multisensor tracking,’ Optical Engineering, ISSN 0091-3286, Vol. 37, No. 2, Feb. 1998.
[16] A. B. Poore and O.E. Drummond, ‘Track Initiation and Track Maintenance Using
Multidimensional Assignment Problems,’ Network Optimization, P.M. Pardalos, D. Hearn, and W.
Hager, Eds. New-York: Springer-Verlag, 1997, pp. 407-422.
[17] E. P. Blasch and L. Hong, ‘Sensor Fusion Cognition using Belief Filtering for Tracking and
Identification,’ SPIE Aerosense, Vol. 3719, pg. 250-259, 1999.
[18] E. P. Blasch and L. Hong, ‘Simultaneous Tracking and Identification,’ Conference on Decision
Control, Tampa, FL, December 1998, pg. 249-256.
[19] A. Farina and F. A. Struder, Radar Data Processing Techniques, Vol. 1, Introduction and
Tracking, Vol. 2, Advanced Topics and Applications, Research Studies Press, Wiley, New York,
1985, 1986.
[20] R. Mahler, ‘Random Sets in Information Fusion’, in Random Sets: Theory and Applications, Eds.
J. Goutsias, R.P.S. Mahler, H. T. Nguyen, IMA Volumes in Mathematics and its Applications, Vol.
97, Springer-Verlag Inc., New York, pp. 129-164, 1997.
[21] S. Mori, ‘Random Sets in Data Fusion: Multi-object State-Estimation as a Foundation of Data
Fusion Theory,’ in Random Sets: Theory and Applications, Eds. J. Goutsias, R.P.S. Mahler, H.T.
Nguyen, IMA Volumes in Mathematics and its Applications, Vol. 97, Springer-Verlag Inc., New
York, pp. 185-207, 1997.
[22] S. Mori, ‘Random Sets in Data Fusion Problems,’ IRIS National Symposium on Sensor and Data
Fusion, Vol. 1, pp. 1-15, 1997.

[23] E. P. Blasch and L. Hong, ‘Set-theory Correlation Free Algorithm For HRRR Target Tracking,’
IRIS National Symposium on Sensor and Data Fusion, Vol. 1, pp. 155-164, 1999.
[24] S. S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems, Artech House,
Boston, MA, 1999.
[25] S. S. Blackman, Multiple Target Tracking with Radar Applications, Artech House, Norwood, MA,
1986.
[26] A. Farina and F. A. Studer, Radar Data Processing, Vol. 1; Introduction and Tracking, Vol. II:
Advanced Topics and Applications, Research Studies Press, Letchworth, Hertfordshire, England,
1985.
[27] W. D. Blair, ‘Toward the Integration of Tracking and Signal Processing for Phased Array Radar’,
in Proc. 1994 SPIE Conf. Signal and Data Processing of Small Targets, Vol. 2235, Orlando, FL,
Apr. 1994.
[28] P. Bogler, Radar Principles with Applications to Tracking Systems., Wiley, 1990.
[29] E. Daeipour, Y. Bar-Shalom, and X.R. Li, ‘Adaptive Beam Pointing control of a Phased Array
radar Using an IMM Estimator’, in Proc. 1994 American Control Conf., pp. 2093-2097, Baltimore,
MD, June 1994.
[30] J. R. Layne and E. P. Blasch, ‘Integrated Synthetic Aperture Radar and Navigation Systems for
Targeting Applications’, Technical Report WL-TR-97-1185, Wright Labs, WPAFB, OH, Sept.
1997.
[31] R. I. Odom, G. M. Stuart, and F. D. Gorecki, ‘Design and Performance Analysis of a JPDAF
Tracker for electronically Scanned Radar’, In Proceedings of AIAA Guidance and Control
Conference, Boston, MA, Aug., 1989.
[32] R. B. Washburn, T. Kurien, A. L. Blitz, and A. S. Wilsky, ‘Hybrid State Estimation Approach to
Multiobject Tracking for Surveillance Radars’. Technical Report 180, Alphatech, Inc. Burlington,
MA, Oct., 1984.
[33] L. D. Stone, M.V. Finn, and C. A. Barlow, Unified Data Fusion, Tech Rep., Metron Corp., January
26, 1996.
[34] L. D. Stone, M. V. Finn, and C. A. Barlow, ‘Uncluttering the Tactical Picture,’ IRIS Sensor and
Data Fusion Conf., Vol. 2, pp. 127 - 145, 1997.
[35] J. A. O’Sullivan, S. P. Jacobs, M. I. Miller, and D. L. Snyder, ‘A Likelihood-based Approach to
Joint Target Tracking and Identification,’ 27th Asilomar Conf. on Signals, Systems, & Computers,
Vol. 1, pp. 290 - 294, Nov 1993.
[36] S. P. Jacobs and J. A. O’Sullivan, ‘High Resolution Radar Models for Joint Tracking and
Recognition,’ IEEE National Radar Conf., pp. 99 - 104, May 1997.
[37] K. Kastella, ‘Joint multitarget probabilities for detection and tracking,’ SPIE AeroSense '97, April
21-25, 1997.
[38] S. Musick and K. Kastella, ‘Comparison of Sensor Management Strategies for Detection and
Classification,’ 9th National Symposium on Sensor and Data Fusion, March 1996.
[39] E. Libby, Application of sequence comparison methods to multisensor data fusion and target
recognition, Ph.D. Dissertation, AFIT, June 1993.
[40] E. Libby and P. Maybeck, ‘Sequence Comparison Techniques for Multisensor Data Fusion and
Target Recognition,’ IEEE Transactions on Aerospace and Electronic Systems, Vol. 32, No. 1, pp.
52 – 65, Jan 1996.
[41] J. R. Layne, ‘Automatic Target Recognition and Tracking Filter,’ SPIE AeroSense – Small Targets,
April 1998.
[42] J. R. Layne and D. Simon, ’A Multiple Model Estimator for Tightly coupled HRR ATR and MTI
Tracking’, 1998 SPIE Aerosense Conference, Algorithms for SAR Imagery V1, 1999.
[43] X. R. Li and Y. Bar-Shalom, ‘Detection threshold selection for tracking performance optimization,’
IEEE Transactions on Aerospace and Electronic Systems, Vol. AES-30, No. 3, July 1994, pp. 742–
749.
[44] X. R. Li and Y. Bar-Shalom, ‘Detection threshold selection for tracking performance optimization,’
Proc. IEEE 1992 Conf. on Command, Control, Communications and Intelligence Technology and
Applications, Utica, NY, June 1992, pp. 195–199.

[45] J. H. Mitzel, ‘Multitarget Tracking applied to ATR with Imaging Infrared Sensor,’ Ch. 9
Multitarget-Multisensor Tracking: Advanced Applications, Y. Bar-Shalom, Ed., Artech House,
1990, pp. 325–327.
[46] R. A. Mitchell and J. J. Westerkamp, ‘Robust Statistical Feature Based Aircraft Identification,’
IEEE AES, Vol 35, No. 3, pp. 1077 -1094, 1999.
[47] R. A. Mitchell and J. J. Westerkamp, ‘Statistical Feature Based Target Recognition,’ NAECON,
1998, pp. 111-118.
[48] Y. Bar-Shalom, H. M. Shertukde, and K. R. Pattipati, ‘Extraction of Measurements from an
Imaging Sensor for Precision Target Tracking’, IEEE Transactions Aerospace and Electronic
Systems, Vol. 25, pp. 863-872, Nov. 1989.
[49] Lee, K.M., Z. Zhi, R. Blenis, and E. Blasch, ‘Real-time vision-based tracking control of an
unmanned vehicle,’ IEEE Journal of Mechatronics - Intelligent Motion Control. October. pp. 971 -
978., June 1995.
[50] E. P. Blasch, ‘Flexible Vision-Based Navigation System for Unmanned Aerial Vehicles,’. Mobile
Robotics IX: SPIE’s International Symposium on Photonic Sensors and Controls for Commercial
Applications. Boston, MA, pp. 58 – 67, Oct 1994.
[51] C. Y. Chong. K.C. Chang, and S. Mori, ‘Tracking Multiple Targets with Distributed Acoustic
Sensors’, in Proc. 1987 American Control Conf., Minneapolis, MN, June 1987.
[52] P. Maybeck and . E. Mercier, ‘A Target tracker using Spatially Distributed IR Measurements’,
IEEE Transactions Automatic Control, Vol. 25, pp. 222-225, Apr. 1980.
[53] M. Efe and D. Atherton, ‘A Tracking Algorithm for both Highly Maneuvering and Non-
maneuvering Targets,’ CDC ’96, San Diego, CA, 1997, pg. 3150 – 3155.
[54] H. M. Shertukde and Y. Bar-Shalom, ‘Detection and Estimation for Multiple Targets with Two
Omnidirectional Sensors in the Presence of False Measurements’, IEEE Trans. Acoustics, Speech
and Signal Processing, ASSP-38:749-763, May 1990.
[55] M. A. Abidi and R.C. Gonzales. Data Fusion in Robotics and Machine Intelligence, Academic
Press, Inc., 1992.
[56] J. J. Clark and A. L. Yullie. Data Fusion for Sensory Information Processing System, Kluwer
Academic Publishers, 1990.
[57] R. C. Lou and M. G. Kay, Multisensor Integration and Fusion for Intelligent Machines and
Systems, Ablex Publishing Corp, 1995.
[58] B. V. Dasarathy, Decision Fusion, IEEE Computer Society Press, Los Alamitos, CA, 1994.
[59] P.K. Varshney,’ Multisensor Data Fusion,’ Electronics and Communication Engineering Journal,
vol. 9. pp. 245-253, Dec. 1997.
[60] P. K. Varshney, ‘Scanning the special issue on data fusion,’ Proc. IEEE, Vol. 85, pp. 3-5, 1997.
[61] T. Kirubarajan, Y. Bar-Shalom, K.R. Pattipati and L.M. Loew, ‘Interacting Segmentation and
Tracking of Overlapping Objects from an Image Sequence’, Proc. 35th IEEE Conf. Decision and
Control, San Diego, CA, Dec. 1997.
[62] E. Oron, A. K. Kumar and Y. Bar-Shalom, ‘Precision Tracking with Segmentation for Imaging
Sensors’, IEEE Trans. Aerosp. Electronic Systems, AES-29(3):977-987, July 1993.
[63] L. Hong, ‘Multiresolutional Distributed filtering,’ IEEE Transactions on Automatic Control, Vol.
39, No. 4, pp. 853-856, 1994.
[64] L. Hong, ‘Multiresolutional multiple-model target tracking,’ IEEE Transactions on Aerospace and
Electronic Systems, Vol. 30, No. 2, pp. 518-524, April 1994.
[65] Shafer, G., A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ,1976.
[66] Shafer, G. and J. Pearl (Eds.), Readings in Uncertainty Reasoning, Morgan Kuafmann Publishers,
San Mateo, CA,1990.
[67] R. Yager, J. Kacprzyk, and M. Fedrizzi. Advances in the Dempster-Shafer Theory of Evidence,
John Wiley and Sons, New York, 1994.
[68] G. J. Klir and T. A. Folger, Fuzzy Sets, Uncertainty, and Information, Englewood Cliffs, NJ,.1988.
[69] I. R. Goodman and H. T. Nguyen, Uncertainty Models for Knowledge based systems, North
Holland, Amsterdam, The Netherlands,.1985.

148
[70] L. Hong and A. Lynch, 'Centralized/Distributed Temporal-Spatial Information Fusion by
Dempster-Shafer Techniques with Applications to Target Identification,' IEEE Transactions on
Aerospace and Electronic Systems, vol. 28, No. 4., pp. 1144-1153, Oct. 1992.
[71] S. Mori, C.Y. Chong, E. Tse, an R. P. Wishner, ‘Tracking and Classifying multiple targets without
a priori identification,’ IEEE Transactions on Automatic Control, 31, (1986), pp. 401-409.
[72] S. Mori, ‘Random Sets in Data Fusion: Multi-object State-Estimation as a Foundation of Data
Fusion Theory,’ in Random Sets: Theory and Applications, Eds. J. Goutsias, R.P.S. Mahler, H.T.
Nguyen, IMA Volumes in Mathematics and its Applications, Vol. 97, Springer-Verlag Inc., New
York, pp. 185-207, 1997.
[73] S. Mori, ‘Random Sets in Data Fusion Problems,’ SPIE, Vol. 3163, pp. 278-289, 1997.
[74] I. R. Goodman, PACT: An Approach to combining linguistic-based and probability information in
correlation and tracking, Tech Report 1386, Naval Ocean Command and Control Ocean Systems
center, RDT&E Division, San Diego, CA, July 1992.
[75] T. Quach and M. Farooq, ‘A fuzzy logic-based target tracking algorithm,’ SPIE Vol. 3390, pp.
476–487, 1998.
[76] R. P. S. Mahler, ‘Random Sets in Information Fusion’, in Random Sets: Theory and Applications,
Eds. J. Goutsias, R.P.S. Mahler, H.T. Nguyen, IMA Volumes in Mathematics and its Applications,
Vol. 97, Springer-Verlag Inc., New York, pp. 129-164, 1997.
[77] I. R. Goodman, R. P. S. Mahler, and H.T. Nguyen, Mathematics of Data Fusion, Kluwer Academic
Publishers, Boston, MA, pp. 91-337, 1997.
[78] E. Waltz and J. Llinas, Multisensor Data Fusion, Artech House, Inc. 1990.
[79] D. L. Hall. Mathematical Techniques in Multisensor Data Fusion, Artech House, Inc. 1992.
[80] P. L. Bogler, ’Shafer-Dempster Reasoning with Applications to Multisensor Target Identification
Systems’, IEEE Transactions. Systems, Man, and Cybernetics, vol. 17. pp. 968-977, Dec. 1987.
[81] R. A. Dillard, ‘Computing Probability Masses in Rule-Based Systems,’ NOSC Technical Document
545, Naval Ocean Systems Center, Sept. 1982.
[82] L. Hong, ’Recursive Temporal-Spatial Information Fusion, Proc. of IEEE Conf. On Decision and
Control, pp. 3510-3511, Tucson, AZ, Dec. 1992.
[83] L. Hong, 'Recursive Algorithms for Information Fusion Using Belief Functions with Applications
to Target Identification,’ Proc. of IEEE 1st Intl. Conference on Control Applications, pp. 1052-
1057, Dayton, OH, Sept. 1992.
[84] L. Hong, 'Distributed Filtering Using Set Models with Confidence Values,' Proc. of 1992 American
Control Conf., pp. 2129-2133, Chicago, IL, June. 1992.
[85] E. P. Blasch, ‘Learning Attributes For Situational Awareness in the Landing of an Autonomous
Aircraft,’ Proceedings of the Digital Avionics Conference, San Diego, CA, October, pp. 5.3.1 –
5.3.8., 1997.
[86] D. Marr. Vision, San Francisco, CA: W. H. Freeman, 1982.
[87] J. K. Aggarwal. Multisensor Fusion for Computer Vision, Springer Verlag, 1993.
[88] K. Fukunaga, Introduction to Statistical Pattern Recognition. Academic Press, Inc., second edition,
1990.
[89] D. R. Wehner, High Resolution Radar, Artech House, Inc., Norwood, MA, 1987.
[90] W. G. Carrara, R. S. Goodman, and R. M. Majewski, Spotlight Synthetic Aperture Radar - Signal
Processing Algorithms, Artech House, Inc., Norwood, MA, 1995.
[91] E. P. Blasch and M. Bryant, ‘Information Assessment of SAR Data For ATR,’ Proceedings of
IEEE National Aerospace and Electronics Conference. Dayton, OH, July, pp. 414 – 419, 1998.
[92] E. P. Blasch, 'SAR Information Exploitation Using an Information Filter Metric,' 7th ATRWG,
Monterey, CA, March 2-4, 1999.
[93] E. P. Blasch, S. Alsing, and R. Bauer. ‘Comparison of bootstrap and prior probability synthetic
data balancing method for SAR target recognition,’ SPIE Int. Sym. On Aerospace/Defense
Simulation and Control, ATR, Orlando, FL, 13-17 April, 1999.
[94] M. Fennell and R. Wishner, ‘Battlefield awareness via synergistic SAR and MTI exploitation’
IEEE Aerospace and Electronic Systems Magazine, Feb. 1998.
[95] E. P. Blasch, ‘Sensor Management Issues for SAR Target tracking and Identification,’ EuroFusion,
Stratford-Upon-Avon, UK, Oct. 1999, pg. 279-286.

149
[96] J. J. Westerkamp, S. Worrell, R. Williams, D. Wardell, and M. Ressler, ‘Robustness issues for id
air-to-ground moving target ATR,’ In Automatic Target Recognition Working Group, Huntsville,
AL, Oct. 1997.
[97] A. Shaw, R. Vashist, T. Rakshit, ‘Performance of eigen-template-based ATR for unknown targets’,
1998 SPIE Aerosense Conference, Algorithms for SAR Imagery V1, 1999.
[98] J. J. Westerkamp, et. al, ‘Robust Feature-based Bayesian ground target recognition using decision
confidence for unknown target rejection’, 1998 SPIE Aerosense Conference, Algorithms for SAR
Imagery V1, 1999.
[99] R. Bhatnagar, ‘Belief function based approach for classification of HRR signatures,’ 1998 SPIE
Aerosense Conference, Algorithms for SAR Imagery V1, 1999.
[100] R. Williams and D Gross et. al, ‘Analysis of a 1D HRR moving target ATR’, 1998 SPIE Aerosense
Conference, Algorithms for SAR Imagery V1, 1999.
[101] R. Williams, J. Westerkamp, D. Gross, and A. Palomino, ‘HRR NCTR of Ground Moving Targets
for UAV, Attack, and Space-Based Applications,’ NATO Conference, 1999.
[102] R. A. Mitchell and J. J. Westerkamp, ‘Statistical Feature Based Target Recognition,’ NAECON,
1998, pp. 111-118.
[103] R. A. Mitchell and J. J. Westerkamp, ‘High range resolution radar target identification using a
statistical feature based classifier with feature level fusion,’ in ATRWG, Huntsville, AL, October
1997.
[104] R. A. Mitchell. Robust High Range Resolution Radar Target Identification using a Statistical
Feature Based Classifier with Feature Level Fusion. PhD thesis, University of Dayton, Dayton,
OH, December 1997.
[105] E. P. Blasch and J. Gainey, ‘Feature Based Biological Sensor Fusion,’ Intl. Conference on Info.
Fusion, 1998, pp. 702-709.
[106] E. P. Blasch and L. Hong, ‘Simultaneous Tracking and Identification,’ Conference on Decision
Control, Tampa, FL, December 1998, pg. 249-256.
[107] A. N. Steinberg, C. L. Bowman, and F. E. White, 'Revisions to the JDL Data Fusion Model,'
Fusion99, Sunnyvale, Ca, 1999.
[108] R. Kruse, E Schwencke, and J. Heinsohn, Uncertainty and Vagueness in Knowledge-Based
Systems, Springer-Verlag, New York City, New York, 1991.
[109] R. P. S. Mahler, ‘Random Sets in Information Fusion’, in Random Sets: Theory and Applications,
Eds. J. Goutsias, R.P.S. Mahler, H.T. Nguyen, IMA Volumes in Mathematics and its Applications,
Vol. 97, Springer-Verlag Inc., New York, pp. 129-164, 1997.
[110] K. Hestir, H.T. Nguyen, and G.S. Rogers, ‘A random set formalism for evidential reasoning,’
Conditional Logic in Expert Systems (I.R. Goodman, N.M. Gupta, H.T. Nguyen, and G.S. Rogers,
eds.) Amsterdam, The Netherlands: North-Holland, 1991, pp. 309-344.
[111] H. T. Nguyen, ‘On random sets and belief function,’ Journal of mathematical Analysis and
Applications, Vol 65, pp. 531-542, 1978.
[112] P. Smets, ‘The transferable belief model and random sets,’ International Journal of Intelligent
Systems, Vol 7, pp. 37 – 46, 1992.
[113] A. J. Gonzalez and D. Dankel, The Engineering of Knowledge-Based Systems, Theory and
Practice, Prentice Hall, Englewood Cliffs, New Jersey, 1993.
[114] D. Buede, ‘Shafer-Dempster and Bayesian reasoning: A response to ‘Shafer-Dempster reasoning
with applications to multisensor target identification’,’ IEEE Transaction on Syst., Man and Cyber.,
vol.18, pp.1-10, 1988.
[115] A. P. Dempster, ‘Construction and local computation aspects of network belief functions,’ Ch 6 of
Influence Diagrams, Belief Nets, and Decision Analysis, eds. R. M. Oliver and J. Q. Smith, Wiley,
1990.
[116] G. Shafer, ‘Propagating belief functions in qualitative Markov trees,’ International Journal of
Approximate Reasoning, Vol. 3, pp. 383-411, 1987.
[117] M. T. Fennell and Richard P. Wishner, ‘Battlefield awareness via synergistic SAR and MTI
exploitation,’ IEEE Aerospace and Electronic Systems Magazine, Feb. 1998.
[118] F. E. Daum, ‘Bounds on Performance for Multiple Target Tracking,’ IEEE Transactions on
Automatic Control, Vol. 35(4). pp. 443-446, Apr. 1990.

150
[119] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ, 1976.
[120] G. Shafer and J. Pearl (Eds.), Readings in Uncertainty Reasoning, Morgan Kaufmann Publishers,
San Mateo, CA, 1990.
[121] R. Yager, J. Kacprzyk, and M. Fedrizzi. Advances in the Dempster-Shafer Theory of Evidence,
John Wiley and Sons, New York, 1994.
[122] J. C. Spall, ed. Bayesian Analysis of Time Series and Dynamic Models, Marcel Dekker Inc., New
York, 1988.
[123] R. Hummel and M. Landy, ‘Evidence as Opinions of Experts,’ UAI86, Proceedings of the Second
Conference on Uncertainty in Artificial Intelligence, Elsevier Science Publishing Co., New York,
NY 1988, pp. 136-143.
[124] P. Fua, ‘Using Probability Density Functions in the Framework of Evidential Reasoning’,
Uncertainty in Knowledge-Based Systems, Lectures Notes in Computer science, Springer Verlag
[125] P. Fua, ‘Deriving and Combining Continuous Possibility Functions in the Framework of Evidential
Reasoning,’ from UAI collections, pp. 85-90, 1984.
[126] E. H. Ruspini, ‘Approximate Deduction in Single Evidential Bodies,’ UAI Collections, pp. 215-
222. 1983.
[127] D. Dubois and H. Prade, Possibility Theory – An Approach to Computerized Processing of
Uncertainty, Plenum Press, New York, pp. 1986.
[128] M. S. Grewal and A. P. Andrews, Kalman Filtering – Theory and Practice, Prentice Hall, Upper
Saddle River, NJ,1993.
[129] K. Shanmugan and A. Breipohl, Random Signals: Detection, Estimation, and Data Analysis, John
Wiley and Sons, New York, 1988.
[130] S. Musick and K. Kastella, 'Comparison of Sensor Management Strategies for Search and
Classification', 9th National Symposium on Sensor Fusion, Naval Postgraduate School, Monterey,
CA, March 11 - 13, 1996.
[131] E. Zelnio and F. Garber, 'Characterization of ATR performance Evaluation,' SPIE Signal
Processing, Sensor Fusion, and Target Recognition V, Orlando, FL, April 8 - 10, 1996.
[132] P. Viola and M. Wells, 'Alignment by Maximization of Mutual Information,' The International
Conference on Computer Vision, June 1995.
[133] D. Castañon 'Optimal Detection Strategies in Dynamic Hypothesis Testing,' IEEE Transactions on
Systems, Man, And Cybernetics, Vol. 25, No. 7, July 1995, pgs. 1130-1138.
[134] V. Raghavan, P. K. Willet, K. R. Pattipati and D. L. Kleinman, 'Optimal Measurement Sequencing
in M-array Hypothesis Testing Problems', in Proc. of 1992 American Controls Conference,
Chicago, IL, June 1992.
[135] T. Cover and J. Thomas, Elements of Information Theory, John Wiley and Sons, 1991.
[136] A. Farina, A. Graziano, and R. Miglioli, ‘Multiple Hypothesis Tracking (MHT) for monoradar and
multiradar tracking’, published by Alenia Systems, Italy, Mar. 1996.
[137] T. C. Wang and P. Varshney, ‘A Tracking Algorithm for Maneuvering Targets’, IEEE Trans. on
Aero. and Elec. Sys., Vol. 29, No. 3, July 1993.
[138] T. C. Wang and P. Varshney, ‘Measurement preprocessing approach for target tracking in a
cluttered environment’, IEE Proc-Radar, Sonar Nav., Vol. 141, No. 3, June 1994.

LIST OF ABBREVIATIONS

BF – Belief Filter
CDF – Cumulative Distribution Function
DA – Data Association
GMTI – Ground Moving Target Indicator
HRR – High Range Resolution Radar
ID – Identification
IMM – Interacting Multiple Model
JBPDA – Joint Belief-Probabilistic Data Association
JPDAF – Joint Probabilistic Data Association Filter
MHE – Multiple Hypothesis Estimation
MHT – Multiple Hypothesis Tracking
MI – Mutual Information
MLE – Maximum Likelihood Estimator
MME – Multiple Model Estimator
MSE – Mean Squared Error
MSTAR – Moving and Stationary Target Acquisition and Recognition Program
MTI – Moving Target Indicator
PDF – Probability Density Function
PMF – Probability Mass Function
SAR – Synthetic Aperture Radar
SBDA – Set-Based Data Association
STaF – Statistical Feature-Based Classifier (by Rick Mitchell)

APPENDIX A TRACKING

The optimal standard tracking algorithm is the MHT algorithm, and Bar-Shalom’s book details a variety of
tracking algorithms. While many approaches are described, for ease of comparison they are summarized in
the table below, alongside the proposed belief tracking algorithm.

Table A.1. Overview of Tracking Algorithms (adapted from [12]).

Columns: Algorithm | Time Horizon (No. of Samples) | No. of Data Assoc. Hypotheses | Unresolved Data Modeled in Algorithm | Relative Performance in Defense Environment (Unresolved Data / Resolved Data) | Computational Complexity for Multiple Targets (Exact Solution / Approx. Solution)

NN 1 1 No Poor Poor Low Low
NN-M 1 1 Yes Fair Poor Low Low
PDA 1 1 No Poor Fair Low Low
JPDA 1 1 No Fair Good Exponential Medium
JPDAM 1 1 Yes Good Good Exponential Low
NN-JPDA 1 1 No Fair Good-excellent Polynomial Medium
Assignment 1 1 No Fair Good-excellent Polynomial Medium
DP – Viterbi Many 1 No Poor Good Polynomial Medium
Hough Transform Many 1 No Fair Good Polynomial Medium
MHT Many Many No Good Optimal Exponential High
MHT-M Many Many Yes Best Excellent Exponential High
Morefield Many Many No Fair Excellent Exponential High
SM-EKF Many Many No ? Good Polynomial High
SM-ENKF Many Many No ? Excellent Exponential High
Branching Many Many No Fair Excellent Bounded Med.-High
Branching-M Many Many No Good Excellent Bounded Med.-High
MDA Many Many No Good Excellent - High
MDA-M Many Many Yes Excellent Excellent - High
Exact N-Best Hyp. Many N No Good Excellent - Medium
IMM Many Many No Good Good Bounded High
IMM-PDAF Many Many No Fair Good Exponential Medium
JMP Many Many No ? Excellent Exponential Low
ATR-Filter Many Many No Excellent Excellent Bounded High
MPDAF Many Many No Excellent Excellent Exponential High
ATRF Many Many Yes Excellent Excellent Bounded Medium
BF Many Many Yes Excellent Excellent Exp.-Bounded High

Acronyms
ATRF – Automatic Target Recognition Filter
BF – Belief Filter
DP – Dynamic Programming
IMM – Interacting Multiple Model
JMP – Joint Multiprobability
JPDAF – Joint Probabilistic Data Association Filter
KF – Kalman Filter
M – Modified
MPDAF – Multiple Pattern Data Association Filter
NN – Nearest Neighbor
SM – Symmetric Measurements

APPENDIX B FUSE ALGORITHM

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% SBDA: Using Dempster-Shafer Evidential Belief Filtering
%
% Inputs: measurements HRR Beliefs, MTI data
% Data = [O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11 Bel ]
% Objectives: Identify Object Type : O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11
% Truncation = threshold by which states with small belief values are discarded
% Data, Length = vector parameters to be fused
% Outputs: Fused data vector
% new_data - the unique, significant states and their evidence (belief) values
% new_length - the number of unique, significant states in the fused data vector
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [new_data,new_length] = fuseds(truncation,data1,length1,data2,length2)

% Function to fuse data for recursive Dempster-Shafer combination


% Data X - frame of discernment
% Belief function Bel: P(X) -> [0, 1]; each subset of X is assigned a value in [0, 1]
% Bel(A) - degree of belief that the true element of X belongs to set A
% Power set is all subsets of X: {A,B,C}, {A,B}, {A,C}, {B,C}, {A}, {B}, {C}, {null}
% Reduce computational complexity - exploit redundancy
% 1. Boundary conditions: Bel(null) = 0; Bel(X) = 1
% 2. Monotonicity: if A is contained in B, then Bel(A) <= Bel(B)
% 3. Continuity
% 4. Subsets: Bel(A1 U A2) >= sum(Bel(Ai)) - sum(Bel(Ai intersect Aj)) + - ...
%    Note: for probability the above holds with equality; belief uses >=
% 4a. Bel(A) + Bel(notA) <= 1
% 5. Plausibility: Pl(A) = 1 - Bel(notA), equivalently Bel(A) + Pl(notA) = 1;
%    Pl(A) is the information supporting A plus the information not refuting A,
%    thus Pl(A) >= Bel(A)
%    Note: if Pl(A) = Bel(A) for every A, then Bel is a probability function

% Basic probability assignment m, where each mass m is in [0, 1]:
% Bel(A) = sum over B contained in A of m(B), with m(null) = 0 and sum over A of m(A) = 1
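% Illustration (not part of the original algorithm): with the [indicator, mass]
% row format used below, Bel and Pl of a query set A follow directly:
%   Bel(A) = sum of row masses with all(B <= A)  (state B a subset of A)
%   Pl(A)  = sum of row masses with any(B & A)   (state B intersects A)
% e.g., rows {O1}: 0.5 and Theta: 0.5 give Bel({O1,O2}) = 0.5 and Pl({O1,O2}) = 1.0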
%

%% DEMPSTER'S RULE OF EVIDENCE COMBINATION

% Given basic probability assignments m1(A) and m2(B):
% m12(C) = sum over {A,B : A intersect B = C} of m1(A)*m2(B) / (1-K), if C != null
%        = 0, if C == null
% where K = sum over {A,B : A intersect B = null} of m1(A)*m2(B) is the conflict
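% Worked example (hypothetical masses): m1({A}) = 0.6, m1(X) = 0.4;
% m2({B}) = 0.7, m2(X) = 0.3, with A and B disjoint. Then
%   K        = m1({A})*m2({B}) = 0.42
%   m12({A}) = 0.6*0.3/(1-0.42) ~ 0.31
%   m12({B}) = 0.4*0.7/(1-0.42) ~ 0.48
%   m12(X)   = 0.4*0.3/(1-0.42) ~ 0.21   (the combined masses sum to one)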

% FORM EVERY PAIRWISE INTERSECTION: EACH COMBINED STATE IS THE ELEMENTWISE
% AND OF ONE STATE FROM EACH INPUT, WITH MASS THE PRODUCT OF THE INPUT MASSES
for i = 1:length1
for j = 1:length2
combined_data((i-1)*length2+j,1:11) = data1(i,1:11) & data2(j,1:11); % set intersection (elementwise AND)
combined_data((i-1)*length2+j,12) = data1(i,12)*data2(j,12); % mass product m1(A)*m2(B)
end;
end;

% Merge duplicate states in the combined vector: add the masses of identical
% states into the first occurrence, then zero out the duplicate row

for i = 1:length1*length2-1 % total combined vector length
for j = i+1:length1*length2
o_data(1,1:11) = combined_data(i,1:11) - combined_data(j,1:11); % zero iff the states are identical
zero = 0; % difference flag
for k = 1:11
if o_data(1,k) ~= 0 % the states differ in at least one element
zero = 1;
end;
end;
if zero == 0 % identical states: merge masses and zero the duplicate
combined_data(i,12) = combined_data(i,12) + combined_data(j,12);
combined_data(j,:) = zeros(1,12);
end;
end;
end;

% Remove all all-zero (merged-away or null-intersection) states so that the vector has
% minimal length; dropping the null-intersection states discards the conflict mass K,
% which the final renormalization compensates (the 1/(1-K) factor of Dempster's rule)

count = 0;
for i = 1:length1*length2
zero = 0;
for k = 1:11
zero = zero + combined_data(i,k);
end;
if zero ~= 0
count = count+1;
nonzero_combined_data(count,:) = combined_data(i,:);
end;
end;
nonzero_length = count;

% Remove all states that have evidence values less than the truncation level

count = 0;
for i = 1:nonzero_length
if nonzero_combined_data(i,12) > truncation
count = count+1;
truncdata(count,1:12) = nonzero_combined_data(i,1:12);
end;
end;
truncated_length = count;

% Sum the evidence values of the remaining unique, significant states

total = 0;   % renamed from 'sum' to avoid shadowing the MATLAB built-in sum()
for i = 1:truncated_length
total = total + truncdata(i,12);
end;

% Normalize the evidence values so that the remaining masses sum to one

for i = 1:truncated_length
truncdata(i,12) = truncdata(i,12)/total;
end;

% Pass those unique, significant states and normalized evidence values on to calling program
new_data = truncdata;
new_length = truncated_length;
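As a minimal usage sketch (hypothetical masses) under the data format described in the header comments, where each row is an 11-element set indicator over O1..O11 followed by its mass in column 12:

% Two bodies of evidence, each with mass on {O1} and on the full frame (ignorance)
data1 = [1 0 0 0 0 0 0 0 0 0 0  0.6;    % m1({O1})  = 0.6
         1 1 1 1 1 1 1 1 1 1 1  0.4];   % m1(Theta) = 0.4
data2 = [1 0 0 0 0 0 0 0 0 0 0  0.7;    % m2({O1})  = 0.7
         1 1 1 1 1 1 1 1 1 1 1  0.3];   % m2(Theta) = 0.3
[fused, n] = fuseds(0.01, data1, 2, data2, 2);
% fused holds {O1} with mass 0.88 and Theta with mass 0.12; n = 2

With no conflicting (null-intersection) states in this example, the final normalization is the identity; when conflict is present, removing the null rows and renormalizing the surviving masses implements the 1/(1-K) factor of Dempster's rule.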
