Abstract—Autonomous on-orbit satellite servicing and inspection benefits from an inspector satellite that can autonomously gain as much information as possible about the primary satellite. This includes performance of articulated objects such as solar arrays, antennas, and sensors. This paper presents a method of characterizing the articulation of a satellite using resolved monocular imagery. A simulated point cloud representing a nominal satellite with articulating solar panels and a complex articulating appendage is developed and projected to the image coordinates that would be seen from an inspector following a given inspection route. A method is developed to analyze the resulting trajectory matrix. The developed method takes advantage of the fact that the route of the inspector satellite is known, enabling the reprojection error to be used as an objective function for optimization of a model that best describes the feature points seen. Once the model is calculated, it can be compared to the known truth. Simulating the input data allows different inspection routes to be evaluated easily. In particular, the effects of fly-by and natural motion circumnavigation inspection routes with sub-optimal illumination are investigated using a set of newly proposed evaluation metrics.

U.S. Government work not protected by U.S. copyright

TABLE OF CONTENTS

1. INTRODUCTION
2. METHODS
3. RESULTS
4. CONCLUSIONS
ACKNOWLEDGMENTS
REFERENCES
BIOGRAPHY

1. INTRODUCTION

Due to the nature of the space environment, direct human interaction with on-orbit satellites is incredibly dangerous and costly. Therefore, with the exception of a few manned missions, interaction with satellites is traditionally limited to windows of radio communications and observations from telescopes. These methods are inadequate for inspection and monitoring of a satellite to determine performance. To perform these monitoring tasks, an inspector satellite in close proximity to the primary satellite could use sensors, such as a camera, to characterize the primary satellite and verify the performance of articulated objects such as sensors, solar arrays, robotic arms, and communications antennas. Imagery contains a great deal of information; however, it is also difficult to transmit to the ground due to the large size of video files. An articulated model could provide the necessary information to users on the ground with a fraction of the data transmission requirement of the raw imagery. This motivates the creation of algorithms that can characterize the performance of the target satellite autonomously using computer vision. Developing autonomous methods is also important for autonomous repair missions [1] or deep space missions that have little or no ability to communicate with human operators on Earth.

Research in the area of computer vision for proximity operations is very diverse, ranging from stereo vision used to estimate target satellite moment of inertia (MOI) [2] to demonstration of fully autonomous rendezvous and capture [1]. Some of these algorithms rely upon markers on the primary satellite that assist the computer vision algorithm in determining pose [3],[4]. Others rely on prior knowledge of the primary satellite's configuration [5],[6]. Some methods rely on stereo vision systems [2],[7], some on monocular vision systems [8],[9],[6], and some use laser illumination of reflective markers [1]. Regardless of the method, they all use computer vision to identify features in images and then match those features from frame to frame (or in corresponding frames in the case of stereo systems). The relative position of the features is then estimated using some type of estimation filter such as an extended Kalman filter (EKF) or a particle filter. The attitude of the primary satellite is then estimated by using feature points to define a primary reference frame [8],[10] or by using a known model of the primary satellite [5]. Alternatively, if the orientation of feature points is known, the attitude can be calculated directly and used as the measurement in the EKF [4],[8],[10]. Feature points can also be used to create a 3D model of the satellite [11]. Apart from their own previous work [12],[13], the authors have not discovered previous research attempting to use computer vision to determine if the primary satellite is in fact a rigid body, or if it is undergoing some type of articulation.

While articulation has not been investigated for space applications specifically, there has been extensive work on identifying and characterizing articulation in the computer vision field. The seminal works of Yan and Pollefeys [14] and Tresadern and Reid [15] extend the concepts of structure from motion through factorization of a trajectory matrix [16] from rigid bodies only to multiple linked rigid bodies. Numerous researchers have further extended these concepts to characterize articulated motion.

Yucer et al. [17] developed a method of reconstructing an object with multiple articulated motions using optimization techniques. First, a 'ray-space optimization' method is used
to convert each 2D image trajectory into a 3D trajectory. This method requires knowledge of the camera motion a priori; however, this requirement is not necessarily restrictive in an application in which the relative trajectory between the inspector and the primary satellite is known. The 3D trajectories of each point are then used to segment the motion into N rigid bodies. Once segmented, an optimization routine is used to find a rotation matrix, shape, and translation for each rigid body that minimizes the error in reprojecting the points to image coordinates. Next, the kinematic chain is estimated and another optimization is done with the articulation constraints enforced.

Paladini et al. [18] present a method in which the motion matrix (identifying the motion of the camera) and shape matrix (identifying the shape of the object) are solved for in an alternating fashion with least squares. First, an initial solution to the motion matrix is found using the method outlined by Tresadern and Reid [15]. This is then refined by optimizing a cost function that includes the metric constraints and the articulation constraints. They use the method in Marques and Costeira [19] to enable their method to handle missing data. Results suggest the algorithm is capable of handling trajectory matrices in which over 60% of the columns contain missing data without detriment to the accuracy of the recovered shape.

Zhang and Hung [20] approach the problem of recovering articulated structure from motion as an ellipsoid fitting problem. They begin by noting that each subset of the trajectory matrix representing the points on a rigid body can be represented as a 3D ellipsoid. Since points corresponding to the same motion will lie on the same ellipsoid, this can be used to segment the points. They use the error in fitting a point to an ellipsoid as a metric with which to segment the feature points into separate motions.

Much of the literature on recovery of articulated structure from motion is focused on capturing the motion of humans. Some of this research can be applied to articulated motion of any kind. For instance, Fayad et al. [21] present a method of automatically recovering the 3D shape and structure of an articulated body. Their method uses an optimization routine to assign feature points to the motion that minimizes the total re-projection error of the solution. The method allows points to belong to more than one motion. This overlap of points on multiple motions defines the joint between the two objects and allows the kinematic chain to be built within the optimization framework. Russell et al. [22] expand on the method of Fayad et al. to include the capability to segment motion into independent objects and further segment independent objects into multiple parts based on dependent motions such as articulation.

Understanding articulated motion is also important to enable robots to learn how objects move in order to allow the robots to manipulate the objects in the future. With this motivation, Pillai et al. [23] developed a training process by which the parameters of everyday articulated objects, such as doors and drawers, are determined from video of a person using those objects.

The method in this work builds on existing work in the computer vision field and applies it to the inspection of a satellite in space. An algorithm is developed to build an articulated model from simulated imagery taken from an inspector satellite on a particular inspection route. Evaluation metrics to quantify the quality of the model are developed, and the effect of inspection routes with sub-optimal illumination on these metrics is investigated.

2. METHODS

Method overview

The method developed herein extends [12] and [13] in attempting to detect and characterize the articulation and shape of a satellite from a simulated trajectory matrix that would be seen from a monocular camera on an inspector satellite. The method begins with an inspection route and the time history of a point cloud representing a nominal satellite, shown in Figure 1c, with articulated motion. Given the inspection route and the point cloud, a trajectory matrix is created using a pinhole camera model, taking into account occlusion and shadowing. The trajectory matrix is a 2F × P matrix where F is the number of image frames and P is the number of feature points. The trajectory matrix and the inspector satellite trajectory are then used to characterize the satellite and its articulated motion. Figure 1a provides an overview of the current method. This method is a combination of multiple static optimization routines similar to the process outlined by Yucer et al. [17].

Simulation set-up

For this work a nominal satellite was developed for testing. This satellite consists of a main body, two articulating panels, and a 6-component articulating arm. Each of the joints is a revolute joint, meaning that it articulates about a constant axis. Figure 1b shows the model satellite, which is approximately 2 units deep (x-direction), 10 units wide (y-direction), and 10 units tall (z-direction), with 8 articulation joints with labels and articulation axes shown. Points were randomly spread over each face to build the point cloud shown in Figure 1c. In this work the two panels and only 3 of the joints in the articulating arm were activated. The effect of different numbers of active appendage joints is assessed in previous work [13], where all 8 joints are active with 6 in a single kinematic chain. Adding links to the chain increases the complexity; however, with 6 joints in the appendage this method was demonstrated to be capable of building an accurate model of the satellite [13]. For this work, the effects of sub-optimal lighting conditions are investigated. To minimize variation, the motion of the satellite is the same in all cases. Over the length of the simulation, joints 1 and 2 articulate linearly from -1.26 radians to 1.26 radians while joints 3, 5, and 6 articulate linearly from -0.94 radians to 0.94 radians.

Two types of inspection routes are investigated in this work: a 2 × 1 elliptical natural motion circumnavigation (NMC) [24], and a fly-by in which the inspector route consists of a straight-line path in the relative frame. In each case, the route can be phased for the 'best' illumination conditions or it can be offset by some illumination offset angle (θ). The 'best' illumination conditions are assumed to be those in which the primary satellite is most illuminated as viewed by the inspector satellite; however, due to the nature of the space environment, even these illumination conditions are sub-optimal as compared to a terrestrial application in which an object can be lit from all sides. Figure 2 shows the meaning of θ for fly-by routes and NMCs. Fly-by routes consist of the inspector satellite moving past the primary satellite on a linear path in the relative frame that is parallel with the in-track direction. The route is simulated to occur over 1 hour (assuming the primary satellite is in a 24 hour circular orbit). In this case, θ is related to the location in the orbit where
Figure 1. a) Outline of the proposed method. b) Diagram showing component and articulation axis naming. c)
Example point cloud representation of the satellite.
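The trajectory-matrix construction described in the method overview can be sketched as follows. This is a minimal illustration assuming an ideal pinhole camera with known pose; it omits the occlusion, shadowing, and fade-in/fade-out logic used in this work, and the function names, the focal length `f`, and the input layout are illustrative only.

```python
import numpy as np

def project_points(points_w, cam_pos, R_wc, f=1.0):
    """Project world-frame points through an ideal pinhole camera.

    points_w : (P, 3) world-frame point positions
    cam_pos  : (3,) camera center in the world frame
    R_wc     : (3, 3) rotation taking world-frame vectors into the camera frame
    f        : assumed focal length (image coordinates are f*x/z, f*y/z)
    """
    p_cam = (R_wc @ (points_w - cam_pos).T).T   # points in the camera frame
    return f * p_cam[:, :2] / p_cam[:, 2:3]     # perspective divide

def build_trajectory_matrix(point_histories, cam_positions, cam_rotations, f=1.0):
    """Stack per-frame image coordinates into the 2F x P trajectory matrix."""
    rows = []
    for pts, c, R in zip(point_histories, cam_positions, cam_rotations):
        uv = project_points(pts, c, R, f)       # (P, 2) for this frame
        rows.append(uv[:, 0])                   # u row for frame i
        rows.append(uv[:, 1])                   # v row for frame i
    return np.vstack(rows)                      # (2F, P)
```

With F frames, rows 2i and 2i+1 hold the u and v coordinates of all P points in frame i, matching the 2F × P layout described above.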
Figure 2. Inspection route diagrams illustrating the illumination offset parameter (θ). a) Fly-by inspection route. b)
NMC inspection route.
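A 2 × 1 elliptical NMC of the kind diagrammed in Figure 2 can be sampled from the drift-free closed-form Clohessy-Wiltshire solution, in which the in-track semi-axis is twice the radial semi-axis. The sketch below is an assumption-laden illustration, not the simulation used in this work: the phase angle `theta` merely stands in for the illumination offset (the Sun direction is not modeled), and the default mean motion corresponds to an assumed 24 hour circular orbit.

```python
import numpy as np

def nmc_route(n_frames=100, rho=50.0, n=2 * np.pi / 86400.0, theta=0.0):
    """Sample camera positions along a 2x1 elliptical NMC in the CW frame.

    A drift-free Clohessy-Wiltshire solution gives the 2x1 ellipse:
        radial   x(t) = rho * sin(n*t + theta)
        in-track y(t) = 2 * rho * cos(n*t + theta)
    theta phases the route (here standing in for the illumination offset).
    """
    t = np.linspace(0.0, 2 * np.pi / n, n_frames)  # one full revolution
    x = rho * np.sin(n * t + theta)                # radial
    y = 2 * rho * np.cos(n * t + theta)            # in-track (twice radial)
    z = np.zeros_like(t)                           # in-plane NMC assumed
    return np.column_stack([x, y, z])
```

Substituting x(t) and y(t) into the CW equations confirms they are an exact periodic solution, so the inspector circumnavigates the primary without thrusting.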
the fly-by occurs. For an NMC the zero illumination offset condition (θ = 0) represents a route in which the phasing of the NMC is such that the inspector is always between the primary satellite and the Sun. In this case, θ represents sub-optimal phasing of the NMC with the primary satellite orbit.

Given the inspector route, the trajectory matrix is created by translating visible points from the point cloud to the camera image plane at each position in the inspection route where an image is taken (100 images, an arbitrarily selected number, are taken for each route simulated in this work). Points are visible if they are illuminated by the Sun vector, they are on a face that is pointed toward the camera, and they are not occluded by another part of the satellite. An error function is used to gradually fade points in and out of view as the angle between the Sun or camera and the normal vector translates through 90°. Visible points are translated to the camera image plane using a pinhole camera model. For simulation purposes, it is assumed that throughout the trajectory the camera is at a distance from the primary satellite that produces resolved imagery of appropriate resolution for feature point extraction and tracking. In all cases, the primary satellite is simulated to be in geostationary orbit at equinox. The effect of Earth shadowing is not simulated. Note that while the nominal satellite contains panels that resemble solar arrays, no effort was taken to ensure their motion is consistent with a solar array. Both sides of the array may be illuminated at different portions of the route.

Ray space optimization

The ray space optimization technique outlined in Yucer et al. [17] provides an excellent method of estimating 3D shape from 2D image coordinates when the camera motion is known. The method parametrizes the 3D location of each point (p) in a given frame (i) using the camera center (C_i), the direction vector from the camera center to the point (D_i^p), and the distance along the ray from the camera center to the point (\mu_i^p) using the following equation:

S_i^p = C_i + \mu_i^p D_i^p \quad (1)

where S_i^p is the 3D location of point p in frame i. There are many valid 3D paths that can result in the same 2D image coordinates; therefore, an added assumption is made that the points move smoothly from frame to frame. This leads to a cost function that is only a function of the depth \mu_i^p at each frame.

E_{rs}(\vec{\mu}^p) = \sum_{i=1}^{F-1} \omega_i^p \left\| (C_i + \mu_i^p D_i^p) - (C_{i+1} + \mu_{i+1}^p D_{i+1}^p) \right\|^2 \quad (2)

The term \omega_i^p is a weighting term based on the 2D distance of the point in frame i+1 from the epipolar line corresponding to the point location in frame i. This weighting method effectively keeps the 3D location of the point when it is moving near its location when it is static [17]. The distance from the epipolar line can also be used to determine if the point is stationary. With the assumption that the camera motion is known with respect to the main body of the satellite, points that are stationary are points on the main body (or components that are rigidly attached to the main body throughout the inspection). Points that have an average distance from the epipolar line below a threshold (γ_s) are considered stationary and are segmented to the main body [13].

Knowing that a point is stationary also allows the calculation of its position to be simplified to a triangulation problem that can be solved using linear least squares. For a stationary point S_1^p = S_2^p = ... = S_F^p; therefore equation (1) becomes S^p = C_i + \mu_i^p D_i^p. When written for each of the F frames there are 3F equations. Since S is no longer different for each frame, there are only F+3 unknowns. This set of equations (see equations (3)) can be written in the form Ax = b (equation (4)) and solved with linear least squares. Note that in this framework points are independent, and therefore the optimization is performed on each point separately using only the frames in which the point is visible.

S^p - \mu_1^p D_1^p = C_1
S^p - \mu_2^p D_2^p = C_2
\vdots
S^p - \mu_F^p D_F^p = C_F \quad (3)

\begin{bmatrix} I & -D_1^p & 0 & \cdots & 0 \\ I & 0 & -D_2^p & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ I & 0 & 0 & \cdots & -D_F^p \end{bmatrix} \begin{bmatrix} S^p \\ \mu_1^p \\ \mu_2^p \\ \vdots \\ \mu_F^p \end{bmatrix} = \begin{bmatrix} C_1 \\ C_2 \\ \vdots \\ C_F \end{bmatrix} \quad (4)

Note the appropriate selection of γ_s will depend on the noise in the inspection trajectory and feature point locations. In the presence of noise, a simple threshold may not be capable of segmenting points that are stationary, in which case this method should not be used to segment points on the main body.

Segment points

The nature of a circumnavigation inspection route yields a trajectory matrix with a significant amount of missing data. This makes motion segmentation challenging. Figure 3 shows an example trajectory matrix where empty elements are shown in black and elements containing data are shown in white. Spectral clustering is a popular method of segmenting data [25]. While there are multiple methods of spectral clustering, they all operate on some type of similarity matrix. For P points, a similarity matrix is P × P in dimension, with each element representing how similar the point corresponding to its column is to the point corresponding to its row. The type of similarity metric used varies widely. Some, such as LSA [26], use a metric based on the angles between the subspaces of nearest neighbors, while others use a metric based on the range and velocity between points [20]. Once the similarity metric is chosen, there are numerous ways of segmenting the data. For this work we investigated both spectral clustering using k-means [27] as well as recursive 2-way spectral clustering [28],[17]. Both methods involve solving for the eigenvectors of the Laplacian matrix. The Laplacian matrix is the diagonal matrix of the row sums of the similarity matrix minus the similarity matrix. The reader is directed to [29] for additional information on spectral clustering.

Particularly in the case of an NMC, there are many points on the same rigid component that do not share any common frames. Therefore it is difficult to segment them onto the same rigid component. When added to the fact that the number of rigid components is not assumed to be known a priori, over-segmentation is necessary to ensure each segment
consists of points primarily from the same component. The k-means clustering technique requires the number of segments (k) to be given, while the 2-way spectral clustering technique used by Yucer et al. [17] only requires an estimate for the number of segments. In testing for this application both methods produced similar results, with approximately 90% of points segmented correctly. For this work, a similarity matrix based on the range and 2D velocity [21] between image points was used with spectral clustering using k-means [27],[30]. The value of k was judiciously set to 10 for fly-by routes and 22 for NMC routes.

Figure 3. Example trajectory matrix mask for 1 complete NMC.

Rigid body optimization

Once the points are segmented, the next step is to find how all the rigid bodies are moving. Similar to Yucer et al. [17], we sought the translation, rotation, and shape that minimized the reprojection error and a smoothness constraint.

\min_{R,T,\Omega} \underbrace{\sum_{i=1}^{F} \left\| W_i^n - P_i (R_i^n \Omega^n + \tilde{T}_i^n) \right\|^2}_{\text{Reprojection Error}} + \underbrace{\lambda_{rb} \sum_{i=2}^{F} \left[ \operatorname{acos}\!\left(0.5\left(\operatorname{trace}\!\left(R_i^n (R_{i-1}^n)^T\right) - 1\right)\right) + \left\| T_i^n - T_{i-1}^n \right\| \right]}_{\text{Smoothness Constraint}} \quad (5)

This optimization is performed for each segment. W^n are the columns in the trajectory matrix for the n-th segment. P is the camera matrix that projects points in the world frame to the camera image plane. \Omega^n is the 3D location of the points in the body frame. R^n is the rotation matrix that rotates the body frame for segment n to the world frame. \tilde{T}^n is the translation from the origin of the world frame to the center of the points in \Omega^n, while T^n is the translation from the origin of the world frame to the origin of the body frame. They are related by:

\tilde{T}_i^n = T_i^n - R_i^n \bar{\Omega}^n \quad (6)

Using \tilde{T}^n in the reprojection error portion of the cost function helps to decouple the rotation from translation [13].

There are multiple rigid body shape and motion combinations that can minimize the reprojection error. Since the rigid bodies are most likely to follow smooth paths, the smoothness constraint is added to encourage solutions that follow smooth rotational and translational paths. λ_rb is used to weight the smoothness constraint. The appropriate value of λ_rb will be dependent on the scaling of the trajectory matrix. Setting it too low will not enforce smoothness and may lead to a jittery path, while setting it too high will minimize the rigid body motion at the expense of reprojection error. To allow the method to be robust to scaling, the value of λ_rb is chosen adaptively. Before optimization begins, and after every 100 iterations during optimization, the value of λ_rb is adjusted so that the smoothness cost is no more than 25% and no less than 5% of the reprojection error.

The rotation matrix (R^n) is parameterized using an Euler axis (\vec{a}) and an Euler angle (φ) and represents the rotation matrix from the segment's body frame (b) to the world frame (w). The reference frames used in this work are shown in Figure 4. The Euler axis is not constrained to be unit length; instead it is normalized before being used to calculate R^n.

To begin optimization, an initial guess must be supplied for all the optimization variables. In many cases, the ray space optimization results provide a good method of initializing the shape (\Omega^n) and the translation (T^n). However, when the camera rotation is less than the articulated motion some issues were discovered [13]. Since the ray space optimization routine attempts to find the shortest path that models the observed motion, when the camera motion is less than the articulated motion the solution tends toward a path in which the point follows a trajectory similar to the camera motion rather than the articulated motion. If this solution is used to initialize the rigid body optimization, the error propagates through the process, resulting in an inaccurate model.

To resolve this issue, another method was developed to initialize the shape and rotation parameters using the scaled orthographic projection model [31]. Using a scaled orthographic camera model, the trajectory matrix can be written as W = \alpha(R\Omega + T_{2D}), where \alpha is a scaling based on the distance from the camera to the satellite (\alpha = f/|r_{wc}|), R is the 2F × 3 motion matrix containing the first two rows of the rotation matrix between the camera frame and the body frame at each time step, \Omega is the 3 × P shape matrix in the body frame, and T_{2D} is the translation from the world frame origin to the body frame origin projected into the 2D image plane. Since the scale factor (\alpha) and the rotation matrix between the camera frame and the world frame are known from the inspection route, the scaled orthographic projection equation can be used to solve for the resulting shape from any body rotation and translation as follows.

\tilde{W} = R\Omega = \tfrac{1}{\alpha} W - \vec{T}_{2D} \quad (7)
\Omega = R^{\dagger} \tilde{W} \quad (8)
W_{calc} = \alpha (R\Omega + \vec{T}_{2D}) \quad (9)
E = \| W - W_{calc} \| \quad (10)

This set of equations is used to initialize the rotation parameters (\hat{a} and \vec{\phi}). The motion matrix R is determined for 5,000 seeds, each consisting of a randomly selected constant Euler axis \hat{a} and a linear set of Euler angles \vec{\phi} of a random slope. The translation (T_{2D}) is taken from the ray space optimization results. For each R the error (E) between W_{calc} and W is calculated. The seed with the lowest error is used to initialize \hat{a} and \vec{\phi}; the resulting \Omega is used to initialize the shape. The translation is initialized to be the center of the available 3D estimates at each frame (T_i^n = \bar{S}_i^n) of the ray space optimization results. This is a highly non-linear optimization problem with numerous local minima. While this method of initialization worked well for the cases run, alternatives, such as using a heuristic optimization solver for initialization or a global optimization algorithm, may produce superior results.

Outlier rejection is conducted every 100 iterations during rigid body optimization. Points that have a reprojection error higher than some multiplier (γ_f) times the average per-point reprojection error, and points that have an average range to all other points higher than γ_r standard deviations from the mean range between points, are rejected as outliers.
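The seed-search initialization described above can be sketched as follows. This is a toy version with illustrative names: Rodrigues' formula converts each axis-angle seed to a rotation matrix, and a pseudoinverse supplies the least-squares shape for equation (8); the trajectory matrix is assumed already scaled and translation-compensated, and the seed count is reduced only by argument.

```python
import numpy as np

def axis_angle_to_R(a, phi):
    """Rodrigues' formula; the axis is normalized before use."""
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(phi) * K + (1 - np.cos(phi)) * (K @ K)

def seed_search(W_tilde, n_seeds=5000, rng=None):
    """Pick the axis / angle-ramp seed with the lowest orthographic residual.

    W_tilde : (2F, P) scaled, translation-compensated trajectory matrix.
    Returns (axis, angles, shape) for the best of n_seeds random seeds.
    """
    rng = rng or np.random.default_rng(0)
    F = W_tilde.shape[0] // 2
    best = (np.inf, None, None, None)
    for _ in range(n_seeds):
        a = rng.standard_normal(3)                        # random constant axis
        phis = rng.uniform(-2, 2) * np.linspace(0, 1, F)  # linear ramp, random slope
        # motion matrix: first two rows of each frame's rotation
        R = np.vstack([axis_angle_to_R(a, p)[:2] for p in phis])  # (2F, 3)
        omega = np.linalg.pinv(R) @ W_tilde               # least-squares shape
        err = np.linalg.norm(W_tilde - R @ omega)
        if err < best[0]:
            best = (err, a, phis, omega)
    return best[1], best[2], best[3]
```

The winning axis and angle ramp would initialize the rotation parameters, and the corresponding shape matrix would initialize the segment geometry.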
Figure 4. Reference frames used in this work. The ‘world frame’ (w) corresponds to the CW frame and moves with
the primary satellite in its orbit. Each component has a ‘body frame’ (b) attached to it which is arbitrarily assigned.
The ‘camera frame’ (cam) is attached to the inspector camera with the positive z axis along the imaging axis and the
positive x axis aligned with the camera’s direction of travel in the world frame.
Combine segments

Since the data has been over-segmented by choosing a larger than expected number of segments, the next step is to combine (merge) any segments that are part of the same rigid body. To do this, four adjacency matrices that compare the location and rotation of each segment to each other segment are created. If two segments are in fact the same rigid body, the rotations should be the same; therefore the rotation between their body frames (R_i^{b_{n1},b_{n2}}) should be constant for all common image frames.

R_i^{b_{n1},b_{n2}} = (R_i^1)^T R_i^2 \quad (11)

The average variance of the elements of R_i^{b_{n1},b_{n2}} over all common frames (CF) is used to define the (n_1, n_2) position of an N × N adjacency matrix, where N is the number of segments.

For each segment, the closest two segments from each of the four adjacency matrices are selected for testing to determine if they should be merged. Rigid body optimization is performed on the combined segments as outlined in section 2. If the resulting function value is below 200% (γ_c) of the average function values of the segments independently, the two segments are merged and the adjacency matrices are recalculated. This is continued until all segments have been checked without triggering a merge [13].

Build kinematic chain

The next step is to determine the kinematic chain that best describes which components are linked to each other. This is done by evaluating the average range adjacency matrix mentioned in the previous section. The kinematic chain is the minimum spanning tree of a graph (G) with each component as a node and edges defined by the average distance between components in common frames (CF).
The first term enforces the single axis of rotation constraint matched to the most appropriate truth joint the articulation
while the second term encourages smooth motion by only parameters can be compared. However, the articulation
penalizing articulation that is not at a constant rate. Rip,c is parameters of the calculated model are in a body frame that
calculated from Riw,p and Riw,c . Mip,c is calculated from the is unrelated to the body frame used to create the truth model.
To alleviate this, the articulation axis and joints are compared
optimization variables (âp , âc , φi ) by first aligning the axes in the world frame (w) as follows (equations (22)), where N
and then rotating about it as outlined in equation set (17) [13]. is the number of joints, and F is the number of frames with
data. Since the joints are revolute, any location on the axis
As expressed in Yucer et al.[17], the joint location translated is acceptable, therefore for the joints the error is measured as
into the world frame will coincide for two linked parts. the distance from the joint to the line represented by the true
joint location and articulation axis in the world frame.
Rip J p + Tip = Ric J c + Tic , ∀ i ∈ CF
" #
N Fj
For components linked by a universal joint there is a single Eâ = 1
P 1
P
acos(|(âjtrue,w )T âjcalc,w |)
Jp and Jc that best satisfy this constraint. For a revolute N
j=1
Fj
i=1
joint, any point on the axis will satisfy this constraint. To " #
N Fj
mathematically favor the joint locations that are closest to 1 1
kâjtrue,w j j
P P
the component center, the norm of the joint locations is also EJ = N Fj
× (Jcalc,w − Jtrue,w )k (22)
j=1 i=1
minimized giving the following cost function [13].
CF The articulation angle is evaluated using the error in the range
X of angles covered, Eφrange , and the rate of change of the
min [k(Rip J p + Tip ) − (Ric J c + Tic )k2 angle Eφ̇ .
Jp ,Jc
i=1
+ λb (kJp k + kJc k)] (20) N
1
|∆jtrue − ∆jcalc |
P
Eφrange = N
(23)
j=1
As the kinematic chain grows, the complexity of the problem ~ j ) − min(φ ~j )
increases, as does the number of variables. The number of ∆j = max(φ (24)
" #
optimization variables in equation (14) is (4 × 3 × (N − 1)) + 1
PN
1
F
Pj i i
(CF ×(N −1))+(3×P ). Each articulation parameter (âp , âc , Eφ̇ = N Fj
|φ̇true − φ̇calc | (25)
j=1 i=2
φ, Jp , Jc ) for each joint has an influence on the components
and joints below it in the kinematic chain. This means the optimization variables are highly interdependent, resulting in a complex non-linear problem. However, the articulation parameters do not influence the components above the joint in the kinematic chain. Using this relationship, a method termed Incremental Joint Addition (IJA) was developed in [13]. IJA determines the articulation parameters in order, according to the kinematic chain, by optimizing at every step. Equation (14) is optimized N−1 times, each time adding another joint to the chain. This decreases the complexity since each time a set of parameters is introduced into the optimization, it only influences the reprojection error for one component. It also provides a method of further combining components that are likely to be the same rigid body. If the change in Euler angle for a joint is low, the two components can likely be combined into a single component. Results using the IJA method, as well as optimizing over all joint parameters together (AJP), are presented in previous work [13]. For the work in this paper, the IJA method is used exclusively.

Evaluate results

Multiple metrics are used to evaluate the results. Because the trajectory matrix is simulated from a known point cloud, the position of each point at each time step is known. This allows calculation of the normalized reconstruction error [21] (E_3D), which is a direct comparison of the calculated point cloud to the known point cloud in the world frame using equation (21), where S_i^truth are the true world-frame locations of all points calculated in image frame i and S_i^calc are the calculated world-frame point locations in image frame i.

E_3D = ( Σ_{i=1}^{F} ‖S_i^truth − S_i^calc‖ ) / ( Σ_{i=1}^{F} ‖S_i^truth‖ )   (21)

The calculated kinematic chain may contain more or fewer joints than the truth model. Once the calculated joints are […]

φ̇_i = φ_{i−1} − φ_i   (26)

In addition to evaluating the results directly, it may be interesting to understand how the satellite is capable of moving, or more generally, what volume around the satellite the appendages are capable of reaching. This can be termed the satellite's workspace. To calculate the satellite's workspace, the space around the satellite is discretized into a grid of m × m × m = M points. Each joint is moved through its range of articulation angles at Y increments. A convex hull is created encompassing each shape in the world frame in the current increment and the previous increment. The grid points in M that fall within the convex hull [32] are annotated as covered. The collection of points in M that are covered represents the workspace of the satellite. Both the calculated workspace and the true workspace are found in this manner. They are compared using three percentages. P_wc is the percentage of the truth workspace covered by the calculated model, P_woc is the percentage of additional space covered by the calculated model, and P_gc is the percentage of the grid points where the true workspace matches the calculated workspace. These values are calculated as follows, where B is an M × 1 vector with B_i = 1 when grid point i is covered and B_i = 0 when grid point i is not covered, and ⊙ denotes element-wise multiplication.

P_wc = Σ(B^calc ⊙ B^truth) / Σ B^truth   (27)

P_woc = Σ(B^calc ⊙ |B^truth − 1|) / Σ B^truth   (28)

P_gc = Σ(1 − |B^calc − B^truth|) / M   (29)

Note that the workspace calculation does not consider physical limits to motion imposed by other components. In other words, two components may occupy the same physical space during some combination of motions [13].
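As a concrete sketch of the workspace comparison, the coverage vectors and equations (27)–(29) can be computed in a few lines. This is an illustration only: the cube-shaped workspaces, the 5 × 5 × 5 grid, and the use of SciPy's Delaunay triangulation for the point-in-hull test (in place of the MATLAB inhull routine [32]) are all assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def grid_coverage(hull_vertices, grid_points):
    """Mark grid points covered by the convex hull of the given vertices.

    A point is covered when it lies inside some simplex of the Delaunay
    triangulation of the hull vertices (stand-in for inhull [32]).
    """
    tri = Delaunay(hull_vertices)
    return (tri.find_simplex(grid_points) >= 0).astype(float)

def workspace_percentages(B_calc, B_truth):
    """Equations (27)-(29): compare calculated and true workspace coverage.

    B_calc, B_truth are M x 1 indicator vectors (1 = grid point covered).
    """
    M = B_truth.size
    P_wc = np.sum(B_calc * B_truth) / np.sum(B_truth)                # (27)
    P_woc = np.sum(B_calc * np.abs(B_truth - 1)) / np.sum(B_truth)   # (28)
    P_gc = np.sum(1 - np.abs(B_calc - B_truth)) / M                  # (29)
    return P_wc, P_woc, P_gc

# Discretize the space around the satellite into a 5 x 5 x 5 grid (M = 125).
axis = np.linspace(-1.0, 1.0, 5)
grid = np.array(np.meshgrid(axis, axis, axis)).reshape(3, -1).T

# Hypothetical workspaces: the true one is a cube, the calculated one a
# slightly shifted copy of it.
lo, hi = -0.1, 1.1
cube = np.array([[x, y, z] for x in (lo, hi) for y in (lo, hi) for z in (lo, hi)])
B_truth = grid_coverage(cube, grid)
B_calc = grid_coverage(cube - 0.25, grid)

P_wc, P_woc, P_gc = workspace_percentages(B_calc, B_truth)
```

Because the calculated cube here lies entirely inside the true one, P_woc is zero while P_wc reports the fraction of the true workspace that was recovered.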
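For the reconstruction-error metric, equation (21) can be sketched directly. The per-frame point sets below are toy values, and the Frobenius norm is an assumption, since the text does not name the matrix norm used for ‖·‖.

```python
import numpy as np

def normalized_reconstruction_error(S_truth, S_calc):
    """Equation (21): normalized reconstruction error E_3D.

    S_truth and S_calc are length-F lists of 3 x P arrays holding the true
    and calculated world-frame point locations for each image frame.
    The Frobenius norm is assumed for the per-frame matrix norm.
    """
    num = sum(np.linalg.norm(St - Sc) for St, Sc in zip(S_truth, S_calc))
    den = sum(np.linalg.norm(St) for St in S_truth)
    return num / den

# Two frames of four points each; the calculated cloud is offset by 0.1
# along x in every frame (hypothetical data for illustration).
truth = [np.eye(3, 4), np.eye(3, 4) * 2.0]
calc = [S + np.array([[0.1], [0.0], [0.0]]) for S in truth]
E_3D = normalized_reconstruction_error(truth, calc)
```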
Figure 5. Example results. a) Truth (blue ·) and calculated (red ∗) 3D position of points for an example frame. b)
Calculated points, kinematic chain, joint locations, and axes for an example frame. Points in each component are a
different color. c) Grid positions in both true and calculated workspace (blue ·), Grid positions in only true workspace
(red ·), Grid positions in only calculated workspace (green ·).
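The Incremental Joint Addition procedure from the optimization section can be outlined in code. The sketch below is purely schematic: the callables and component names are hypothetical stand-ins for optimizing equation (14) over the newly added joint's parameters and for the Euler-angle-rate merge test against γ_a.

```python
def incremental_joint_addition(components, optimize_joint, angle_rate, merge, gamma_a):
    """Schematic IJA loop: joints are added in kinematic-chain order.

    components:     list of N components ordered along the kinematic chain.
    optimize_joint: stand-in for minimizing eq. (14) over the new joint only;
                    earlier joints are already fixed, so each step touches
                    one component's reprojection error.
    angle_rate:     mean articulation angle rate of the optimized joint.
    merge:          combines a nearly rigid child with its parent component.
    """
    joint_params = []
    for i in range(1, len(components)):        # N-1 optimizations of eq. (14)
        p = optimize_joint(components[i - 1], components[i])
        joint_params.append(p)
        if angle_rate(p) < gamma_a:            # joint barely articulates:
            merge(components[i - 1], components[i])  # treat as one rigid body
    return joint_params

# Hypothetical three-component chain: bus -> panel -> antenna, where the
# antenna joint turns out to be essentially static.
merged = []
rates = {"panel": 0.1, "antenna": 0.001}  # mean angle rate per child joint
params = incremental_joint_addition(
    ["bus", "panel", "antenna"],
    optimize_joint=lambda parent, child: {"child": child, "rate": rates[child]},
    angle_rate=lambda p: p["rate"],
    merge=lambda parent, child: merged.append((parent, child)),
    gamma_a=0.007,
)
```

With these stand-ins, two joints are optimized and the static antenna joint is merged into its parent, mirroring the component-combination step described above.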
In this work, the effect of fly-by routes and NMCs with sub-
optimal illumination phasing were investigated. As expected,
for both fly-by and the NMC, the results degrade as the illu-
mination offset angle increases, with significant degradation
occurring beyond approximately ±75°. Results show that the
additional point visibility gained by routes with over 50%
NMC completion does not contribute to improved quality of
the model. This suggests that NMC inspection routes could
be cut off after 50% completion, or if the route is continued,
data from the first 50% of the NMC should be used to create
the model and any remaining data should be added with
some type of model refinement method. Further work will
investigate model refinement methods.
Figure 9. Results from illumination offset angle and percentage of completed NMC testing. Contour plot represents a 4th-order polynomial fit of both experimentation variables to the equally weighted combination of all evaluation metrics. Dots represent each trial, colored by the value of the combined metric at that point.

ACKNOWLEDGMENTS

The authors would like to acknowledge Dr. Frank Chavez and others at the Air Force Research Laboratory Space Vehicles directorate for their guidance in this research.
Table 1. Parameters. Method of determination: NA = Noise analysis; PPK = Prior problem knowledge; CFR = Cost function ratios; PD = Point density; RA = Risk assessment.

Parameter | Name/description | Value | Method of determination
γ_s | Stationary threshold: determines when a point is considered stationary | 10⁻⁶ | NA
k | Number of segments: determines the number of segments for spectral clustering. (sec. 2) | 10 or 22 | PPK
γ_f | Reprojection error fair share multiplier for outlier rejection. (sec. 2) | 8 | PD
γ_r | Range rejection multiplier for outlier rejection. (sec. 2) | 4 | PD, PPK
γ_c | Segment merge criteria: allowable increase in function value for acceptable merge. (sec. 2) | 2 | RA
γ_BB | Joint bounding box expansion: determines the size of the bounding box around the component shape. (sec. 2) | 0.5 | PPK
η | Coefficient in joint penalty. (eqn. (14)) | 100 | CFR
λ_a | Rotation parameter initialization smoothness weight. (eqn. (19)) | 5 | CFR
λ_b | Joint location initialization closeness weight. (eqn. (20)) | 5 | CFR
γ_a | Minimum articulation angle mean rate. (IJA algorithm [13]) | 0.007 rad per frame | RA
γ_m | Segment merge criteria: allowable increase in function value for acceptable merge. (IJA algorithm [13]) | 5 | RA

REFERENCES

[1] R. T. Howard, A. F. Heaton, R. M. Pinson, and C. K. Carrington, "Orbital Express Advanced Video Guidance Sensor," in IEEE Aerospace Conference Proceedings, 2008, pp. 1–10.
[2] B. E. Tweddle and D. W. Miller, "Computer Vision-Based Localization and Mapping of an Unknown, Uncooperative and Spinning Target for Spacecraft Proximity Operations," Dissertation, Massachusetts Institute of Technology, 2013.
[3] C.-C. J. Ho and N. H. McClamroch, "Automatic spacecraft docking using computer vision-based guidance and control techniques," Journal of Guidance, Control, and Dynamics, vol. 16, no. 2, pp. 281–288, 1993.
[4] B. E. Tweddle and A. Saenz-Otero, "Relative Computer Vision-Based Navigation for Small Inspection Spacecraft," Journal of Guidance, Control, and Dynamics, vol. 38, no. 5, pp. 969–978, 2015.
[5] B. J. Naasz, J. Van Eepoel, S. Z. Queen, C. M. Southward, and J. Hannah, "Flight results from the HST SM4 Relative Navigation Sensor system," in 33rd Annual AAS Guidance and Control Conference, Breckenridge, Colorado, 2010.
[6] S. J. Hannah, "A relative navigation application of ULTOR technology for automated rendezvous and docking," in Proceedings of SPIE, Spaceborne Sensors III, R. T. Howard and R. D. Richards, Eds., vol. 6220, 2006, pp. 1–12.
[7] G. Fasano, M. Grassi, and D. Accardo, "A Stereo-Vision Based System for Autonomous Navigation of an In-Orbit Servicing Platform," in AIAA Infotech@Aerospace, Seattle, Washington, 2009, pp. 1–10.
[8] N. K. Philip and M. R. Ananthasayanam, "Relative position and attitude estimation and control schemes for the final phase of an autonomous docking mission of spacecraft," Acta Astronautica, vol. 52, pp. 511–522, 2003.
[9] S. J. Kelly, "A Monocular SLAM Method to Estimate Relative Pose During Satellite Proximity Operations," Master's Thesis, Air Force Institute of Technology, 2015.
[10] F. Yu, Z. He, B. Qiao, and X. Yu, "Stereo-vision-based relative pose estimation for the rendezvous and docking of noncooperative satellites," Mathematical Problems in Engineering, vol. 2014, p. 12, 2014.
[11] V. Ghadiok, J. Goldin, and D. Geller, "Gyro-Aided Vision-Based Relative Pose Estimation for Autonomous Rendezvous and Docking," Advances in the Astronautical Sciences, vol. 149, pp. 713–728, 2013.
[12] D. H. Curtis and R. G. Cobb, "Satellite Articulation Sensing using Computer Vision," in Proceedings of AIAA SciTech 2017, 2017.
[13] ——, "Satellite articulation characterization from an image trajectory matrix using optimization," in Advanced Maui Optical Space Surveillance Technologies Conference, 2017.
[14] J. Yan and M. Pollefeys, "A factorization-based approach to articulated motion recovery," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 2005, pp. 815–821.
[15] P. Tresadern and I. Reid, "Articulated structure from motion by factorization," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 2005, pp. 1110–1115.
[16] C. Tomasi and T. Kanade, "Shape and motion from image streams under orthography: a factorization method," International Journal of Computer Vision, vol. 9, no. 2, pp. 137–154, 1992.
[17] K. Yucer, O. Wang, A. Sorkine-Hornung, and O. Sorkine-Hornung, "Reconstruction of Articulated Objects from a Moving Camera," in IEEE International Conference on Computer Vision Workshops, 2015, pp. 28–36.
[18] M. Paladini, A. Del Bue, J. Xavier, L. Agapito, M. Stošić, and M. Dodig, "Optimal metric projections for deformable and articulated structure-from-motion," International Journal of Computer Vision, vol. 96, no. 2, pp. 252–276, 2012.
[19] M. Marques and J. Costeira, "Optimal shape from motion estimation with missing and degenerate data," in 2008 IEEE Workshop on Motion and Video Computing, WMVC, 2008, pp. 2–7.
[20] P. B. Zhang and Y. S. Hung, "Articulated Structure from Motion through Ellipsoid Fitting," in International Conference of Image Processing, Computer Vision and Pattern Recognition, 2015, pp. 179–186.
[21] J. Fayad, C. Russell, and L. Agapito, "Automated articulated structure and 3D shape recovery from point correspondences," in Proceedings of the IEEE International Conference on Computer Vision, 2011, pp. 431–438.
[22] C. Russell, R. Yu, and L. Agapito, "Video Pop-up: Monocular 3D Reconstruction of Dynamic Scenes," in European Conference on Computer Vision. Springer International Publishing, 2014, pp. 583–598.
[23] S. Pillai, M. R. Walter, and S. Teller, "Learning Articulated Motions From Visual Demonstration," in Robotics: Science and Systems, 2014.
[24] C. Sabol, R. Burns, and C. A. McLaughlin, "Satellite formation flying design and evolution," Journal of Spacecraft and Rockets, vol. 38, no. 2, pp. 270–278, 2001.
[25] R. Vidal, "Subspace Clustering," IEEE Signal Processing Magazine, pp. 52–68, Mar. 2011.
[26] J. Yan and M. Pollefeys, "A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate," in European Conference on Computer Vision. Springer Berlin Heidelberg, 2006, pp. 94–106.
[27] A. Y. Ng, M. I. Jordan, and Y. Weiss, "On Spectral Clustering: Analysis and an Algorithm," Advances in Neural Information Processing Systems, pp. 849–856, 2002.
[28] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
[29] U. von Luxburg, "A Tutorial on Spectral Clustering," Statistics and Computing, vol. 17, pp. 395–416, 2006.
[30] I. Buerk, "SpectralClustering," 2012. [Online]. Available: https://www.mathworks.com/matlabcentral/fileexchange/34412-fast-and-efficient-spectral-clustering
[31] J. L. Mundy and A. Zisserman, Geometric Invariance in Computer Vision, J. Mundy and A. Zisserman, Eds. The MIT Press, 1992.
[32] J. D'Errico, "Inhull," 2006. [Online]. Available: https://www.mathworks.com/matlabcentral/fileexchange/10226-inhull

BIOGRAPHY

David Curtis received his B.S. degree in Mechanical Engineering from Clarkson University in 2005 and his M.S. degree in Aeronautical Engineering from the Air Force Institute of Technology in 2009. He is currently a Ph.D. candidate at the Air Force Institute of Technology. His research interests are in the fields of computer vision and optimal control, particularly for space applications.

Richard Cobb received his B.S. from the Pennsylvania State University in 1988, his M.S. from the Air Force Institute of Technology in 1992, and a Ph.D. from the Air Force Institute of Technology in 1996. He is currently a professor of Aerospace Engineering at the Air Force Institute of Technology. His research focuses on dynamics and control of space structures for space-based remote sensing, and optimization and control for aerospace applications.