
1. Introduction to Q-400 Digital Image Correlation (DIC)

The Q-400 Digital 3D Image Correlation System (DIC) is an optical measuring system for true, full-field, non-contact, three-dimensional measurement of shape, displacements and strains on components and structures.

The object under investigation must carry a stochastic pattern on its surface; if such a pattern does not occur naturally, it must be applied. This pattern is observed by two cameras, which record its deformations and distortions as the object deforms or moves. The images are automatically analyzed with special high-accuracy correlation algorithms. The result is a data set containing the contour of the object at the beginning of the measurement and the three-dimensional displacement vector of each object point due to the object deformation. Furthermore, the surface strain components are derived at every surface point.

The system is calibrated easily and quickly and measures the displacement of each surface point with sub-pixel accuracy. Consequently, strains as small as 0.01 % can be resolved. At the other extreme, very large deformations with strains of several hundred percent can also be analyzed. Multiple graphic displays and analysis tools allow comfortable data analysis and reporting. Various data export functions provide interfaces for further data processing in external programs.

1.1. Principle of Digital Image Correlation


Two imaging sensors observing an object from different angles provide, similar to human vision, enough information to perceive the object as three-dimensional. In a stereoscopic camera setup, each object point is imaged onto a specific pixel in the image plane of the respective camera (figure 1).

With the imaging parameters of each camera (intrinsic parameters: focal length, principal point and distortion parameters) and the orientation of the cameras with respect to each other (extrinsic parameters: rotation matrix and translation vector), the position of each object point in three dimensions can be calculated.

Figure 1 Principle of stereoscopic setup

Using a stochastic intensity pattern on the object surface, the position of each object point in the two images can be identified by applying a correlation algorithm.
1.2. Correlation Algorithm
The correlation algorithm is based on tracking the grey value pattern G1(x, y) in small local neighbourhood facets.

Figure 2 Grey value pattern and facet G1
Figure 3 Transformed grey value pattern and facet G2

As a result of loading/movement, the facet coordinates (x, y) are transformed into (xt, yt) (figure 3) with an assumed pseudo-affine mapping:

xt(a0, a1, a2, a3, x, y) = a0 + a1·x + a2·y + a3·x·y
yt(a4, a5, a6, a7, x, y) = a4 + a5·x + a6·y + a7·x·y

The possible transformations consist of translations, stretch, shear and distortion.
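As a plain-code illustration (not Q-400 source code), the eight-parameter mapping above can be written directly; the example parameter sets for an identity mapping and a pure translation are chosen here for demonstration:

```python
def pseudo_affine(params, x, y):
    """Map facet coordinates (x, y) to (xt, yt) with the 8-parameter
    pseudo-affine transformation:
        xt = a0 + a1*x + a2*y + a3*x*y
        yt = a4 + a5*x + a6*y + a7*x*y
    """
    a0, a1, a2, a3, a4, a5, a6, a7 = params
    xt = a0 + a1 * x + a2 * y + a3 * x * y
    yt = a4 + a5 * x + a6 * y + a7 * x * y
    return xt, yt

# Identity mapping: a1 = a6 = 1, all other parameters 0.
identity = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
xt, yt = pseudo_affine(identity, 2.0, 3.0)   # returns (2.0, 3.0)

# A pure translation by (0.5, -0.25) only changes a0 and a4.
shift = [0.5, 1.0, 0.0, 0.0, -0.25, 0.0, 1.0, 0.0]
```

The bilinear terms a3·x·y and a7·x·y are what distinguish this pseudo-affine mapping from a plain affine one; they allow the facet to distort, not only translate, rotate and shear.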

Within the correlation algorithm, the transformation parameters are determined by minimizing the distance between the observed grey value pattern G2(x, y) in the second image and the original pattern G1(x, y), applying the coordinate transformation (xt, yt) plus a photogrammetric correction that accounts for different contrast and intensity levels of the images:

GT(x, y) = g0 + g1·G1(xt(x, y), yt(x, y))

min over a0, …, a7, g0, g1 of  Σx,y |G2(x, y) − GT(x, y)|

Using this correlation algorithm, a matching accuracy of better than 0.01 pixel can be achieved.

Figure 4 Parameters of the pseudo-affine transformation
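The minimization can be sketched in a deliberately simplified form. The snippet below is a pure-numpy illustration, not the Q-400 algorithm: it restricts the mapping to an integer translation (only a0 and a4 vary), omits the photogrammetric correction and sub-pixel refinement, and recovers a known shift of a synthetic speckle pattern by brute-force minimization of the grey-value difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stochastic ("speckle") pattern and a 15x15 facet from it.
pattern = rng.random((64, 64))
fy, fx = 20, 24                        # facet top-left corner in image 1
facet = pattern[fy:fy + 15, fx:fx + 15]

# Image 2: the same pattern shifted by a known integer translation.
true_dy, true_dx = 3, -2
image2 = np.roll(pattern, (true_dy, true_dx), axis=(0, 1))

# Minimize the sum of squared grey-value differences over candidate shifts.
best, best_err = None, np.inf
for dy in range(-5, 6):
    for dx in range(-5, 6):
        candidate = image2[fy + dy:fy + dy + 15, fx + dx:fx + dx + 15]
        err = np.sum((candidate - facet) ** 2)
        if err < best_err:
            best, best_err = (dy, dx), err

print(best)   # prints (3, -2)
```

The real algorithm optimizes all ten parameters continuously (hence the sub-pixel, better-than-0.01-pixel accuracy), but the objective being minimized is the same grey-value mismatch shown here.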
1.3. Calibration
The quality of the measurement relies on exact knowledge of the intrinsic and extrinsic parameters of the system. The imaging model is a pin-hole model (figure 5). The projection of an object point onto the CCD is defined by the intersection of the line from the object point through the projection centre with the CCD. The distance of the projection centre to the image plane is the focal length f; its projection onto the image plane is the principal point, the position of the optical axis on the CCD.

Figure 5 Pin-hole camera model

In addition to these pin-hole model parameters, distortion is also taken into account; radial as well as tangential distortion parameters are calculated.
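A minimal numerical sketch of the ideal pin-hole projection (distortion omitted); the focal length and principal point values are made-up examples, not Q-400 calibration data:

```python
# Ideal pin-hole projection of a point given in camera coordinates.
# Assumed example values: focal length f = 8 mm, principal point (cx, cy).
f = 8.0                 # focal length in mm (assumption)
cx, cy = 0.32, 0.24     # principal point on the sensor, in mm (assumption)

def project(point):
    """Intersect the ray from the object point through the projection
    centre with the image plane at distance f."""
    X, Y, Z = point
    u = f * X / Z + cx
    v = f * Y / Z + cy
    return u, v

u, v = project((50.0, 25.0, 1000.0))   # a point 1 m in front of the camera
```

The radial and tangential distortion terms mentioned above would be applied to (u, v) before comparison with the measured pixel positions.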
The calibration is easily done by taking images of a calibration panel from different perspective views.

The calibration panel used in this system shows a chessboard pattern whose corners serve as the calibration targets. Additional circular markers define the position of the X- and Y-axis.

The calibration process is integrated in the measurement software. During calibration, the markers are detected automatically and displayed online on the monitor. The color of each marker represents the quality of its detection (figure 6).

Figure 6 Calibration target showing acceptance markers
If both cameras capture images of the calibration panel at the same time, the calibration parameters of both cameras are calculated simultaneously.

A bundle-adjustment algorithm calculates the intrinsic parameters (focal length, principal point, distortion parameters) for each camera, as well as the extrinsic parameters (translation vector and rotation matrix). If more than four images of the calibration panel are captured, the calculations are performed online and the current parameters as well as the quality of the calibration are displayed (figure 7). Typically, eight images are sufficient to calculate all calibration parameters accurately.

Figure 7 Calibration parameters
The online procedure and the direct user feedback allow an easy, reliable and fast calibration of
the system.

1.4. Acquisition
Depending on the type of camera used for the acquisition, several camera parameters can be adjusted to control exposure time, frame rate, brightness, contrast and area of interest. Typically, a series of a few up to several hundred images is acquired and saved during the experiment. The acquisition can be started manually or fully automatically using various types of triggering. In general, images are continuously acquired into a ring buffer; after a trigger signal is given, depending on the pre-/post-trigger settings, the acquisition either stops immediately or continues until the required images are captured. These images, combined with the calibration parameters, are the basis for the evaluation.
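The ring-buffer behaviour with a pre-/post-trigger split can be sketched as follows; the buffer sizes and the frame callback are illustrative assumptions, not the actual acquisition API:

```python
from collections import deque

PRE, POST = 4, 3                       # frames kept before/after the trigger

ring = deque(maxlen=PRE)               # ring buffer: oldest frame overwritten
recorded = []

def on_frame(frame, triggered):
    """Called for every incoming camera frame."""
    if not triggered:
        ring.append(frame)             # keep circulating pre-trigger frames
    elif len(recorded) < PRE + POST:
        if not recorded:
            recorded.extend(ring)      # freeze the pre-trigger history
        recorded.append(frame)         # then collect post-trigger frames

# Simulate 10 frames with the trigger firing at frame 6.
for i in range(10):
    on_frame(i, triggered=(i >= 6))

# recorded now holds [2, 3, 4, 5, 6, 7, 8]: four frames from before the
# trigger plus the trigger frame and two after it.
```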

1.5. Evaluation
From the series of acquired images, the steps used for the correlation can be selected. One step is defined as the reference from which the correlation starts. Additional steps can be used to refresh the correlation, in order to follow even large distortions of the grey value pattern.
The evaluation consists of the following steps:

1.5.1. Definition of Fields of Evaluation and Search of the Starting Point


Starting from a reference image, the fields of evaluation are created; multiple independent areas of evaluation are possible. In this step, the positions of the facets for the correlation are defined. Each evaluated area is marked with a starting point. In a first step, the system identifies the position of these starting points in all images. This process runs automatically, but the user can also interact manually. If all starting points are found correctly, the full-field correlation will work without problems.

1.5.2. Full Field Correlation in Order to Calculate the Contour and the Displacement
By applying the correlation algorithm, the position of every object point can be identified in the images from both cameras. Using the intrinsic and extrinsic parameters of the system, the 3-dimensional coordinates of each object point can be calculated, and therefore the 3-dimensional contour of the object is determined. By following the changes of the grey value pattern for each camera along the series of images from the loading steps, the displacement of the object is calculated (figure 8). The matching accuracy of the correlation algorithm is typically 0.01 pixel, so the achievable displacement resolution is down to 1/100 000 of the field of view. For an A4 paper size this gives a displacement resolution of down to 3-4 µm.

Figure 8 Change of the grey value pattern due to deformation
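The step from matched image positions to 3-dimensional coordinates can be sketched with a standard linear (DLT) triangulation; the projection matrices below are illustrative stand-ins, not actual Q-400 calibration output:

```python
import numpy as np

# Two calibrated cameras as 3x4 projection matrices (illustrative values).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # camera 1 at the origin
R = np.eye(3)                                   # relative rotation (assumed)
t = np.array([[-100.0], [0.0], [0.0]])          # 100 mm baseline (assumed)
P2 = np.hstack([R, t])

X = np.array([30.0, -20.0, 500.0, 1.0])         # ground-truth point (homog.)

def project(P, X):
    x = P @ X
    return x[:2] / x[2]

u1, u2 = project(P1, X), project(P2, X)         # matched image positions

# Each camera contributes two linear equations in the homogeneous point;
# the least-squares solution is the SVD null vector of the stacked system.
A = np.vstack([
    u1[0] * P1[2] - P1[0],
    u1[1] * P1[2] - P1[1],
    u2[0] * P2[2] - P2[0],
    u2[1] * P2[2] - P2[1],
])
_, _, Vt = np.linalg.svd(A)
X_est = Vt[-1]
X_est = X_est[:3] / X_est[3]                    # back to Cartesian: [30, -20, 500]
```

With exact, noise-free correspondences the triangulated point matches the ground truth; with real sub-pixel matches the residual of this system reflects the 1/100 000-of-field-of-view resolution quoted above.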

1.5.3. Calculation of Strain from the Contour and Displacement Data


For every object point in the field of evaluation, the 3-dimensional displacement is known. Taking the contour of the object into account, the displacement in the tangential direction can be determined. By taking the gradient of the tangential displacement in two orthogonal directions, the strains in these directions as well as the shear strain can be calculated. From these, the principal strains and their directions are calculated.
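A simplified numerical sketch of this strain calculation, assuming a flat surface so that the tangential directions coincide with x and y; the synthetic displacement field and the 1 mm grid spacing are assumptions for illustration:

```python
import numpy as np

h = 1.0                                    # grid spacing in mm (assumption)
y, x = np.mgrid[0:50, 0:50].astype(float)

# Synthetic tangential displacement field: 0.2 % stretch in x, 0.05 % in y.
ux = 0.002 * x
uy = 0.0005 * y

# Gradients of the displacement components (np.gradient returns the
# derivative along axis 0 = y first, then axis 1 = x).
dux_dy, dux_dx = np.gradient(ux, h)
duy_dy, duy_dx = np.gradient(uy, h)

eps_xx = dux_dx                            # normal strain in x
eps_yy = duy_dy                            # normal strain in y
eps_xy = 0.5 * (dux_dy + duy_dx)           # shear strain

# Principal strains at one point: eigenvalues of the 2x2 strain tensor.
E = np.array([[eps_xx[25, 25], eps_xy[25, 25]],
              [eps_xy[25, 25], eps_yy[25, 25]]])
principal = np.sort(np.linalg.eigvalsh(E))  # [0.0005, 0.002]
```

On a curved surface the system additionally uses the measured contour to evaluate these gradients in the local tangent plane of each point rather than in a global x-y plane.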
