

In the last decade, the use of multimedia devices such as personal digital assistants and mobile phones has increased dramatically. These devices, typically small and thin, usually have video acquisition capability. Many applications use a camera mounted on a hand-held device or a mobile platform, and the resulting video sequences are affected by unwanted shakes and jitters. Unstable video may result from the shaking of the user's hand while capturing the scene. The same problem arises with cameras placed on moving supports such as cars and airplanes, or with fixed cameras operating outdoors, where atmospheric conditions such as wind and the vibrations produced by passing vehicles make the recorded video unstable. In these situations, producing a stable video is a challenging task. Video stabilization technology avoids loss of visual quality by removing the unwanted shakes and jitters of the capturing device without influencing moving objects or intentional camera motion. A stabilized video is defined as a motionless video in which the camera motion is completely removed. Video stabilization techniques result in stable video footage of high visual quality.

1.1 Digital Video Stabilization
Digital video stabilization systems operate on the captured image data and try to smooth out and compensate the undesired motion. Each frame of the video sequence is processed in order to remove the unwanted motion. Digital video stabilization is done in three steps.

1.1.1 Motion Estimation
This step derives the parameters of the transform that occurred between subsequent frames. The displacement of one frame to the next is defined by a horizontal translation, a vertical translation, and a rotation component. The task of this step is to find these three global motion parameters to which a new incoming frame must be subjected to fit as closely as possible to the previous frame.
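As a rough illustration of the motion-estimation step, the sketch below estimates a purely translational global motion between two frames using phase correlation; it ignores the rotation component, and all names are illustrative rather than from any of the surveyed papers:

```python
import numpy as np

def estimate_translation(prev, curr):
    """Estimate the global (dy, dx) shift that aligns `curr` back onto `prev`,
    via phase correlation: normalized cross-power spectrum of the two frames."""
    cross = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts larger than half the frame wrap around to negative values.
    if dy > prev.shape[0] // 2:
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return int(dy), int(dx)
```

Applying `np.roll(curr, (dy, dx), axis=(0, 1))` with the returned shift re-registers the current frame onto the previous one, assuming a clean circular translation.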

1.1.2 Motion Filtering
The motion estimated in the previous step may be due to the motion of an object in the scene or to unwanted camera movement. This step of digital video stabilization discriminates intentional motion from unwanted motion.

1.1.3 Image Warping
The stabilized image is reconstructed through proper image warping: a geometric transformation that eliminates the unwanted camera motion is applied to the current frame, so that the frame is stabilized with respect to its reference frame. The missing data at the borders are filled in using mosaicing or trimming.
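A minimal sketch of the filtering and warping stages might look as follows; a moving average stands in for whatever low-pass filter a given system uses, and circular shifting stands in for proper warping, so the names and parameters are illustrative assumptions:

```python
import numpy as np

def stabilizing_corrections(frame_shifts, window=5):
    """Integrate per-frame (dy, dx) shifts into an absolute camera trajectory,
    low-pass it with a moving average to approximate the intentional motion,
    and return the per-frame correction that removes the jitter component."""
    trajectory = np.cumsum(frame_shifts, axis=0)     # absolute camera path
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, trajectory)
    return smoothed - trajectory                     # correction per frame

def compensate(frame, correction):
    """Warp (here: simply shift) a frame by the computed correction; the
    exposed borders would then be filled by mosaicing or trimming."""
    dy, dx = np.round(correction).astype(int)
    return np.roll(frame, (dy, dx), axis=(0, 1))
```

A perfectly steady camera (all-zero shifts) yields all-zero corrections, while jittery shifts produce corrections that pull each frame back toward the smoothed path.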

2.1 Digital video stabilization through curve warping techniques
An algorithm for video stabilization using a curve warping technique was proposed by A. Bosco et al. The algorithm uses Dynamic Time Warping (DTW), a technique for measuring the similarity between two curves.
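The core DTW computation can be sketched as follows; this is the generic textbook recurrence, not the authors' optimized version:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping between two 1-D curves: accumulate the cheapest
    alignment cost over the classic insertion/deletion/match recurrence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Backtracking through the accumulated-cost matrix `D` yields the optimal warping path; in the stabilization context, the dominant offset along that path indicates the displacement between two frame signatures.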

Figure 2.1 Digital video stabilization algorithm using curve warping technique

In this approach, a frame signature in both the horizontal and the vertical direction is extracted from each frame. The frame signatures of consecutive frames are analysed to obtain the global motion vector between the two frames with the help of an optimal warping path. The global motion vectors of successive frames are integrated using motion vector integration, and the resulting motion vector is low-pass filtered to obtain the unintentional motion; the digital stabilizer block then stabilizes the current frame according to the new absolute global motion vector. The drawback of this method is the high computational cost of finding the motion vector from the frame signatures of successive frames, which makes it unsuitable for embedded applications.

2.2 Video stabilization using principal component analysis and scale invariant feature transform in particle filter framework
A feature-based approach to motion estimation was proposed by Yao et al. using the Scale Invariant Feature Transform (SIFT). In this approach, the interest or key points are extracted using SIFT. The dimensionality of the feature space is first reduced by the principal component analysis (PCA) method applied to the features obtained from SIFT; the resultant features are hence termed PCA-SIFT features.
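The PCA step can be illustrated with a small sketch: given a matrix of 128-D SIFT-like descriptors, project them onto the top principal components. The actual PCA-SIFT method learns its projection from gradient patches, so this conveys only the general dimensionality-reduction idea:

```python
import numpy as np

def pca_project(descriptors, k=20):
    """Centre the descriptor matrix and keep the k directions of largest
    variance, obtained from the SVD of the centred data."""
    X = descriptors - descriptors.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T          # shape: (n_descriptors, k)
```

The projected features are much shorter than the original 128-D descriptors, which speeds up the matching stage at little cost in distinctiveness.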

Figure 2.2 Digital video stabilization algorithm based on PCA-SIFT and particle filtering

The initial motion between frames is obtained using the RANdom SAmple Consensus (RANSAC) method, and an adaptive particle filter is used to find the global motion parameters. The unintentional motion is modeled as a non-linear system. The particle filter (PF) provides high efficiency and flexibility in solving non-linear and non-Gaussian problems by using the concept of importance sampling, wherein the intractable integrals in the optimal Bayesian solution for estimating the current state from past observations are replaced by discrete sums of weighted samples drawn from the posterior distribution. The particle filter rectifies the motion vectors obtained by the RANSAC method. A SIFT Block Mean Square Error (SIFT-BMSE) cost function is proposed to disregard foreground object pixels and reduce the computational cost. Motion compensation is applied to each frame according to the vector corresponding to the unintentional motion, and the mosaic method is used to fill the undefined areas in the compensated frames; thus full-frame stabilization is obtained. SIFT features are accurate and robust, but Speeded Up Robust Features (SURF) provide feature-point matching in a much faster way.
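The hypothesize-and-verify idea behind RANSAC can be sketched for the simplest case, a global translation between matched feature points; the surveyed papers estimate richer motion models, so this toy version is only an assumption-laden illustration:

```python
import numpy as np

def ransac_translation(src, dst, iters=100, thresh=2.0, seed=0):
    """Toy RANSAC for a global 2-D translation: repeatedly pick one
    correspondence, propose the shift it implies, count how many matches
    agree within `thresh`, and refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_shift, best_inliers = np.zeros(2), 0
    for _ in range(iters):
        i = rng.integers(len(src))
        shift = dst[i] - src[i]                        # hypothesis from 1 match
        err = np.linalg.norm(dst - (src + shift), axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best_shift = (dst[inliers] - src[inliers]).mean(axis=0)  # refit
    return best_shift
```

Because each hypothesis needs only one correspondence, mismatched (outlier) feature pairs rarely attract enough support to win, which is what makes the estimate robust to foreground objects and bad matches.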

2.3 Robust Video Stabilization Based on Particle Filtering with Weighted Feature Points
A SURF-based video stabilization approach using particle filtering with weighted feature points was proposed by C. Song et al. In this approach, feature points are obtained using SURF, which is a faster approach than SIFT. Initial motion estimation is done using RANSAC, and global motion estimation is done using particle filtering with weighted feature points. The intentional motion is estimated using Kalman filtering, and the unintentional motion is obtained by subtracting the intentional motion from the total motion vectors. The Kalman filter uses measurements observed over time, containing noise (random variations) and other inaccuracies, and produces values that tend to be closer to the true values of the measurements and their associated calculated values. Finally, the unintentional motion is compensated to obtain a stable video sequence. This SURF-based approach is found to be more accurate and faster than the SIFT-based approach.
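The Kalman-filtering idea can be illustrated with a minimal scalar filter smoothing one motion component; the paper's filter uses a richer state model, and the noise parameters `q` and `r` here are assumed values:

```python
import numpy as np

def kalman_smooth(measurements, q=1e-3, r=1.0):
    """Scalar constant-position Kalman filter: q is the assumed process-noise
    variance, r the measurement-noise variance. Returns the filtered estimate
    of the intentional motion for each noisy measurement."""
    x, p = float(measurements[0]), 1.0
    out = [x]
    for z in measurements[1:]:
        p += q                  # predict: uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update toward the measurement
        p *= (1 - k)            # uncertainty shrinks after the update
        out.append(x)
    return np.array(out)
```

Subtracting the filtered (intentional) trajectory from the raw one leaves the unintentional component that is then compensated, matching the subtraction step described above.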

Figure 2.3 Digital video stabilization algorithm based on SURF and particle filtering



A. Bosco, A. Bruna, S. Battiato, G. Bella, and G. Puglisi, Digital video stabilization through curve warping techniques, IEEE Trans. Consum. Electron., vol. 54, no. 2, pp. 220-224, May 2008.


E. J. Keogh, and M. Pazzani, Derivative dynamic time warping, Proceedings of the First SIAM International Conference on Data Mining (SDM'2001), pp. 1-11, 2001.


S. Yao, G. Parthasarathy, and D. Thyagaraju, Video stabilization using principal component analysis and scale invariant feature transform in particle filter framework, IEEE Trans. Consum. Electron., vol. 55, no. 3, pp. 1714-1721, Aug. 2009.


S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, A tutorial on particle filters for on-line non-linear/non-Gaussian Bayesian tracking, IEEE Trans. Signal Process., vol. 50, no. 2, pp. 174-188, Feb. 2002.


Y. Ke and R. Sukthankar, PCA-SIFT: A more distinctive representation for local image descriptors, Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2004.


C. Song, H. Zhao, W. Jing, and H. Zhu, Robust video stabilization based on particle filtering with weighted feature points, IEEE Trans. Consum. Electron., vol. 58, no. 2, May 2012.


H. Bay, T. Tuytelaars, and L. V. Gool, SURF: Speeded Up Robust Features, Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346-359, 2008.