ABSTRACT
The process of repairing damaged areas or removing specific regions in a video is known as video inpainting. To deal with such problems, a robust image inpainting algorithm is combined with a structure-generation technique to fill in the missing parts of a video sequence taken from a static camera. Most automatic video inpainting techniques are computationally intensive and unable to repair large holes. To overcome this problem, this paper proposes extending the inpainting method by incorporating the sparsity of natural image patches in the spatio-temporal domain. First, the video is converted into individual image frames. Second, the edges of the object to be removed are identified by the Sobel edge detection method. Third, the inpainting procedure is performed separately for each frame. Finally, the inpainted frames are displayed in sequence so that they appear as a video. For each frame, the confidence of a patch located on image structure (e.g., a corner or edge) is measured by the sparseness of its nonzero similarities to neighboring patches, which yields the patch's structure sparsity. A patch with larger structure sparsity is assigned higher priority for inpainting. The patch to be inpainted is represented as a sparse linear combination of candidate patches. The algorithm performs patch propagation automatically by propagating image patches, patch by patch, from the source region into the interior of the target region. Compared to other inpainting methods, structure sparsity gives better discrimination between texture and structure, and the patch-sparse representation yields sharper inpainted regions. This work can be extended to a wide range of applications, including video special effects and the restoration and enhancement of damaged videos.
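The structure-sparsity priority described above can be sketched as follows. This is a minimal illustrative Python sketch, not the project's implementation (the project itself targets Java); the Gaussian similarity, the sigma value, and the toy patches are all assumptions introduced here. The idea is that a patch lying on structure (an edge or corner) is strongly similar to only a few neighboring patches, so the L2 norm of its normalized similarity vector is large, while a texture patch is weakly similar to many neighbors, giving a small norm.

```python
import math

def patch_similarity(p, q):
    """Gaussian-weighted similarity from the mean squared difference of two
    equally sized patches (flat lists of gray values). sigma is an assumed
    tuning parameter, not taken from the paper."""
    sigma = 25.0
    mse = sum((a - b) ** 2 for a, b in zip(p, q)) / len(p)
    return math.exp(-mse / (sigma ** 2))

def structure_sparsity(target, neighbors):
    """Sparseness of the normalized similarity vector: large when similarity
    concentrates on a few neighbors (structure), small when spread evenly
    over all neighbors (texture)."""
    sims = [patch_similarity(target, n) for n in neighbors]
    total = sum(sims)
    if total == 0:
        return 0.0
    w = [s / total for s in sims]
    return math.sqrt(sum(x * x for x in w))

# An edge-like patch matching only one neighbor scores higher priority than
# a flat texture-like patch equally similar to all neighbors.
edge = [0, 0, 255, 255]
neighbors_edge = [[0, 0, 255, 255], [255, 255, 0, 0], [128, 128, 128, 128]]
flat = [128, 128, 128, 128]
neighbors_flat = [[128] * 4, [128] * 4, [128] * 4]
print(structure_sparsity(edge, neighbors_edge) >
      structure_sparsity(flat, neighbors_flat))  # -> True
```

In the full algorithm this score would be combined with a confidence term to order patch propagation; here only the sparsity term is shown.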
Chapter 1 INTRODUCTION
1.1 Need
In recent years, transforming cultural and historical artifacts such as photographs and vintage films/videos into digital format has become an important trend. However, because of their age, the visual quality of such images and videos after digitization is usually very poor; they often contain unstable luminance and damaged content. Video enhancement techniques widely used to restore the visual content of vintage films include video denoising, video stabilization, and video inpainting. Video inpainting, one of the most challenging of these techniques, helps users remove undesirable objects and repair areas where content is missing or damaged. Early work on image inpainting focused on removing or repairing small regions of an image using interpolation or smoothing techniques; subsequently, more powerful methods were developed to perform image inpainting on large continuous areas. Video inpainting is the method by which noisy or damaged frames are extracted from a video and replaced by new frames. Thus, video inpainting removes the damaged frames from the video and produces a good-quality video as output.
The system comprises four stages: video extraction, video inpainting using the patch NNF technique, frame completion, and video reconstruction.
The objectives of this work are to restore video quality, to remove scratches from videos, and to enhance and improve overall video quality.
1.7 Applications:
In photography and cinema, inpainting is used for film restoration: to reverse deterioration such as cracks in photographs or scratches and dust spots on film (see infrared cleaning).
It is also used for removing red-eye or the stamped date from photographs, and for removing objects for creative effect. The technique can be used to replace lost blocks in the coding and transmission of images, for example in streaming video. It can also be used to remove logos from videos.
thesis process. The method has limited applicability because it only works well under certain types of constrained camera motion. Another approach used a graph cut algorithm to divide a video sequence into multiple layers based on the motion in each layer; each layer is then inpainted by applying image inpainting algorithms. The drawback of this approach is that temporal consistency is not addressed. Kokaram and Godsill employed a 3-D autoregressive model to detect and reconstruct missing video data; the method uses an interpolation technique instead of patch duplication, so only small missing regions can be repaired, and the issue of maintaining temporal consistency is again not addressed. A two-phase sampling and alignment video inpainting approach was proposed that predicts motion in the foreground before repairing damaged foreground areas and adopts an image inpainting technique to repair damaged areas of the separated background. Subsequently, that algorithm was extended to handle situations with varying illumination: an illumination mask regulates the intensity of inpainted frames until it is similar to the original video. However, intensity flickers are viewed as visual defects in vintage films. Therefore, when inpainting damaged vintage films, we must not only recover the missing content but also stabilize the intensity change across consecutive frames. The objective of the above-mentioned measures is to guarantee visually pleasing results. Most of the above-mentioned algorithms use image inpainting techniques to repair damaged background areas in videos; however, if the damaged areas are too large, visual defects are still evident in the resulting videos.
sults in small-sized videos, the method is time consuming and computationally intensive. Moreover, information about the missing content in every video frame must be provided in advance. Another method constructs motion manifolds of the space-time volume and applies structure propagation to recover the missing portions of foreground objects and background while maintaining spatiotemporal continuity. Although the output is acceptable, the method does not work well when the missing portions of a space-time volume are large. A further approach completes a damaged video by transferring motion fields sampled from other portions of the video; its limitation is that it works only on stationary video and may easily cause over-smoothing artifacts. In a previous work, we proposed a video inpainting algorithm that segments a video into an intrinsic motion layer (created by the video camera) and an extrinsic motion layer (created by the moving object) and then removes the selected areas from the different layers. The limitation of this method is that it can only handle videos that have consistent luminance and are recorded under stable camera motion such as panning. Because restoration of digitized vintage films is an important application area, researchers have also developed video
enhancement techniques especially for vintage films. One work proposed a line scratch detection and removal algorithm; although it is very efficient, the authors only use an image interpolation method to repair damaged content. Another used temporal coherence analysis to detect scratches in video images. Both methods can only deal with small regions around defects. Spatiotemporal analysis techniques have also been proposed to repair single-frame defects, but they neglect the issue of maintaining temporal continuity.
3.2 Solution:
Given the many problems with the existing system mentioned above, the video inpainting method removes these drawbacks by focusing on individual frames instead of the whole video. In this technique, the frames are extracted from the video, missing or damaged frames are analyzed, and new frames are created and added to the existing video. Thus the technique increases the quality of the video. Powerful methods have also been developed to perform image inpainting on large continuous areas.
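The frame-wise pipeline described above (extract frames, repair each frame, reassemble) can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the project's Java implementation: frames are toy 2-D lists of gray values, and the fill function is a trivial neighbor average standing in for the real patch-based inpainting.

```python
def inpaint_frame(frame, mask):
    """Toy stand-in for real patch-based inpainting: replace each masked
    pixel with the mean of its unmasked 4-neighbors. frame is a 2-D list
    of gray values; mask[y][x] is True where content is missing."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                vals = [frame[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]]
                if vals:
                    out[y][x] = sum(vals) / len(vals)
    return out

def inpaint_video(frames, masks):
    """Extract -> per-frame inpainting -> reassemble, as described above."""
    return [inpaint_frame(f, m) for f, m in zip(frames, masks)]

# One frame with a single damaged pixel in the center.
frames = [[[10, 10, 10], [10, 0, 10], [10, 10, 10]]]
masks = [[[False, False, False], [False, True, False], [False, False, False]]]
print(inpaint_video(frames, masks)[0][1][1])  # -> 10.0
```

A real system would additionally exploit temporal redundancy between frames rather than treating each frame in isolation; this sketch only shows the structure of the loop.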
3.3 Advantages:
Reconstructs the video better. Frame completion repairs damaged frames in the form of scratches to produce a visually pleasant video with good spatial continuity and stabilized luminance.
Improves Quality.
3.4 Disadvantages:
It only recovers videos with a small number of damaged frames, and it works only for uncompressed AVI video files.
In our experiments, we found that when using existing video inpainting techniques to repair old films or remove undesirable objects, the unstable luminance and poor quality of the original film frequently cause visible defects in the resulting video. As a result, we believe a new approach is needed to tackle the challenges presented by old films as well as by modern digital videos. To this end, we propose a video inpainting algorithm that can address those challenges and produce visually pleasing results. When inpainting severely damaged videos, we begin by filling gaps in the temporal information to help the inpainting process obtain more reference data from the whole video sequence. Our proposed video inpainting algorithm involves two key steps: motion completion and frame completion. The first step, motion completion, tries to replace missing motion information to help the inpainting process obtain reliable reference data. The second step, frame completion, maintains the spatial continuity of the referenced content before it is pasted onto the corresponding missing area. This step is especially important when the luminance in the source video is unstable.
Fig. 1 shows some examples of using existing video inpainting methods applying image inpainting-related techniques to inpaint a severely damaged video sequence. In Fig. 1(a), the first and last frames contain undamaged reference information. However, without accurate motion information, the inpainting process can only use information derived from the current and/or neighboring frames to repair missing areas. The example shows how relying on spatial information from a single frame may result in poor inpainting results. Fig. 1(b) shows how, with complete motion information, the inpainting process can extract undamaged information from the entire video sequence and find reliable reference data to repair missing areas. In addition, experiment results demonstrate that motion completion also significantly improves the temporal continuity of the final result. Fig. 2 presents our proposed framework, which is comprised of three procedures: motion map construction, motion completion, and frame completion. Motion map construction is a preprocessing procedure. We begin by manually labeling damaged areas in vintage films to divide each succeeding video frame into a damaged layer and a background layer; the former shows the missing area and the latter shows the rest of the video content. Next, we estimate the motion information located in the background layer to construct a motion map for each frame. These maps form the basis of our video inpainting process and are used to replace the missing motion information in the motion completion procedure. Finally, the frame completion procedure uses a patch adjustment mechanism to paste data from neighboring or current frames onto the missing areas indicated in the damaged layer. The remainder of this paper is organized as follows. Section II describes the construction of the motion map. Section III presents the proposed video inpainting algorithm. Section IV details the experiment results, and Section V contains concluding remarks.
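The motion estimation that underlies motion map construction can be illustrated with simple exhaustive block matching. This is a hedged Python sketch, not the method of the paper (which does not specify the matching criterion here); the SAD cost, block size, and search radius are all assumptions. Each recovered displacement would be one entry of the per-frame motion map.

```python
def block_sad(a, b, ay, ax, by, bx, size):
    """Sum of absolute differences between a size x size block of frame a
    at (ay, ax) and a block of frame b at (by, bx)."""
    return sum(abs(a[ay + i][ax + j] - b[by + i][bx + j])
               for i in range(size) for j in range(size))

def estimate_motion(prev, curr, y, x, size=2, search=2):
    """Best (dy, dx) displacement of the block at (y, x) in prev into curr,
    found by exhaustive search in a small window -- one entry of the
    motion map used to guide completion."""
    h, w = len(curr), len(curr[0])
    best, best_cost = None, float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny and ny + size <= h and 0 <= nx and nx + size <= w:
                cost = block_sad(prev, curr, y, x, ny, nx, size)
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best

# A distinctive 2x2 block shifted down-right by one pixel between frames
# is recovered as displacement (1, 1).
prev = [[0] * 6 for _ in range(6)]
prev[2][2], prev[2][3], prev[3][2], prev[3][3] = 5, 9, 9, 5
curr = [[0] * 6 for _ in range(6)]
curr[3][3], curr[3][4], curr[4][3], curr[4][4] = 5, 9, 9, 5
print(estimate_motion(prev, curr, 2, 2))  # -> (1, 1)
```

In practice, hierarchical or correlation-based motion estimation (as in references [1] and [2]) is far more efficient than this brute-force search; the sketch only shows what one motion map entry represents.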
4.2.1 Level 0:
As information moves through software, it is modified by a series of transformations. A data flow diagram (DFD) is a graphical technique that depicts the information flow and the transforms that are applied as data moves from input to output. A data flow diagram may be used to represent a system or software at any level of abstraction; the DFD may be partitioned into levels that represent increasing information flow and functional detail. The DFD therefore provides a mechanism for functional modeling as well as information flow modeling. The Level 0 DFD, also called the fundamental or context-level DFD, represents the entire software element as a single bubble with input and output data.
4.2.2 Level 1:
In this level there is a more detailed description of the software, where the entire software is represented by two, three, or more bubbles.
4.3 Java:
An edition of the Java platform is the name for a bundle of related programs, or platform, from Sun which allows for developing and running programs written in the Java programming language. The platform is not specific to any one processor or operating system; rather, it provides an execution engine (called a virtual machine) and a compiler with a set of standard libraries that are implemented for various hardware and operating systems, so that Java programs can run identically on all of them. The heart of the Java platform is the concept of a "virtual machine" that executes Java bytecode programs. This bytecode is
the same no matter what hardware or operating system the program is running under. There is a JIT compiler within the Java Virtual Machine, or JVM. The JIT compiler translates the Java bytecode into native processor instructions at run-time and caches the native code in memory during execution. The use of bytecode as an intermediate language permits Java programs to run on any platform that has a virtual machine available. The use of a JIT compiler means that Java applications, after a short delay during loading and once they have "warmed up" by being all or mostly JIT-compiled, tend to run about as fast as native programs. Since JRE version 1.2, Sun's JVM implementation has included a just-in-time compiler instead of an interpreter. Although Java programs are platform independent, the code of the Java Virtual Machine (JVM) that executes these programs is not; every supported operating platform has its own JVM.
4.4 Swing:
Swing is a widget toolkit for Java. It is part of Sun Microsystems' Java Foundation Classes (JFC), an API for providing a graphical user interface (GUI) for Java programs. Swing was developed to provide a more sophisticated set of GUI components than the earlier Abstract Window Toolkit. Swing provides a look and feel that emulates the look and feel of several platforms, and also supports a pluggable look and feel that allows applications to have a look and feel unrelated to the underlying platform.
Table 5.4: System Features.
Columns: Requirement ID, Requirement Category, Requirement Importance, Requirement Description, Method of validation/verification, Priority, Difficulty.
Video Extraction: Priority Highest, Difficulty High.
5.4.2 Maintainability:
The system will be developed using standard software development conventions to aid easy review and redesign of the system. The system will be backed by full-fledged documentation of the product, which is available online as well as free to download.
5.4.3 Availability:
The system is available on demand.
5.4.4 Supportability:
The current system is able to support only Uncompressed AVI Video Files.
- Benefits
- Machine cost
A project is a temporary endeavor with a defined beginning and end (usually time-constrained, and often constrained by funding or deliverables), undertaken to meet unique goals and objectives,[2] typically to bring about beneficial change or added value. The temporary nature of projects stands in contrast with business as usual (or operations), which are repetitive and permanent.
system may take some more time. Video size matters a lot, as a large video may degrade the video extraction process and overall system performance. The system may run out of memory if the available RAM is small compared to the number of video frames.
7.2 Financial:
The financial investment for creating this application is very low. All the software to be used, such as NetBeans and Microsoft SQL Server, is available free on the internet. So there is no problem regarding the financial feasibility of the project.
7.3 Operational:
The project being developed is very useful in remote areas or at accident sites where we do not know the place, but we can find our friends or relatives if the software is installed on their mobile and they have knowledge of how to use it.
CONCLUSION
We have proposed a novel video inpainting algorithm for digitized aged films. The algorithm consists of two key procedures: motion completion and frame completion. In addition, a preprocessing procedure constructs a motion map to record the motion information in undamaged source areas. The motion completion procedure restores the motion in each missing area based on the completion order determined by the priority computation step. The completed motion map is used to improve the temporal continuity and find the best-matched result for inpainting damaged areas. The frame completion procedure seamlessly repairs all the damaged areas and reduces the intensity of video flicker. During the frame completion phase, we use a panoramic mosaic to help stabilize the global and local luminance and thereby obtain better restored videos.
REFERENCES
[1] J. Bergen, P. Anandan, K. Hanna, and R. Hingorani, "Hierarchical model-based motion estimation," in Proc. 2nd Eur. Conf. Computer Vision, 1992, pp. 237-252.
[2] A. M. Huang and T. Nguyen, "Correlation-based motion processing with adaptive interpolation scheme for motion-compensated frame interpolation," IEEE Trans. Image Process., vol. 18, no. 4, pp. 740-752, Apr. 2009.
[3] P. Angin, B. Bhargava, R. Ranchal, N. Singh, L. Ben Othmane, L. Lilien, and M. Linderman, "A User-Centric Approach for Privacy and Identity Management in Cloud Computing," in Proc. 29th IEEE Int. Symp. on Reliable Distributed Systems (SRDS), New Delhi, India, Nov. 2010.
[4] K. M. Gullu, O. Urhan, and S. Erturk, "Scratch detection via temporal coherency analysis," in Proc. 2006 IEEE Int. Symp. Circuits and Systems, 2006.
[5] Y. Shen, F. Lu, X.-C. Cao, and H. Foroosh, "Video completion for perspective camera under constrained motion," in Proc. Int. Conf. Pattern Recognition, 2006, pp. 63-66.
Contents
1 INTRODUCTION
1.1 Need
1.2 Need for the system development:
1.3 Aim And objective:
1.4 Safety Requirements:
1.5 Security Requirements:
1.6 Risk Management:
1.6.1 Risk Identification:
1.6.2 Risk Analysis:
1.7 Applications:
2 LITERATURE SURVEY
2.1 Basis Of Project Idea:
2.2 Proposed Optimizing:
3 PROJECT STATEMENT
3.1 Problem Definition:
3.2 Solution:
3.3 Advantages:
3.4 Disadvantages:
4 SYSTEM DEVELOPMENT
4.1 System Architecture:
4.2 Data Flow Diagram:
4.2.1 Level 0:
4.2.2 Level 1:
4.3 Java:
4.4 Swing:
List of Figures
4.1 Video Inpainting Architecture
4.2 Data Flow Diagram-Level 0
4.3 Data Flow Diagram-Level 1
6.1 Use Case Diagram
List of Tables
5.1 Software Requirement
5.2 Hardware Requirement
5.3 System Features
5.4 System Features
ACKNOWLEDGEMENT
We would like to thank our project co-ordinator Prof. Abhale B.A. and project guide Prof. Thakur P.S. for their guidance in this project work and their tireless support in ensuring its completion. We would also like to thank the HOD of the Information Technology Department, Prof. Gaikwad K.P., for providing us all the facilities in the department and outside. We would like to thank all our friends and well-wishers who helped us directly and indirectly during the completion of the project. This project, being a conceptual one, needed a lot of support from our guide so that we could achieve what we set out to get. And we are glad to say that the success of the project is an acknowledgement in itself.