Deblurring Images

Fundamentals of Algorithms
Editor-in-Chief: Nicholas J. Higham, University of Manchester
The SIAM series on Fundamentals of Algorithms is a collection of short user-oriented books on state-of-the-art numerical methods. Written by experts, the books provide readers with sufficient knowledge to choose an appropriate method for an application and to understand the method's strengths and limitations. The books cover a range of topics drawn from numerical analysis and scientific computing. The intended audiences are researchers and practitioners using the methods and upper level undergraduates in mathematics, engineering, and computational science. Books in this series not only provide the mathematical background for a method or class of methods used in solving a specific problem but also explain how the method can be developed into an algorithm and translated into software. The books describe the range of applicability of a method and give guidance on troubleshooting solvers and interpreting results. The theory is presented at a level accessible to the practitioner. MATLAB® software is the preferred language for codes presented since it can be used across a wide variety of platforms and is an excellent environment for prototyping, testing, and problem solving. The series is intended to provide guides to numerical algorithms that are readily accessible, contain practical advice not easily found elsewhere, and include understandable codes that implement the algorithms.

Editorial Board
Peter Benner, Technische Universität Chemnitz
John R. Gilbert, University of California, Santa Barbara
Michael T. Heath, University of Illinois—Urbana-Champaign
C. T. Kelley, North Carolina State University
Cleve Moler, The MathWorks
James G. Nagy, Emory University
Dianne P. O'Leary, University of Maryland
Robert D. Russell, Simon Fraser University
Robert D. Skeel, Purdue University
Danny Sorensen, Rice University
Andrew J. Wathen, Oxford University
Henry Wolkowicz, University of Waterloo

Series Volumes
Hansen, P. C., Nagy, J. G., and O'Leary, D. P., Deblurring Images: Matrices, Spectra, and Filtering
Davis, T. A., Direct Methods for Sparse Linear Systems
Kelley, C. T., Solving Nonlinear Equations with Newton's Method

Deblurring Images: Matrices, Spectra, and Filtering

Per Christian Hansen, Technical University of Denmark, Lyngby, Denmark
James G. Nagy, Emory University, Atlanta, Georgia
Dianne P. O'Leary, University of Maryland, College Park, Maryland

Society for Industrial and Applied Mathematics, Philadelphia

Copyright © 2006 by Society for Industrial and Applied Mathematics.

10 9 8 7 6 5 4 3 2 1

All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.

Trademarked names may be used in this book without the inclusion of a trademark symbol. These names are used in an editorial context only; no infringement of trademark is intended. GIF is a trademark of CompuServe Incorporated. TIFF is a trademark of Adobe Systems, Inc. MATLAB is a registered trademark of The MathWorks, Inc. For MATLAB product information, please contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA, 508-647-7000, Fax: 508-647-7101, info@mathworks.com, www.mathworks.com/

The butterfly image appears in Figure 2.2 on page 16, in Figure 7.3 on page 89, and on page 88. Figure 7.5 on page 98 is used with permission of Timothy O'Leary. The left-hand image in Challenge 12 on page 70 is used with permission of Brendan O'Leary. The right-hand image in Challenge 12 on page 70 is used with permission of Marielba Rojas.

No warranties, express or implied, are made by the publisher, authors, and their employers that the programs contained in this volume are free of error. They should not be relied on as the sole basis to solve a problem whose incorrect solution could result in injury to person or property. If the programs are employed in such a manner, it is at the user's own risk and the publisher, authors, and their employers disclaim all liability for such misuse.

Library of Congress Cataloging-in-Publication Data

Hansen, Per Christian. Deblurring images : matrices, spectra, and filtering / Per Christian Hansen, James G. Nagy, Dianne P. O'Leary. p. cm. — (Fundamentals of algorithms). Includes bibliographical references and index. ISBN-13: 978-0-898716-18-4 (pbk.) ISBN-10: 0-89871-618-7 (pbk.) 1. Image processing—Mathematical models. I. Nagy, James G. II. O'Leary, Dianne P. III. Title. TA1637.H364 2006 621.36'7015118—dc22 2006050450

To our teachers and our students.


Contents

Preface
How to Get the Software
List of Symbols

1 The Image Deblurring Problem
1.1 How Images Become Arrays of Numbers
1.2 A Blurred Picture and a Simple Linear Model
1.3 A First Attempt at Deblurring
1.4 Deblurring Using a General Linear Model

2 Manipulating Images in MATLAB
2.1 Image Basics
2.2 Reading, Displaying, and Writing Images
2.3 Performing Arithmetic on Images
2.4 Displaying and Writing Revisited

3 The Blurring Function
3.1 Taking Bad Pictures
3.2 The Matrix in the Mathematical Model
3.3 Obtaining the PSF
3.4 Noise
3.5 Boundary Conditions

4 Structured Matrix Computations
4.1 Basic Structures
4.1.1 One-Dimensional Problems
4.1.2 Two-Dimensional Problems
4.1.3 Separable Two-Dimensional Blurs
4.2 BCCB Matrices
4.2.1 Spectral Decomposition of a BCCB Matrix
4.2.2 Computations with BCCB Matrices
4.3 BTTB + BTHB + BHTB + BHHB Matrices
4.4 Kronecker Product Matrices
4.4.1 Constructing the Kronecker Product from the PSF
4.4.2 Matrix Computations with Kronecker Products
4.5 Summary of Fast Algorithms
4.6 Creating Realistic Test Data

5 SVD and Spectral Analysis
5.1 Introduction to Spectral Filtering
5.2 Incorporating Boundary Conditions
5.3 SVD Analysis
5.4 The SVD Basis for Image Reconstruction
5.5 The DFT and DCT Bases
5.6 The Discrete Picard Condition

6 Regularization by Spectral Filtering
6.1 Two Important Methods
6.2 Implementation of Filtering Methods
6.3 Regularization Errors and Perturbation Errors
6.4 Parameter Choice Methods
6.5 Implementation of GCV
6.6 Estimating Noise Levels

7 Color Images, Smoothing Norms, and Other Topics
7.1 A Blurring Model for Color Images
7.2 Tikhonov Regularization Revisited
7.3 Working with Partial Derivatives
7.4 Working with Other Smoothing Norms
7.5 Total Variation Deblurring
7.6 Blind Deconvolution
7.7 When Spectral Methods Cannot Be Applied

Appendix: MATLAB Functions
1. TSVD Regularization Methods: Periodic Boundary Conditions; Reflexive Boundary Conditions; Separable Two-Dimensional Blur; Choosing Regularization Parameters
2. Tikhonov Regularization Methods: Periodic Boundary Conditions; Reflexive Boundary Conditions; Separable Two-Dimensional Blur; Choosing Regularization Parameters
3. Auxiliary Functions

Bibliography
Index

Preface

There is nothing worse than a sharp image of a fuzzy concept. —Ansel Adams

Whoever controls the media—the images—controls the culture. —Allen Ginsberg

This book is concerned with deconvolution methods for image deblurring, that is, computational techniques for reconstruction of blurred images based on a concise mathematical model for the blurring process. The book describes the algorithms and techniques collectively known as spectral filtering methods, in which the singular value decomposition—or a similar decomposition with spectral properties—is used to introduce the necessary regularization or filtering in the reconstructed image.

All the methods presented in this book belong to the general class of regularization methods, which are methods specially designed for solving ill-posed problems. We do not require the reader to be familiar with these regularization methods or with ill-posed problems. While the underlying mathematical model is an ill-posed problem in the form of an integral equation of the first kind (for which there is a rich theory), we have chosen to keep our formulations in terms of matrices, vectors, and matrix computations. Our reasons for this choice of formulation are twofold: (1) the linear algebra terminology is more accessible to many of our readers, and (2) it is much closer to the computational tools that are used to solve the given problems.

The book is intended for beginners in the field of image restoration and regularization. Readers in applied mathematics, numerical analysis, and computational science will be exposed to modern techniques to solve realistic large-scale problems in image deblurring. For readers who already have this knowledge, we aim to give a new and practical perspective on the issues of using regularization methods to solve real problems. The main purpose of the book is to give students and engineers an understanding of the linear algebra behind the filtering methods, the techniques, and the algorithms—including the insight that is obtained from studying the underlying ill-posed problems. Throughout the book we give references to the literature for more details about the problems.

We will assume that the reader is familiar with MATLAB and also, preferably, has access to the MATLAB Image Processing Toolbox (IPT). The topics covered in our book are well suited for computer demonstrations, and our aim is that the reader will be able to start deblurring images while reading the book, based on the MATLAB "templates" presented in this book. Without too much pain, a user can then make more dedicated and efficient computer implementations if there is a need for it. We will also assume that the reader is familiar with basic concepts of linear algebra and matrix computations, including the singular value decomposition and orthogonal transformations. We do not require the signal processing background that is often needed in classical books on image processing. MATLAB provides a convenient and widespread computational platform for doing numerical computations, and therefore it is natural to use it for the examples and algorithms presented here.

Throughout the book we have included Very Important Points (VIPs) to summarize the presentation and Pointers to provide additional information and references. We also provide Challenges so that the reader can gain experience with the methods we discuss. We hope that readers have fun with these, especially in deblurring the mystery image of Challenge 2. The images and MATLAB functions discussed in the book, as well as additional Challenges and other material, can be found at www.siam.org/books/fa03

The book starts with a short chapter that introduces the fundamental problem of image deblurring and the spectral filtering methods for computing reconstructions. Chapter 2 explains how to work with images of various formats in MATLAB. We explain how to load and store the images, and how to perform mathematical operations on them. Chapter 3 gives a description of the image blurring process. We derive the mathematical model for the point spread function (PSF) that describes the blurring due to different sources, and we discuss some topics related to the boundary conditions that must always be specified. The chapter sets up the basic notation for the linear system of equations associated with the blurring model. Chapter 4 gives a thorough description of structured matrix computations. We introduce circulant, Toeplitz, and Hankel matrices, as well as Kronecker products. We show how these structures reflect the PSF, and how operations with these matrices can be performed efficiently by means of the FFT algorithm. Chapter 5 builds up an understanding of the mechanisms and difficulties associated with image deblurring, expressed in terms of spectral decompositions, and also introduces the most important tools, techniques, and concepts needed for the remaining chapters, thus setting the stage for the reconstruction algorithms. Chapter 6 explains how regularization, in the form of spectral filtering, is applied to the image deblurring problem. In addition to covering several spectral filtering methods and their implementations, we also discuss methods for choosing the regularization parameter that controls the smoothing. Finally, Chapter 7 gives an introduction to other aspects of deblurring methods and techniques that we cannot cover in depth in this book.

We are most grateful for the help and support we have received in writing this book. The U.S. National Science Foundation and the Danish Research Agency provided funding for much of the work upon which this book is based. Linda Thiel, Sara Murphy, and others on the SIAM staff patiently worked with us in preparing the manuscript for print. The referees and other readers provided many helpful comments. We would like to acknowledge, in particular, Julianne Chung, Martin Hanke-Bourgeois, Nicholas Higham, Stephen Marsland, Robert Plemmons, and Zdenek Strakos. Nicola Mastronardi graciously invited us to present a course based on this book at the Third International School in Numerical Linear Algebra and Applications, Monopoli, Italy, September 2005, and the book benefited from the suggestions of the participants and the experience gained there. Thank you to all.

Per Christian Hansen
James G. Nagy
Dianne P. O'Leary

Lyngby, Atlanta, and College Park, 2006

How to Get the Software

This book is accompanied by a small package of MATLAB software as well as some test images. The software and images are available from SIAM at the URL

www.siam.org/books/fa03

The material on the website is organized as follows:

• HNO FUNCTIONS: a small MATLAB package, written by us, which implements all the image deblurring algorithms presented in the book and is designed to let the reader experiment with the methods. The package also includes several auxiliary functions, e.g., for creating point spread functions. It requires MATLAB version 6.5 or newer versions.
• CHALLENGE FILES: the files for the Challenges in the book.
• ADDITIONAL CHALLENGES: a small collection of additional Challenges related to the book.
• ADDITIONAL IMAGES: a small collection with some additional images which can be used for tests and experiments.
• ADDITIONAL MATERIAL: background material about matrix decompositions used in this book.

We invite readers to contribute additional challenges and images.

MATLAB can be obtained from

The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA 01760-2098
(508) 647-7000
Fax: (508) 647-7001
Email: info@mathworks.com
URL: http://www.mathworks.com

List of Symbols

We begin with a survey of the notation and image deblurring jargon used in this book. All of the main symbols used in the book are listed here.

Throughout, an image (grayscale or color) is always referred to as an image array, having in mind its natural representation in MATLAB. For the same reason we use the term PSF array for the image of the point spread function. The phrase matrix is reserved for use in connection with the linear systems of equations that form the basis for our methods. Capital boldface always denotes a matrix or an array, while small boldface denotes a vector and a plain italic typeface denotes a scalar or an integer. The fast transforms (FFT and DCT) used in our algorithms are always computed by means of efficient implementations although, for notational reasons, we often represent them by matrices.

IMAGE SYMBOLS
B, X ∈ R^(m×n)   Image array (always m × n)
E                Noise "image" (always m × n)
P                PSF array (always m × n)
m, n             Dimensions of image array

LINEAR ALGEBRA SYMBOLS
A                          Matrix (always N × N)
N = m · n                  Matrix dimension
a_ij                       Matrix element (i, j) of A
a_j                        Column j of matrix A
I_k                        Identity matrix (order k)
x                          Vector
x_i                        Vector element i of x
e_i                        Standard unit vector (ith column of identity matrix)
|| · ||_2, || · ||_p, || · ||_F   2-norm, p-norm, Frobenius norm

SPECIAL MATRICES
A_BC            Boundary conditions matrix
D               Discrete derivative matrix
A_0             Matrix for zero boundary conditions
A_color         Color blurring matrix (always 3 × 3)
A_c             Column blurring matrix
A_r             Row blurring matrix
Z_1, Z_2        Shift matrices

SPECTRAL DECOMPOSITION
U               Matrix of eigenvectors
Λ               Diagonal matrix of eigenvalues

SINGULAR VALUE DECOMPOSITION
U               Matrix of left singular vectors
V               Matrix of right singular vectors
Σ               Diagonal matrix of singular values
u_i             Left singular vector
v_i             Right singular vector
σ_i             Singular value

REGULARIZATION
φ_i             Filter factor
Φ = diag(φ_i)   Diagonal matrix of filter factors
k               Truncation parameter for TSVD
α               Regularization parameter for Tikhonov

OTHER SYMBOLS
⊗               Kronecker product
vec(·)          Stacking columns: vec notation
conj(·)         Complex conjugation
C = C_r ⊗ C_c   Discrete cosine transform (DCT) matrix (two-dimensional)
F = F_r ⊗ F_c   Discrete Fourier transform (DFT) matrix (two-dimensional)

Chapter 1
The Image Deblurring Problem

You cannot depend on your eyes when your imagination is out of focus. —Mark Twain

When we use a camera, we want the recorded image to be a faithful representation of the scene that we see—but every image is more or less blurry. Thus, image deblurring is fundamental in making pictures sharp and useful.

A digital image is composed of picture elements called pixels. Each pixel is assigned an intensity, meant to characterize the color of a small rectangular segment of the scene. A small image typically has around 256² = 65536 pixels while a high-resolution image often has 5 to 10 million pixels. Some blurring always arises in the recording of a digital image, because it is unavoidable that scene information "spills over" to neighboring pixels. For example, the optical system in a camera lens may be out of focus, so that the incoming light is smeared out. The same problem arises, for example, in astronomical imaging where the incoming light in the telescope has been slightly bent by turbulence in the atmosphere. In these and similar situations, the inevitable result is that we record a blurred image.

In image deblurring, we seek to recover the original, sharp image by using a mathematical model of the blurring process. The key issue is that some information on the lost details is indeed present in the blurred image—but this information is "hidden" and can only be recovered if we know the details of the blurring process.

Unfortunately there is no hope that we can recover the original image exactly! This is due to various unavoidable errors in the recorded image. The most important errors are fluctuations in the recording process and approximation errors when representing the image with a limited number of digits. The influence of this noise puts a limit on the size of the details that we can hope to recover in the reconstructed image, and the limit depends on both the noise and the blurring process.

POINTER. Image enhancement is used in the restoration of older movies. For example, the original Star Wars trilogy was enhanced for release on DVD. These methods are not model based and therefore not covered in this book. See [33] for more information.

One of the challenges of image deblurring is to devise efficient and reliable algorithms for recovering as much information as possible from the given data. This chapter provides a brief introduction to the basic image deblurring problem and explains why it is difficult. In the following chapters we give more details about techniques and algorithms for image deblurring.

MATLAB is an excellent environment in which to develop and experiment with filtering methods for image deblurring. The basic MATLAB package contains many functions and tools for this purpose, but in some cases it is more convenient to use routines that are only available from the Signal Processing Toolbox (SPT) and the Image Processing Toolbox (IPT). We will therefore use these toolboxes when convenient. When possible, we provide alternative approaches that require only core MATLAB commands in case the reader does not have access to the toolboxes.

POINTER. Throughout the book, we provide example images and MATLAB code. This material can be found on the book's website: www.siam.org/books/fa03  For readers needing an introduction to MATLAB programming, we suggest the excellent book by Higham and Higham [27].

1.1 How Images Become Arrays of Numbers

Having a way to represent images as arrays of numbers is crucial to processing images using mathematical techniques. Consider the following 9 × 16 array, whose entries are listed here:

0 0 0 0 0 0 0 0 0 0 8 8 8 8 8 8 8 0 0 8 8 8 8 8 8 8 0 0 0 0 0 0 0 8 8 0 0 0 0 0 0 0 8 8 0 0 0 0 0 0 0 8 8 0 0 0 0 0 0 0 0 0 0 0 0 4 4 0 0 0 0 0 0 0 0 4 4 0 3 3 3 3 3 0 0 4 4 0 3 3 3 3 3 0 0 4 4 0 3 3 3 3 3 0 0 4 4 0 3 3 3 3 3 0 0 4 4 0 3 3 3 3 3 0 0 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

If we enter this into a MATLAB variable X and display the array with the commands imagesc(X), axis image, colormap(gray), then we obtain the picture shown at the left of Figure 1.1. The entries with value 8 are displayed as white, entries equal to zero are black, and values in between are shades of gray.

Color images can be represented using various formats; for example, the RGB format stores images as three components, which represent their intensities on the red, green, and blue scales. A pure red color is represented by the intensity values (1, 0, 0) while, for example, the values (1, 1, 0) represent yellow and (0, 0, 1) represent blue; other colors can be obtained with different choices of intensities. Hence, to represent a color image, we need three values per pixel.

For example, if X is a multidimensional MATLAB array of dimensions 9 × 16 × 3, with its three layers X(:,:,1), X(:,:,2), and X(:,:,3) defined as 9 × 16 arrays of zeros and ones that mark which pixels contain each of the red, green, and blue components, then we can display this image, in color, with the command imagesc(X), obtaining the second picture shown in Figure 1.1.

Figure 1.1. Images created by displaying arrays of numbers.

This brings us to our first Very Important Point (VIP).

VIP 1. A digital image is a two- or three-dimensional array of numbers representing intensities on a grayscale or color scale.
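The following sketch builds a small 9 × 16 × 3 array of the kind just described and displays it in color. It is only an illustration: the particular rectangles of ones used here are made up and are not the arrays referred to above.

    X = zeros(9, 16, 3);            % all-black 9-by-16 color image
    X(2:8, 2:3, 1) = 1;             % red channel: a vertical bar of ones
    X(3:7, 10:14, 2) = 1;           % green channel: a rectangle of ones
    X(2:8, 7:8, 3) = 1;             % blue channel: another vertical bar
    figure, image(X), axis image    % RGB values in [0,1] are displayed directly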

Most of this book is concerned with grayscale images. However, the techniques carry over to color images, and in Chapter 7 we extend our notation and models to color images.

Grayscale images, such as the ones in Figure 1.2, are typically recorded by means of a CCD (charge-coupled device), which is an array of tiny detectors, arranged in a rectangular grid, able to record the amount, or intensity, of the light that hits each detector. Thus, we can think of a grayscale digital image as a rectangular m × n array, whose entries represent light intensities captured by the detectors.

Figure 1.2. A sharp image (left) and the corresponding blurred image (right). The blurred image is precisely what would be recorded in the camera if the photographer forgot to focus the lens.

1.2 A Blurred Picture and a Simple Linear Model

Before we can deblur an image, we must have a mathematical model that relates the given blurred image to the unknown true image. Consider the example shown in Figure 1.2. The left is the "true" scene, and the right is a blurred version of the same image. To fix notation, X ∈ R^(m×n) represents the desired sharp image, while B ∈ R^(m×n) denotes the recorded blurred image.

Let us first consider a simple case where the blurring of the columns in the image is independent of the blurring of the rows. When this is the case, then there exist two matrices Ac ∈ R^(m×m) and Ar ∈ R^(n×n), such that we can express the relation between the sharp and blurred images as

    B = Ac X Ar^T.    (1.1)

The left multiplication with the matrix Ac applies the same vertical blurring operation to all n columns x_j of X, because

    Ac X = Ac [ x_1  x_2  ...  x_n ] = [ Ac x_1  Ac x_2  ...  Ac x_n ].

Similarly, the right multiplication with Ar^T applies the same horizontal blurring to all m rows of X. Since matrix multiplication is associative, (Ac X) Ar^T = Ac (X Ar^T), and therefore it does not matter in which order we perform the two blurring operations.
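As a small illustration of the separable model (1.1), the following sketch builds Ac and Ar as Toeplitz matrices generated by a one-dimensional Gaussian and blurs a simple test image. The image, the Gaussian shape, and the width parameter s are arbitrary choices made only for this demonstration; they are not the matrices used for the pumpkin images in the figures.

    m = 64; n = 64; s = 2;                          % s: assumed width of the blur
    X = zeros(m, n);  X(25:40, 25:40) = 1;          % simple sharp test image
    c = exp(-((0:m-1).^2) / (2*s^2));               % first column of a Gaussian Toeplitz blur
    Ac = toeplitz(c) / (2*sum(c) - c(1));           % column blurring (rows sum to about 1)
    r = exp(-((0:n-1).^2) / (2*s^2));
    Ar = toeplitz(r) / (2*sum(r) - r(1));           % row blurring
    B = Ac * X * Ar';                               % blurred image, as in the model above
    figure, imagesc(B), axis image, colormap(gray)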

The reason for our use of the transpose of the matrix Ar will be clear later, when we return to this blurring model and matrix formulations.

POINTER. Image deblurring is much more than just a useful tool for our vacation pictures. For example, analysis of astronomical images gives clues to the behavior of the universe. At a more mundane level, barcode readers used in supermarkets and by shipping companies must be able to compensate for imperfections in the scanner optics; see Wittman [63] for more information.

1.3 A First Attempt at Deblurring

If the image blurring model is of the simple form Ac X Ar^T = B, then one might think that the naive solution

    Xnaive = Ac^(-1) B Ar^(-T),    where Ar^(-T) = (Ar^(-1))^T = (Ar^T)^(-1),

will yield the desired reconstruction. Figure 1.3 illustrates that this is probably not such a good idea: the reconstructed image does not appear to have any features of the true image!

Figure 1.3. The naive reconstruction of the pumpkin image in Figure 1.2, obtained by computing Xnaive = Ac^(-1) B Ar^(-T) via Gaussian elimination on both Ac and Ar. Both matrices are ill-conditioned, and the image Xnaive is dominated by the influence from rounding errors as well as errors in the blurred image B.

To understand why this naive approach fails, we must realize that the blurring model in (1.1) is not quite correct, because we have ignored several types of errors. Let us take a closer look at what is represented by the image B. First, let Bexact = Ac X Ar^T represent the ideal blurred image, ignoring all kinds of errors. Because the blurred image is collected by a mechanical device, it is inevitable that small random errors (noise) will be present in the recorded data. Moreover, when the image is digitized, it is represented by a finite (and typically small) number of digits. Thus the recorded blurred image B is really given by

    B = Bexact + E = Ac X Ar^T + E,    (1.2)

where the noise image E (of the same dimensions as B) represents the noise and the quantization errors in the recorded image. Consequently the naive reconstruction is given by

    Xnaive = Ac^(-1) B Ar^(-T) = Ac^(-1) Bexact Ar^(-T) + Ac^(-1) E Ar^(-T),

and therefore

    Xnaive = X + Ac^(-1) E Ar^(-T),    (1.3)

where the term Ac^(-1) E Ar^(-T), which we can informally call inverted noise, represents the contribution to the reconstruction from the additive noise. This inverted noise will dominate the solution if the second term Ac^(-1) E Ar^(-T) in (1.3) has larger elements than the first term X. Unfortunately, in many situations, as in Figure 1.3, the inverted noise indeed dominates.

Apparently, image deblurring is not as simple as it first appears. We can now state the purpose of our book more precisely, namely, to describe effective deblurring methods that are able to handle correctly the inverted noise.

CHALLENGE 1. The exact and blurred images X and B in the above figure can be constructed in MATLAB by calling

    [B, Ac, Ar, X] = challenge1(m, n, noise);

with m = n = 256 and noise = 0.01. Try to deblur the image B using

    Xnaive = Ac \ B / Ar';

To display a grayscale image, say, X, use the commands imagesc(X), axis image, colormap gray.

How large can you choose the parameter noise before the inverted noise dominates the deblurred image? Does this value of noise depend on the image size?
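One way to run the experiment suggested in Challenge 1 is sketched below. It assumes the calling sequence shown in the Challenge, with challenge1.m taken from the book's website; the trial noise levels are arbitrary values chosen for illustration.

    for noise = [0.001 0.01 0.05 0.1]               % arbitrary trial values
        [B, Ac, Ar, X] = challenge1(256, 256, noise);
        Xnaive = Ac \ B / Ar';
        figure, imagesc(Xnaive), axis image, colormap gray
        title(['noise = ' num2str(noise)])
    end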

CHALLENGE 2. The above image B as well as the blurring matrices Ac and Ar are given in the file challenge2.mat. Can you deblur this image with the naive approach, so that you can read the text in it?

As you learn more throughout the book, use Challenges 1 and 2 as examples to test your skills and learn more about the presented methods.

CHALLENGE 3. For the simple model B = Ac X Ar^T + E it is easy to show that the relative error in the naive reconstruction Xnaive = Ac^(-1) B Ar^(-T) satisfies

    || Xnaive − X ||_F / || X ||_F  ≤  cond(Ac) cond(Ar) || E ||_F / || Bexact ||_F,

where || X ||_F denotes the Frobenius norm of the matrix X. For the test problem in Challenge 1 and different values of the image size, use this relation to determine the maximum allowed value of || E ||_F such that the relative error in the naive reconstruction is guaranteed to be less than 5%.

POINTER. The quantity cond(A) is computed by the MATLAB function cond(A). It is the condition number of A, formally defined by (1.8), measuring the possible magnification of the relative error in E in producing the solution Xnaive.

1.4 Deblurring Using a General Linear Model

Underlying all material in this book is the assumption that the blurring, i.e., the operation of going from the sharp image to the blurred image, is linear. As usual in the physical sciences, this assumption is made because in many situations the blur is indeed linear, or at least well approximated by a linear model. An important consequence of the assumption

is that we have a large number of tools from linear algebra and matrix computations at our disposal.

POINTER. The use of linear algebra in image reconstruction has a long history, and goes back to classical works such as the book by Andrews and Hunt [1]. The equation A x = b can often be considered as a discretization of an underlying integral equation; the details can be found in [23].

In order to handle a variety of applications, we need a blurring model somewhat more general than that in (1.1). Our basic assumption is that we have a linear blurring process. This means that if B1 and B2 are the blurred images of the exact images X1 and X2, then B = α B1 + β B2 is the image of X = α X1 + β X2. When this is the case, then there exists a large matrix A such that b = vec(B) and x = vec(X) are related by the equation A x = b.

The key to obtaining this general linear model is to rearrange the elements of the images X and B into column vectors by stacking the columns of these images into two long vectors x and b, both of length N = mn. The mathematical notation for this operator is vec, i.e.,

    x = vec(X),    b = vec(B).

Since the blurring is assumed to be a linear operation, there must exist a large blurring matrix A ∈ R^(N×N) such that x and b are related by the linear model

    A x = b,    (1.4)

and this is our fundamental image blurring model. The matrix A represents the blurring that is taking place in the process of going from the exact to the blurred image; we will explain how it can be constructed from the imaging system in Chapter 3, and also discuss the precise structure of the matrix in Chapter 4. For now, assume that A is known. When this is the case, the naive approach to image deblurring is simply to solve the linear algebraic system in (1.4), but from the previous section, we expect failure. Let us now explain why, this time using the general formulation in (1.4).

We repeat the computation from the previous section. Again let Bexact and E be, respectively, the noise-free blurred image and the noise image, and define the corresponding vectors

    bexact = vec(Bexact),    e = vec(E).

Then the noisy recorded image B is represented by the vector

    b = bexact + e = A x + e,

and consequently the naive solution is given by

    xnaive = A^(-1) b = A^(-1) bexact + A^(-1) e = x + A^(-1) e,    (1.5)

where the term A^(-1) e is the inverted noise. Equation (1.3) in the previous section is a special case of this equation. The important observation here is that the deblurred image consists of two components: the first component is the exact image, and the second component is the inverted noise. If the deblurred image looks unacceptable, it is because the inverted noise term contaminates the reconstructed image.
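As an aside to the vec notation just introduced, the following sketch verifies numerically that, for the separable model of Section 1.2, the big matrix A in (1.4) is the Kronecker product kron(Ar, Ac). This anticipates the structured-matrix material of Chapter 4 and is shown only as an illustration; the matrices below are random stand-ins, and kron is feasible only for very small images.

    m = 8; n = 8;                         % tiny sizes; kron is infeasible for real images
    X  = rand(m, n);
    Ac = rand(m, m);  Ar = rand(n, n);    % stand-ins for the two blurring matrices
    A  = kron(Ar, Ac);                    % the big blurring matrix for a separable blur
    x  = X(:);                            % vec(X): stack the columns of X
    b  = A * x;
    norm(reshape(b, m, n) - Ac*X*Ar', 'fro')   % essentially zero: A*x = vec(Ac*X*Ar')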

Important insight about the inverted noise term can be gained using the singular value decomposition (SVD), which is the tool-of-the-trade in matrix computations for analyzing linear systems of equations.

POINTER. Good presentations of the SVD can be found in the books by Björck [4], Golub and Van Loan [18], and Stewart [57].

The SVD of a square matrix A ∈ R^(N×N) is essentially unique and is defined as the decomposition

    A = U Σ V^T,

where U and V are orthogonal matrices, satisfying U^T U = I_N and V^T V = I_N, and Σ = diag(σ_i) is a diagonal matrix whose elements σ_i are nonnegative and appear in nonincreasing order. The quantities σ_i are called the singular values, and the rank of A is equal to the number of positive singular values. The columns u_i of U are called the left singular vectors, while the columns v_i of V are the right singular vectors.

Assuming for the moment that all singular values are strictly positive, it is straightforward to show that the inverse of A is given by

    A^(-1) = V Σ^(-1) U^T

(we simply verify that A^(-1) A = I_N). Since Σ is a diagonal matrix, its inverse Σ^(-1) is also diagonal, with entries 1/σ_i for i = 1, ..., N. Another representation of A and A^(-1) is also useful to us:

    A = Σ_{i=1}^{N} u_i σ_i v_i^T,    A^(-1) = Σ_{i=1}^{N} v_i σ_i^(-1) u_i^T.

Since U^T U = I_N, we see that u_i^T u_j = 0 if i ≠ j, and, similarly, v_i^T v_j = 0 if i ≠ j.
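The small experiment below computes the SVD of a blur-like test matrix and forms the naive solution directly from the decomposition. It is only a sketch: the Gaussian Toeplitz construction, the sizes, and the noise level are arbitrary choices, kept tiny so that the dense SVD is cheap.

    m = 16; n = 16; N = m*n; s = 2;
    c = exp(-((0:m-1).^2) / (2*s^2));
    Ac = toeplitz(c) / (2*sum(c) - c(1));           % small Gaussian blur, as in the earlier sketch
    A  = kron(Ac, Ac);                              % small blur matrix (N = 256)
    xtrue = rand(N, 1);
    b = A*xtrue + 1e-6*randn(N, 1);                 % blurred data plus a little noise
    [U, S, V] = svd(A);
    s_vals = diag(S);
    figure, semilogy(s_vals, '.')                   % the singular values decay toward zero
    xnaive = V * ((U'*b) ./ s_vals);                % the naive solution A\b written via the SVD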

Using this relation, it follows immediately that the naive reconstruction given in (1.5) can be written as

    xnaive = A^(-1) b = Σ_{i=1}^{N} (u_i^T b / σ_i) v_i,    (1.6)

and the inverted noise contribution to the solution is given by

    A^(-1) e = Σ_{i=1}^{N} (u_i^T e / σ_i) v_i.    (1.7)

In order to understand when this error term dominates the solution, we need to know that the following properties generally hold for image deblurring problems:

• The error components |u_i^T e| are small and typically of roughly the same order of magnitude for all i.
• The singular values decay to a value very close to zero. As a consequence the condition number

    cond(A) = σ_1 / σ_N    (1.8)

is very large, indicating that the solution is very sensitive to perturbation and rounding errors.
• The singular vectors corresponding to the smaller singular values typically represent higher-frequency information. That is, as i increases, the vectors u_i and v_i tend to have more sign changes.

The consequence of the last property is that the SVD provides us with basis vectors v_i for an expansion where each basis vector represents a certain "frequency," approximated by the number of times the entries in the vector change signs. Note that each vector v_i is reshaped into an m × n array V_i, in such a way that we can write the naive solution as

    Xnaive = Σ_{i=1}^{N} (u_i^T b / σ_i) V_i.

All the V_i arrays (except the first) have negative elements and therefore, strictly speaking, they are not images. Figure 1.4 shows images of some of the singular vectors V_i for the blur of Figure 1.2. We see that the spatial frequencies in V_i increase with the index i.
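Continuing the small example from the previous sketch (and reusing its U, V, s_vals, and b), the following lines compare the sizes of the expansion coefficients u_i^T b with the solution coefficients (u_i^T b)/σ_i; the latter grow as the singular values decay.

    coef = abs(U'*b);
    figure, semilogy(coef, '.'), hold on
    semilogy(coef ./ s_vals, 'o'), hold off
    legend('|u_i^T b|', '|u_i^T b| / \sigma_i')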

Figure 1.4. A few of the singular vectors for the blur of the pumpkin image in Figure 1.2. The "images" shown in this figure were obtained by reshaping the mn × 1 singular vectors v_i into m × n arrays.

When we encounter an expansion of the form Σ_i ξ_i v_i, such as in (1.6) and (1.7), then the ith expansion coefficient ξ_i measures the contribution of v_i to the result. And since each vector v_i can be associated with some "frequency," the ith coefficient measures the amount of information of that frequency in our image. Looking at the expression (1.7) for A^(-1) e, we see that the quantities u_i^T e / σ_i are the expansion coefficients for the basis vectors v_i. When these quantities are small in magnitude, the solution has very little contribution from v_i, but when we divide by a small singular value such as σ_N, we greatly magnify the corresponding error component, u_N^T e, which in turn contributes a large multiple of the high-frequency information contained in v_N to the computed solution. This is precisely why a naive reconstruction, such as the one in Figure 1.3, appears as a random image dominated by high frequencies.

Because of this, we might be better off leaving the high-frequency components out altogether, since they are dominated by error. For example, for some choice of k < N we can compute the truncated expansion

    x_k = Σ_{i=1}^{k} (u_i^T b / σ_i) v_i,

in which we have introduced the rank-k matrix

    A_k = Σ_{i=1}^{k} u_i σ_i v_i^T.

Figure 1.5 shows what happens when we replace A^(-1) b by x_k with k = 800. Clearly, though, this reconstruction is much better than the naive solution shown in Figure 1.3. We may wonder if a different value for k will produce a better reconstruction!

The truncated SVD expansion for x_k involves the computation of the SVD of the large N × N matrix A, and is therefore computationally feasible only if we can find fast algorithms to compute the decomposition.
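For the small test problem used in the sketches above (where the full SVD is affordable), the truncated expansion can be formed as follows; the truncation parameter k is an arbitrary choice for illustration.

    k  = 50;                                             % truncation parameter (assumed)
    xk = V(:, 1:k) * ((U(:, 1:k)'*b) ./ s_vals(1:k));    % keep only the first k terms
    Xk = reshape(xk, m, n);                              % back to image form
    figure, imagesc(Xk), axis image, colormap(gray)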

Figure 1.5. The reconstruction x_k obtained for the blur of the pumpkins of Figure 1.2 by using k = 800 (instead of the full k = N = 169744).

CHALLENGE 4. For the simple model B = Ac X Ar^T + E in Sections 1.2 and 1.3, let us introduce the two rank-k matrices (Ac)_k and (Ar)_k, defined similarly to A_k. Then for k ≤ min(m, n) we can define the reconstruction in which the inverses of Ac and Ar in the naive solution are replaced by the pseudoinverses of (Ac)_k and (Ar)_k (one way to organize this computation is sketched at the end of the chapter). Use this approach to deblur the image from Challenge 2. Can you find a value of k such that you can read the text?

VIP 2. We model the blurring of images as a linear process characterized by a blurring matrix A and an observed image B, which, in vector form, is b. The reason A^(-1) b cannot be used to deblur images is the amplification of high-frequency components of the noise in the data, caused by the inversion of very small singular values of A. Practical methods for image deblurring need to avoid this pitfall.

Before analyzing such ways to solve our problem, it may be helpful to have a brief tutorial on manipulating images in MATLAB, and we present that in the next chapter.
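Returning to Challenge 4, one way to organize the separable truncated-SVD reconstruction is sketched below. It assumes that the file challenge2.mat provides the blurred image B and the blurring matrices Ac and Ar under those names, and that the chosen k does not exceed min(m, n); the value used here is arbitrary.

    load challenge2.mat                       % assumed to contain B, Ac, Ar
    [Uc, Sc, Vc] = svd(Ac);   sc = diag(Sc);
    [Ur, Sr, Vr] = svd(Ar);   sr = diag(Sr);
    k  = 800;                                 % truncation parameter; adjust as needed
    Xk = Vc(:,1:k) * ((Uc(:,1:k)'*B*Ur(:,1:k)) ./ (sc(1:k)*sr(1:k)')) * Vr(:,1:k)';
    figure, imagesc(Xk), axis image, colormap(gray)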

Chapter 2
Manipulating Images in MATLAB

For the bureaucrat, the world is a mere object to be manipulated by him. —Karl Marx

We begin this chapter with a recap of how a digital image is stored, and then discuss how to read/load images, how to display them, how to perform arithmetic operations on them, and how to write/save images to files.

POINTER. We discuss in this chapter the following MATLAB commands:

MATLAB: colormap, double, image, imagesc, imfinfo, imformats, importdata, imread, imwrite, load, save
MATLAB IPT: imshow, mat2gray, rgb2gray

Recall that we use IPT to denote the MATLAB Image Processing Toolbox.

2.1 Image Basics

Images can be color, grayscale, or binary (0's and 1's), which, as shown in Chapter 1, can be thought of simply as two-dimensional arrays (or matrices), where each entry contains the intensity value of the corresponding pixel. Typical grayscales for intensity images can have integer values in the range [0, 255] or [0, 65535], where the lower bound, 0, is black, and the upper bound, 255 or 65535, is white. Color images can use different color models, such as RGB, HSV, and CMY/CMYK. MATLAB supports each of these formats, but we will be mainly concerned with grayscale intensity images. For our purposes, we will use the RGB (red, green, blue—the primary colors of light) format for color images.

POINTER. An alternative to the RGB format used in this book is CMY (cyan, magenta, and yellow—the subtractive primary colors), often used in the printing industry. Many ink jet printers, for example, use the CMYK system, a CMY cartridge and a black one. Another popular color format in image processing is HSV (hue, saturation, value).

2.2 Reading, Displaying, and Writing Images

Here we describe some basics of how to read and display images in MATLAB. The first thing we need is an image. Access to the IPT provides several images we can use; see

» help imdemos/Contents

for a full list. In addition, several images can also be downloaded from the book's website. For the examples in this chapter, we use pumpkins.tif and butterflies.tif from that website.

POINTER. As part of our software at the book's website, we provide a MATLAB demo script chapter2demo.m that performs a step-by-step walk-through of all the commands discussed in this chapter.

The command to read images in MATLAB is imread. The functions help or doc describe many ways to use imread; here are two simple examples:

» G = imread('pumpkins.tif');
» F = imread('butterflies.tif');

Now use the whos command to see what variables we have in our workspace. F is a three-dimensional array since it contains RGB information, whereas G is a two-dimensional array since it represents only the grayscale intensity values for each pixel. Notice that both F and G are arrays whose entries are uint8 values. This means the intensity values are integers in the range [0, 255]. Since many image processing operations require algebraic manipulation of the pixel values, it may be necessary to allow for noninteger values. MATLAB supports double precision floating point numbers in the interval [0, 1] for pixel values, so we will convert images to floating point before performing arithmetic operations on them.

The command imfinfo displays information about the image stored in a data file. For example,

» info = imfinfo('butterflies.tif')

shows that the image contained in the file butterflies.tif is an RGB image. Doing the same thing for pumpkins.tif, we see that this image is a grayscale intensity image.

There are three basic commands for displaying images: imshow, image, and imagesc. In general, imshow is preferred, since it renders images more accurately, especially in terms of size and color. However, imshow can only be used if the IPT is available.

Figure 2.1. Grayscale pumpkin image displayed by imshow.

We see in Figures 2.1 and 2.2 what happens with each of the following commands:

» figure, imshow(G)
» figure, image(G)
» figure, image(G), colormap(gray)
» figure, imagesc(G)
» figure, imagesc(G), colormap(gray)
» figure, imshow(F)
» figure, image(F)
» figure, image(F), colormap(gray)
» figure, imagesc(F)
» figure, imagesc(F), colormap(gray)

In this example, notice that an unexpected rendering may occur when using image and imagesc. This is especially true for grayscale intensity images, where image and imagesc display images using a false colormap, unless we explicitly specify gray using the colormap(gray) command. In addition, image does not always provide a proper scaling of the pixel values. Neither command sets the axis ratio such that the pixels are rendered as squares; this must be done explicitly by the axis image command. Only imshow displays the image with the correct color map and axis ratio. The tick marks and the numbers on the axes can be removed by the command axis off.

Thus, if the IPT is not available, then the commands image and imagesc can be used. In this case, we suggest using the imagesc command followed by the command axis image to get the proper aspect ratio.

To write an image to a file using any of the supported formats we can use the imwrite command. There are many ways to use this function, and the online help provides more information. Here we describe only two basic approaches, which will work for converting images from one data format to another, from TIFF to JPEG, for example.

Figure 2.2. Butterfly image displayed by imshow. Note that image and imagesc do not automatically set correct axes, and that the gray colormap is ignored for color images.

There are many types of image file formats that are used to store images. Currently, the most commonly used formats include

• GIF (Graphics Interchange Format)
• JPEG (Joint Photographic Experts Group)
• PNG (Portable Network Graphics)
• TIFF (Tagged Image File Format)

MATLAB can be used to read and write files with these and many other file formats; the MATLAB command imformats provides more information on the supported formats. Converting an image from one format to another can be done simply by using imread to read an image of one format and imwrite to write it to a file of another format. For example:

» G = imread('image.tif');
» imwrite(G, 'image.jpg');

Note also that MATLAB has its own data format, so images can also be stored using this "MAT-file" format. Image data may also be saved in a MAT-file using the save command. In this case, if we want to use the saved image in a subsequent MATLAB session, we simply use the load command to load the data into the workspace.

2.3 Performing Arithmetic on Images

We've learned the very basics of reading and writing, so now it's time to learn some basics of arithmetic.

POINTER. One important thing we must keep in mind is that most image processing software (this includes MATLAB) expects the pixel values (entries in the image arrays) to be in a fixed interval. Recall that typical grayscales for intensity images can have integer values from [0, 255] or [0, 65535], or floating point values in the interval [0, 1]. If, after performing some arithmetic operations, the pixel values fall outside these intervals, unexpected results can occur.
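As a small illustration of the preceding Pointer, the following sketch shows what can happen when arithmetic is applied directly to a uint8 image (in recent MATLAB versions integer arithmetic saturates rather than wrapping around or failing); the number added is arbitrary.

    G  = imread('pumpkins.tif');     % uint8 values in [0, 255]
    G2 = G + 100;                    % uint8 arithmetic saturates at 255
    max(G2(:))                       % many pixels are now clipped to 255
    imshow(G2)                       % the clipped regions appear pure white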

Our algorithms require many arithmetic operations, and standard arithmetic operations like +, −, *, and / do not always work for images. Recall that most images stored as TIFF, JPEG, etc., are either uint8 or uint16, and standard arithmetic operations may not work on these types of variables. To experiment with arithmetic, we first read in an image:

» G = imread('pumpkins.tif');

For example, in older versions of MATLAB (e.g., version 6.5), if we attempt the simple command

» G + 10;

then we get an error message. The + operator does not work for uint8 variables! To get around this problem, the IPT has functions such as imadd, imsubtract, immultiply, and imdivide that can be used specifically for image operations. If we are only doing one arithmetic operation, then this approach may be appropriate. However, for the algorithms discussed later in this book, we will not use these operations.

In working with grayscale intensity images, we want to be able to add, subtract, multiply, and divide images, but, since our goal is to operate on images with mathematical methods, we need to understand how to algebraically manipulate the images. Unfortunately, integer representation of images can be limiting. For example, if we multiply an image by a noninteger scalar, then the result contains entries that are nonintegers. Of course, we can easily convert these to integers by, say, rounding, but working in 8- or 16-bit arithmetic can lead to significant loss of information. Therefore, we adopt the convention of converting the initial image to double precision, operating upon it, and then converting back to the appropriate format when we are ready to display or write an image to a data file. For our purposes, the main conversion function we need is double. It is easy to use:

» Gd = double(G);

Use the whos command to see what variables are contained in the workspace. Notice that Gd requires significantly more memory than G, but now we are not restricted to working only with integers.

VIP 3. Before performing arithmetic operations on a grayscale intensity image, use the MATLAB command double to convert the pixel values to double precision.
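As a small illustration of VIP 3, the following sketch performs an arithmetic operation on the double precision image and displays the result; the particular operation (blending the image with its mirror image) is an arbitrary choice.

    Gd  = double(imread('pumpkins.tif'));       % convert to double first (VIP 3)
    Gd2 = (Gd + fliplr(Gd)) / 2;                % example operation: blend with mirror image
    figure, imagesc(Gd2), axis image, colormap(gray)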

In some cases, we may want to convert color images to grayscale intensity images. This can be done by using the command rgb2gray. It is not, in general, a good idea to change "true color" images to grayscale, since we lose information. Then if we plan to use arithmetic operations on these images, we need to convert to double precision. For example:

» F = imread('butterflies.tif');
» Fg = rgb2gray(F);
» Fd = double(Fg);

In any case, once the image is converted to a double precision array, we can use any of MATLAB's array operations on it. For example, to determine the size of the image and the range of its intensities, we can use

» size(Fd)
» max(Fd(:))
» min(Fd(:))

2.4 Displaying and Writing Revisited

Now that we can perform arithmetic on images, we need to be able to display our results. But try to display the image Gd using the two recommended commands:

» figure, imshow(Gd)
» figure, imagesc(Gd), axis image, colormap(gray)

and we observe that something unusual has occurred when using imshow, as shown in Figure 2.3. Note that Gd requires more storage space than G for the entries in the array, although the values are really the same—look at the values Gd(200, 200) and G(200, 200).

Figure 2.3. The "double precision" version of the pumpkin image displayed using imshow(Gd) (left) and imagesc(Gd) (right).

To understand the problem here, we need to understand how imshow works.

• When the input image has uint8 entries, imshow expects the values to be integers in the range 0 (black) to 255 (white).
• When the input image has uint16 entries, it expects the values to be in the range 0 (black) to 65535 (white).
• When the input image has double precision entries, it expects the values to be in the range 0 (black) to 1 (white).

If some entries are not in range, truncation is performed: entries less than 0 are set to 0 (black), and entries larger than the upper bound are set to the white value. Then the image is displayed. The array Gd has entries that range from 0 to 255, but they are double precision, so all entries greater than 1 are set to 1, resulting in an image that has only pure black and pure white pixels.

We can get around this in two ways. The first is to tell imshow that the max (white) and min (black) are different from 0 and 1 as follows:

» imshow(Gd, [0, 255])

Of course this means we need to know the max and min values in the array. If we say

» imshow(Gd, [])

then imshow finds the max and min values in the array, scales to [0, 1], and then displays.

The other way to fix this scaling problem is to rescale Gd into an array with entries in [0, 1]. This can be done as follows:

» Gds = mat2gray(Gd);
» imshow(Gds)

Probably the most common way we will use imshow is imshow(Gd, []), since it will give consistent results, even if the scaling is already in the interval [0, 1].

VIP 4. If the IPT is available, use the command imshow(G, []) to display image G. If the IPT is not available, use imagesc(G) followed by axis image, and, for grayscale intensity images, follow these commands by the command colormap(gray). The tick marks and numbers on the axes can be removed by axis off.

This scaling problem must also be considered when writing images to one of the supported file formats using imwrite. When using the imwrite command to save grayscale intensity images, we should first use the mat2gray command to scale the pixel values to the interval [0, 1]. For example, if X is a double precision array containing grayscale image data, then to save the image as a JPEG or PNG file, we could use

» imwrite(mat2gray(X), 'MyImage.jpg', 'Quality', 100)
» imwrite(mat2gray(X), 'MyImage.png', 'BitDepth', 16, 'SignificantBits', 16)

and then read the image back with imread. If we try this with the double precision pumpkin image, we see that the JPEG format saves the image using only 8 bits, while PNG uses 16 bits. If 16 bits is not enough accuracy, and we want to save our images with their full double precision values, then we can simply use the save command. The disadvantages are that the MAT-files are much larger, and they are not easily ported to other applications such as Java programs.

VIP 5. If the image array is stored as double precision, first use the mat2gray function to properly scale the pixel values to the interval [0, 1] before displaying the image or writing it to a file with imwrite.

POINTER. The importdata command can be very useful for reading images stored using less popular or more general file formats. For example, images stored using the Flexible Image Transport System (FITS), used by astronomers to archive their images, currently cannot be read using the imread command, but can be read using the importdata command.

CHALLENGE 5. Test your understanding of MATLAB's image processing commands. For the grayscale pumpkins.tif image, execute the following tasks:

1. Display the image in reverse color (negative): black for white and white for black (see the image on the left above).
2. Display the image in high contrast, replacing pixel values by either 0 or 1 (see the image on the right above).
3. Put the image in a softer focus (i.e., blur it) by replacing each pixel with the average of itself and its eight nearest neighbors (one possible approach is sketched below).

If you have access to the MATLAB IPT, perform these tasks:

4. For the color image butterflies.tif, display the R, G, and B images separately.
5. Create a grayscale version of the butterflies image by combining 40% of the red channel, 40% of the green channel, and 20% of the blue channel. Compare your grayscale image to what is obtained using rgb2gray.
6. Swap the colors: G for the R values, B for G values, and R for B values.
7. Blur the image by applying the averaging technique from task 3 to each of the three colors in the image.
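One possible way to attack task 3 (a sketch, not the only solution) is to use the core MATLAB command conv2, which also appears in the command list of the next chapter, with a 3-by-3 averaging kernel:

    G  = double(imread('pumpkins.tif'));
    K  = ones(3, 3) / 9;                        % 3-by-3 averaging kernel
    Gb = conv2(G, K, 'same');                   % each pixel replaced by a local average
    figure, imagesc(Gb), axis image, colormap(gray)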

Chapter 3
The Blurring Function

Can you keep the deep water still and clear, so it reflects without blurring? —Lao Tzu

Our main concern in this book is problems in which the significant distortion of the image comes from blurring. What we have in mind are blurred images where the blurring comes from some mechanical or physical process that can be described by a linear mathematical model. In this chapter we describe the components, such as blurring operators, noise, and boundary conditions, that make up the model of the image blurring process. These components provide relations between the original sharp scene and the recorded blurred image and thus provide the information needed to set up a precise mathematical model. This model is important, because it allows us to set up an equation whose solution, at least in principle, is the unblurred image. We must therefore understand how the blurring matrix A is constructed, and how we can exploit structure in the matrix when implementing image deblurring algorithms. The latter issues are addressed in the following chapter.

POINTER. In this chapter we discuss these MATLAB commands:

MATLAB: conv2, randn
MATLAB IPT: fspecial, imnoise
HNO FUNCTIONS: psfDefocus, psfGauss

HNO FUNCTIONS are written by the authors and can be found at the book's website.

3.1 Taking Bad Pictures

A picture can be considered "bad" for many reasons. Everyone who has taken a picture, digital or not, knows what a blurred image looks like—and how to produce a blurred image: we can, of course, defocus the camera's lens!

Blurring in images can arise from many sources, such as limitations of the optical system, camera and object motion, and environmental effects.

For many pictures, the blurring comes from the camera itself, more precisely from the optical system in the lens. No matter how hard we try to focus the camera, there are physical limitations in the construction of the lens that prevent us from producing an ideal sharp image. Some of these limitations are due to the fact that light with many different wavelengths (different colors) goes into the camera, and the exact path followed by the light through the lens depends on the wavelength. Cameras of high quality have lens systems that seek to compensate for this as much as possible. For many pictures, these limitations are not an issue. But in certain situations—such as microscopy—we need to take such imperfections into consideration.

Sometimes the blurring in an image comes from mechanisms outside the camera, and outside the control of the photographer. A good example is motion blur: the object moved during the time that the shutter was open, with the result that the object appears to be smeared in the recorded image. Obviously we obtain precisely the same effect if the camera moved while the shutter was open.

Yet another type of blurring also taking place outside the camera is due to variations in the air that affect the light coming into the camera. These variations are very often caused by turbulence in the air. You may have noticed how the light above a hot surface (e.g., a highway or desert) tends to flicker, due to the air's turbulence when it is heated. This flicker is due to small variations in the optical path followed by the light. The same type of atmospheric turbulence also affects images taken by astronomical telescopes. In spite of many telescopes being located at high altitudes, where air is thin and turbulence is less pronounced, this still causes some blur in the astronomical images. Precisely the same mechanism can blur images of the earth's surface taken from a satellite.

POINTER. Technical details about cameras, lenses, and CCDs can be found in many books about computer vision; see, for example, [15].

POINTER. Examples of blurring, and applications in which they arise, can be found in many places; see, for example, Andrews and Hunt [1], Bertero and Boccacci [3], Jain [31], Lagendijk and Biemond [37], and Roggemann and Welsh [49].

The above discussion illustrates just some of the many causes of blurred images, and many photo editing programs for digital image manipulation contain basic tools for enhancing the images, for example, by "sharpening" the contours in the image. Although these techniques can be useful for mild blurs, they cannot overcome severe blurring that can occur in many important applications. The aim of this book is to describe more sophisticated approaches that can be used for these difficult problems.

3.2 The Matrix in the Mathematical Model

As mentioned in Chapter 1, we take a model-based approach to image deblurring. That is, we assume that the blurring can be described by a mathematical model, and we use this model to reconstruct a sharper, visually more appealing image. Since the key ingredient is the blurring model, in this section we take a closer look at its formulation.

We recall that a grayscale image is just an array of dimension m x n whose elements represent the light intensity of the pixels. We also recall from Section 1.4 that we can arrange these pixels into a vector of length mn. When we refer to the recorded blurred image, we use the matrix notation B when referring to the image array, and b = vec(B) when referring to the vector representation. We can also imagine the existence of an exact image, which is the image we would record if the blurring and noise were not present. This image represents the ideal scene that we are trying to capture, and we refer to it as either the m x n array X or the vector x = vec(X). For simplicity, we make the assumption that this ideal image has the same dimensions as the recorded image. Throughout, we will think of the recorded image b as the blurred version of the ideal image x.

In the linear model, there exists a large matrix A of dimensions N x N, with N = mn, such that b and x are related by the equation Ax = b. The matrix A represents the blurring that is taking place in the process of going from the exact to the blurred image. At this stage we know there is such a matrix, but how do we get it? Imagine the following experiment. Suppose we take the exact image to be all black, except for a single bright pixel. If we take a picture of this image, then the blurring operation will cause the single bright pixel to be spread over its neighboring pixels, as illustrated in Figure 3.1. For obvious reasons, the single bright pixel is called a point source, and the function that describes the blurring and the resulting image of the point source is called the point spread function (PSF).

Mathematically, the point source is equivalent to defining an array of all zeros, except a single pixel whose value is 1. That is, we set x = e_i, the ith unit vector,¹ which consists of all zeros except the ith entry, which is 1. The process of taking a picture of this true image is equivalent to computing Ae_i = A(:,i) = column i of A. Clearly, if we repeat this process for all unit vectors e_i, for i = 1, ..., N, then in principle we have obtained complete information about the matrix A. In the next section, though, we explore alternatives to performing this meticulous task.

POINTER. Point sources and PSFs are often generated experimentally. What approximates a point source depends on the application. In microscopy, the point source is typically a fluorescent microsphere having a diameter that is about half the diffraction limit of the lens [11]; in atmospheric imaging, the point source can be a single bright star [25].

¹The use of e_i to denote the ith column of the identity matrix is common in the mathematical literature. It is, however, slightly inconsistent with the notation used in this book since it may be confused with the ith column of the error E. We have attempted to minimize such inconsistencies.

3.3 Obtaining the PSF

We can learn several important properties of the blurring process by looking at pictures of various PSFs. Consider, for example, Figure 3.2, which shows several PSFs in various locations within the image borders. Here the images are 120 x 120, and the unit vectors e_i used to construct these PSFs correspond to the indices i = 3500, 7150, and 12555.

In this example—and many others—the light intensity of the PSF is confined to a small area around the center of the PSF (the pixel location of the point source), and outside a certain radius the intensity is essentially zero. In our example, the PSF is zero 15 pixels from the center. In other words, the blurring is a local phenomenon. Furthermore, a careful examination reveals that the PSF is the same regardless of the location of the point source; when this is the case, we say that the blurring is spatially invariant. This is not always the case, but it happens so often that throughout the book we assume spatial invariance. As a consequence of this linear and local nature of the blurring, to conserve storage we can often represent the PSF using an array P of much smaller dimension than the blurred image. (The upper right image in Figure 3.2 has size 31 x 31.) We refer to P as the PSF array. Note that if we assume that the imaging process captures all light, then the pixel values in the PSF must sum to 1. We remark, though, that many of our deblurring algorithms require that the PSF array be the same size as the blurred image; in this case the small PSF array is embedded in a larger array of zeros. This process is often referred to as "zero padding" and will be discussed in further detail in Chapter 4.

It might seem that this is all we need to know about the blurring process, but in Sections 3.3 and 3.5 we demonstrate that because we can only see a finite region of a scene that extends forever in all directions, some information is lost in the construction of the matrix A. In the next chapter we demonstrate how our deblurring algorithms are affected by the treatment of these boundary conditions.

VIP 6. The blurring matrix A is determined from two ingredients: the PSF, which defines how each pixel is blurred, and the boundary conditions, which specify our assumptions on the scene just outside our image.

Figure 3.1. Left: a single bright pixel, called a point source. Right: the blurred point source, called a point spread function.

Figure 3.2. Top: the blurred image of a single pixel (left), and a zoom on the blurred spot (right). Bottom: two blurred images of single pixels near the edges.

In some cases the PSF can be described analytically, and thus P can be constructed from a function, rather than through experimentation. In other cases, knowledge of the physical process that causes the blur provides an explicit formulation of the PSF. When this is the case, the elements of the PSF array are given by a precise mathematical expression. Consider, for example, horizontal motion blur, which smears a point source into a line. If the line covers r pixels—over which the light is distributed—then the magnitude of each nonzero element in the PSF array is r^(-1). The same is true for vertical motion blur. An example of the PSF array for horizontal motion blur is shown in Figure 3.3.

For example, the elements p_ij of the PSF array for out-of-focus blur are given by

    p_ij = 1/(pi r^2) if (i - k)^2 + (j - l)^2 <= r^2, and p_ij = 0 otherwise,     (3.1)

where (k, l) is the center of P, and r is the radius of the blur. The PSF for blurring caused by atmospheric turbulence can be described as a two-dimensional Gaussian function [31, 49], and the elements of the unscaled PSF array are given by

    p_ij = exp( -(1/2) [ i - k , j - l ] [ s1^2  rho^2 ; rho^2  s2^2 ]^(-1) [ i - k ; j - l ] ),     (3.2)


where the parameters s1, s2, and rho determine the width and the orientation of the PSF, which is centered at element (k, l) in P. Note that one should always scale P such that its elements sum to 1. The Gaussian function decays exponentially away from the center, and it is reasonable to truncate the values in the PSF array when they have decayed, say, by a factor of 10^4 or 10^8.

Figure 3.3. Examples of four PSFs. In all four cases the center of the PSF coincides with the center of the PSF array.

The PSF of an astronomical telescope is often modeled by the so-called Moffat function [41], and for this PSF the elements of the unscaled PSF array are given by
(3.3)

Similar to the Gaussian PSF for atmospheric turbulence, the parameters s1, s2, and rho determine the width and the orientation of the PSF, and P should be scaled such that its elements sum to 1. The additional positive parameter beta controls the decay of the PSF, which is asymptotically slower than that of the Gaussian PSF. If rho = 0 in the formulas for the Gaussian blur and Moffat blur, then the PSFs are symmetric along the vertical and horizontal axes, and the formulas take the simpler forms


POINTER. MATLAB's IPT includes a function fspecial that computes PSF arrays for motion blur, out-of-focus blur, and Gaussian blur. The functions psfDefocus and psfGauss can also be found at the book's website.
and

If also s1 = s2, then the PSFs are rotationally symmetric.

VIP 7. The PSF array P is the image of a single white pixel, and its dimensions are usually much smaller than those of B and X. If the blurring is local and spatially invariant, then P contains all information about the blurring throughout the image.

Once the PSF array is specified, we can always construct the big blurring matrix A one column at a time by simply placing the elements of P in the appropriate positions, leaving zeros elsewhere in the column. In the next chapter, we shall see how the locality and the spatial invariance impose a special structure on the matrix A, which saves us this cumbersome work. If we want to compute the blurred image B one pixel at a time (given the sharp image X), then we need to compute

Hence we need to work with the rows of A—not the columns—to compute each pixel in the blurred image as a weighted sum (or average) of the corresponding element and its neighbors in the sharp image. The weights are the elements of the rows in A. Alternatively, we can use the fact that the weights are also given by the pixel values of the PSF array P, and the weighted sum operation is known in mathematics and image processing as a two-dimensional convolution.

CHALLENGE 6. Write a MATLAB function psfMoffat (similar to our functions psfDefocus and psfGauss) with the call
P = psfMoffat(dim, s, beta)

that computes the PSF array P for Moffat blur, using (3.3) for the case with s1 = s2 = s, rho = 0, and beta = beta. The PSF array should have dimensions dim(1) x dim(2), and the center of the PSF should be located at the center of the array.
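For readers who would like a template for this kind of analytic PSF constructor, the following sketch builds a rotationally symmetric Gaussian PSF array directly; the dimension and width used here are arbitrary example values, and the code is only an illustration of the pattern (it is not the book's psfGauss).

    % Minimal sketch of a rotationally symmetric Gaussian PSF array
    % (illustrative stand-in for psfGauss / fspecial('gaussian',...)).
    dim = 31;                        % size of the PSF array (assumed odd)
    s   = 4;                         % width parameter, s1 = s2 = s, rho = 0
    k   = (dim + 1)/2;               % center of the PSF array
    [J, I] = meshgrid(1:dim, 1:dim); % pixel coordinates (I = rows, J = columns)
    P = exp( -((I - k).^2 + (J - k).^2) / (2*s^2) );
    P = P / sum(P(:));               % scale so that the elements sum to 1
    center = [k, k];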

POINTER. MATLAB has a built-in function, conv2, that can be used to form the convolution of a PSF image and a true image—in other words, to artificially blur an image. The function is computationally efficient if the PSF image array has small dimensions, but for larger arrays it is better to use an approach based on the fast Fourier transform; see Chapter 4 for details.
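As a small illustration of this use of conv2, the sketch below blurs a test image with a Gaussian PSF; the file name and the PSF parameters are example choices, not values prescribed by the text.

    % Sketch: artificially blur an image with conv2 and a small Gaussian PSF.
    X = double(imread('pumpkins.tif'));     % any grayscale test image
    P = fspecial('gaussian', [31 31], 4);   % small Gaussian PSF (IPT)
    B = conv2(X, P, 'same');                % blurred image, same size as X
    imagesc(B), axis image, colormap gray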

28

Chapter 3. The Blurring Function

CHALLENGE 7. In this book the convolution with P is used as a mathematical model for the blurring in the picture. Explicit convolution with a (typically small) PSF array P can also be used as a computational device to filter an image, and this challenge illustrates three such applications. • Noise removal is achieved by averaging each pixel and its nearest neighbors (cf. Challenge 5), typically using one of the following 3 x 3 PSF arrays:

Each of these low-pass filters is normalized to reproduce a constant image. Load the images pumpkinsnoisy1.tif and pumpkinsnoisy2.tif and try to remove the noise by means of these filters. • Edge detection can be achieved by means of a high-pass filter that damps the low frequencies in the image while maintaining the high frequencies. Examples of such filters are the following 3 x 3 PSF arrays:

All three filters produce zero when applied to a constant image. Load the image pumpkins.tif and then try the three filters. • Edge enhancement is achieved by adding some amount of the high-pass filtered image to the original image, resulting in an image that appears "sharper." (This is not deblurring.) Test this approach on the two images pumpkinsblurred1.tif and pumpkinsblurred2.tif, using each of the three high-pass filters. Remember to use the same color axis on the original and the enhanced image. There are many other linear and nonlinear filters used in image processing, such as the median and Sobel filters; see, for example, [31] and the MATLAB IPT functions medfilt2 and fspecial.
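A short sketch of how such filtering is carried out with conv2 follows; the two 3 x 3 arrays used here are generic example choices (a simple averaging low-pass filter and a Laplacian-like high-pass filter) and need not coincide with the arrays displayed in the challenge.

    % Illustrative 3 x 3 filters applied with conv2; these particular arrays
    % are example choices and need not match the challenge's arrays.
    Plow  = ones(3)/9;                    % low-pass: reproduces a constant image
    Phigh = [0 -1 0; -1 4 -1; 0 -1 0];    % high-pass: zero on a constant image
    X  = double(imread('pumpkins.tif'));
    Xs = conv2(X, Plow,  'same');         % smoothed (noise-damped) image
    Xe = conv2(X, Phigh, 'same');         % edges emphasized
    Xsharp = X + Xe;                      % simple edge enhancement (not deblurring)
    imagesc(Xsharp), axis image, colormap gray
    caxis([min(X(:)) max(X(:))])          % same color axis as the original image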

3.4 Noise
In addition to blurring, observed images are usually contaminated with noise.² Noise can arise from several sources and can be linear, nonlinear, multiplicative, and additive. In this book we consider a common additive noise model that is used for CCD arrays; see, for example, [2, 55, 56]. In this model, noise comes essentially from the following three sources:

• Background photons, from both natural and artificial sources, cause noise to corrupt each pixel value measured by the CCD array. This kind of noise is typically modeled by a Poisson process, with a fixed Poisson parameter, and is thus often referred to as Poisson noise; it can take only positive values. In MATLAB, Poisson noise can be generated with the function imnoise from the IPT, or with the function poissrnd from the Statistics Toolbox.

• The CCD electronics and the analog-to-digital conversion of measured voltages result in readout noise. Readout noise is usually assumed to consist of independent and identically distributed random values.³ The noise is further assumed to be drawn from a Gaussian (i.e., normal) distribution with mean 0 and a fixed standard deviation proportional to the amplitude of the noise. Such random errors are often called Gaussian white noise. In MATLAB, Gaussian white noise can be generated using the built-in function randn; for example, Gaussian white noise with standard deviation 0.01 is generated with the command E = 0.01*randn(m,n).

• The analog-to-digital conversion also results in quantization error, when the signal is represented by a finite (small) number of bits. Quantization error can be approximated by uniformly distributed white noise whose standard deviation is inversely proportional to the number of bits used. In MATLAB, uniformly distributed white noise with standard deviation 1 is generated with the command sqrt(3)*(2*rand(m,n)-1), so we can use 0.01*sqrt(3)*(2*rand(m,n)-1) to obtain uniformly distributed white noise with standard deviation 0.01.

In some cases it is possible to assume that the Poisson noise can be approximated well by Gaussian white noise [2], and that the quantization noise is negligible. We can thus generate fairly accurate noise models by simply using MATLAB's built-in randn function. In our linear algebra notation, we can describe the inclusion of additive noise as adding to the noise-free blurred image B an m x n array E containing, for example, elements from a Poisson or Gaussian distribution (or a sum of both).

²Images can be corrupted by other defects. For example, "bad pixels" occur if the CCD array used to collect the image has broken elements. We do not consider such errors in this book.

³The term "white" is used because these random errors have certain spectral similarities with white light.
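Putting the commands above together, a small sketch of generating the different kinds of noise for an m x n blurred image B might look as follows; the standard deviations used are arbitrary example values, and poissrnd requires the Statistics Toolbox.

    % Sketch: additive noise for an m x n blurred image B.
    [m, n]  = size(B);
    E_gauss = 0.01*randn(m, n);                 % Gaussian white noise
    E_unif  = 0.01*sqrt(3)*(2*rand(m, n) - 1);  % uniform (quantization-like) noise
    B_pois  = poissrnd(B);                      % Poisson noise (Statistics Toolbox)
    B_noisy = B + E_gauss;                      % additive Gaussian noise model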

3.5 Boundary Conditions

The discussion of the PSF in Section 3.3 reveals a potential difficulty at the boundaries of the image. Consider a pixel in a blurred image, say b_ij, that is near the edge of the picture. Recall that b_ij is obtained from a weighted sum of pixel x_ij and its neighbors, some of which may be outside the field of view, so that scene values outside the image affect what is recorded. When we described how the columns of A could be constructed in Section 3.2, we ignored this behavior at the boundary, where information from the exact image "spills over the edge" of the recorded blurred image; see Figure 3.4. Clearly we lose some information that cannot be recovered. However, a good model for image deblurring must take account of these boundary effects—otherwise the reconstruction will likely contain some unwanted artifacts near the boundary. Such artifacts can easily be seen in the reconstructed pumpkin image in Figure 1.5.

Figure 3.4. The PSF "spills over the edge" at the image boundary (the yellow line).

The most common technique for dealing with this missing information at the boundary is to make certain assumptions about the behavior of the sharp image outside the boundary. There are many other techniques, such as those based on extrapolation (e.g., via a statistical analysis of the image), which are beyond the scope of our model. When these assumptions are used in the blurring model, we say that we impose boundary conditions on the reconstruction. Our boundary conditions come in different forms; we shall describe some boundary conditions that can be expressed in our language of matrix computations.

The simplest boundary condition is to assume that the exact image is black (i.e., consists of zeros) outside the boundary. If we ignore boundary conditions in creating the blurring matrix A (as in Section 3.2), we implicitly assume zero boundary conditions. This zero boundary condition can be pictured as embedding the image X in a larger image:

(3.4)

where the 0 submatrices represent a border of zero elements. The zero boundary condition is a good choice when the exact image is mostly zero outside the boundary—as is the case for many astronomical images with a black background. Unfortunately, the zero boundary condition has a bad effect on reconstructions of images that are nonzero outside the border. Sometimes we merely get an artificial black border, caused by a large difference in pixel values inside and outside of the border; at other times we compute a reconstructed image with severe "ringing" near the boundary. Hence we must often use other boundary conditions that impose a more realistic model of the behavior of the image at the boundary but only make use of the information available, i.e., the image within the boundaries.

The periodic boundary condition is frequently used in image processing. This implies that the image repeats itself (endlessly) in all directions.

Again we can picture this boundary condition as embedding the image X in a larger image that consists of replicas of X:

(3.5)

In some applications it is reasonable to use a reflexive boundary condition, which implies that the scene outside the image boundaries is a mirror image of the scene inside the image boundaries. We illustrate this approach with a 3 x 3 example. Introducing (with MATLAB notation) the three additional images, we can picture the reflexive boundary condition as embedding the image X in the following larger image:

(3.6)

Which boundary condition to use is often dictated by the particular application.

VIP 8. Boundary conditions specify our assumptions on the behavior of the scene outside the boundaries of the given image. Ignoring boundary conditions is equivalent to assuming zero boundary conditions. In order to obtain a high-quality deblurred image we must choose the boundary conditions appropriately.
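The three embeddings just described can be generated with the IPT function padarray; the sketch below is only an illustration, and the border width k is an arbitrary example value.

    % Sketch: extended images for the three boundary conditions (IPT padarray).
    k = 16;                                              % example border width
    Xzero     = padarray(X, [k k], 0, 'both');           % zero boundary condition
    Xperiodic = padarray(X, [k k], 'circular', 'both');  % periodic boundary condition
    Xreflex   = padarray(X, [k k], 'symmetric', 'both'); % reflexive boundary condition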

CHALLENGE 8. Let X be the iogray.tif image shown above, of size 512 x 512. Construct the three versions of the extended image Xext corresponding to the three types of boundary conditions, and blur each image with the Gaussian (atmospheric turbulence) blur with s1 = s2 = 15 and rho = 0, using a small PSF array P of size 32 x 32. To compute the PSF array, you can use the call P = psfGauss(32, 15). To perform the blurring, you can use the call Bext = conv2(Xext, P, 'same'). Finally extract as B the center part of Bext such that B corresponds to X. Which boundary condition provides the best deblurring model, i.e., gives the smallest amount of artifacts at the borders of the image B?

Chapter 4
Structured Matrix Computations

The structure will automatically provide the pattern for the action which follows.
~ Donald Curtis

Now that we understand the basic components (i.e., PSF, boundary conditions, and noise) of the image blurring model, defined by the linear model, we are in a position to provide an explicit description of the blurring matrix A. The deblurring algorithms in this book use certain orthogonal or unitary decompositions of the matrix A. We have already encountered the SVD,

(4.1)

where U and V are orthogonal matrices and Σ is a diagonal matrix. Another useful decomposition is the (unitary) spectral decomposition,

(4.2)

where U is a unitary matrix,⁴ U* = conj(U)^T is the complex conjugate transpose of U, and Λ is a diagonal matrix containing the eigenvalues of A. While it is possible to compute an SVD for any matrix, the spectral decomposition can be computed if and only if A is a normal matrix,⁵ that is, A*A = AA*. Note also that if A has real entries, then the elements in the matrices of the SVD will be real, but the elements in the spectral decomposition may be complex; when A is real, its eigenvalues (the diagonal elements of Λ) are either real or appear in complex conjugate pairs. If the matrix is not normal but there are N linearly independent eigenvectors, then it is possible to compute the eigendecomposition A = U Λ U^(-1), where U is an invertible (but not unitary) matrix containing the N linearly independent eigenvectors of A. In this book we consider only the orthogonal and unitary decompositions given by (4.1) and (4.2).

⁴A matrix is unitary if U*U = UU* = I.
⁵That is, A*A = AA*.

Although it may be prohibitively expensive to compute these factorizations for large.1 Basic Structures For spatially invariant image deblurring problems.1. and U that make up the SVD and spectral decomposition. but this does not necessarily mean that there is similar exploitable structure in the matrices U. The extension to two-dimensional problems is straightforward and is given in Section 4. and Hankel matrices. and may be found in Appendix 3 and at the book's website. A similar analysis can be done using the spectral decomposition.1 One-Dimensional Problems Recall that by convolving a PSF with a true image. The purpose of this chapter is to describe various kinds of structured matrices that arise in certain image deblurring problems.2. and then "shift" to obtain p(s — t). V.34 Chapter 4.1. the specific structure of the matrix A depends on the imposed boundary conditions. MATLAB fft2 ifft2 circshift svds svd randn MATLAB IPT dct2 idct2 HNO FUNCTIONS dcts2 idcts2 dctshift kronDecomp padPSF In Chapter 1 we used the SVD to briefly investigate the sensitivity of the image deblurring problem. Convolution is a mathematical operation that can be described as follows. and to show how to efficiently compute the SVD or spectral decomposition of such matrices. 4. 4. if A is normal. HNO FUNCTIONS refer to M-files written by the authors. we obtain a blurred image. then the convolution of p and x is a function b having the form That is. in Section 4. each value of the function b(s) is essentially a weighted average of the values of x(t). If p(s) and x (s) are continuous functions. we must first "flip" the function p(t) to obtain /?(—/). and can involve Toeplitz. In order to perform the integration.1. By efficient we mean that the decomposition can be computed quickly and that storage requirements remain manageable. how these structures arise for one-dimensional problems. efficient approaches exist for certain structured matrices. circulant.1. Structured matrices can often be uniquely represented by a small number of entries. Structured Matrix Computations POINTER. . The following functions are used to implement some of the structured matrix computations discussed in this chapter. where the weights are given by the function p. To simplify the notation. generic matrices. we first describe.

That is. the entry corresponding to a shift of zero) over the .'th entry in x.. It is perhaps easier to illustrate the discrete operation for one-dimensional convolution using a small example. by the one-dimensional arrays where if. p. and sum to get the j'th entry in b.e. and >'. Suppose a true image scene and PSF array are given.3) . (4. convolution can be written as a matrix-vector multiplication. is the center of the PSF array. b. we have Thus. of x and p. pixels of the blurred image are obtained from a weighted sum of the corresponding pixel and its neighbors in the true image. flip) the PSF array.. respectively. can be summarized as follows: • Rotate (i. shifting) the center of the PSF array (i. • Multiply corresponding components.e. by 180 degrees. assuming that p?.1.4. • Match coefficients of the rotated PSF array with those in x by placing (i. The weights are given by the elements in the PSF array. Basic Structures 35 The discrete version of convolution is a summation over a finite number of terms. The basic idea of computing the convolution. represent pixels in the original scene that are actually outside the field of view.. For example.e. writing its elements from bottom to top.

• Periodic Boundary Conditions.4) A matrix whose entries are constant on each diagonal. Since we do not know these values. is called a circulant matrix. such as in (4. vi — x\. and (4.6) . and v(.Thus (4. such as in (4.36 Chapter 4. u>2 = x\. In this case we assume that the true scene is comprised of periodic copies of x. • Zero Boundary Conditions. • Reflexive Boundary Conditions. In this case.5 we have the following. In this case we assume that the true scene immediately outside the field of view is a mirror reflection of the scene within the field of view. and thus (4. within the field of view. wi — x$. and V2 = *2.5) A Toeplitz matrix in which each row (and column) is a periodic shift of its previous row (column). so w\ = X4.5). is called a Toeplitz matrix with parameters p. y\ = x$. boundary conditions are used to relate them to the pixels jc.3) can be rewritten as (4.4).3) can be rewritten as (4. Using the various boundary conditions discussed in Section 3. Thus w\ = x?. w/ — v. and 3/2 = *4. = 0. Structured Matrix Computations It is important to keep in mind that the values it>.contribute to the observed pixels in the blurred image even though they are outside the field of view.3) can be rewritten as (4.
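To make the one-dimensional discussion concrete, the following sketch builds the Toeplitz matrix corresponding to zero boundary conditions from a short PSF and checks it against MATLAB's built-in conv; the vectors used are arbitrary examples, not the ones from the text.

    % Sketch: 1D convolution as multiplication with a Toeplitz matrix
    % (zero boundary conditions); x and p are arbitrary example vectors.
    x = [1 2 3 4]';                       % true signal
    p = [0.2 0.5 0.3]';                   % 1D PSF, center at the 2nd entry
    n = length(x);
    col = [p(2); p(3); zeros(n-2,1)];     % first column of A
    row = [p(2), p(1), zeros(1,n-2)];     % first row of A
    A  = toeplitz(col, row);              % Toeplitz blurring matrix
    b1 = A*x;                             % blurred signal via the matrix
    b2 = conv(x, p, 'same');              % same result from built-in conv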

so the matrix given in (4. if we assume zero boundary conditions. For example. j } pixel in X. we must make use of the boundary conditions.1. Corresponding components are multiplied.7) has a block Toeplitz structure (as indicated by the lines). Then the element ^22 in the center of B is given by For all other elements in this 3 x 3 example. the PSF array is rotated by 180 degrees and matched with pixels in the true scene.4. let with p22 the center of the PSF array. In particular. In particular. 4.2 Two-Dimensional Problems The convolution operation for two-dimensional images is very similar to the one-dimensional case. Basic Structures 37 A matrix whose entries are constant on each antidiagonal is called a Hankel matrix. X. then the element bi\ at the border of B is given by By carrying out this exercise for all the elements of B.7) The matrix in (4. Similar block-structured matrices that arise in image deblurring include . it is straightforward to show that for zero boundary conditions.1. j) pixel of the convolved image. by placing the center of the rotated PSF array over the (/. and each block is itself a Toeplitz matrix.6) is a Toeplitz-plus-Hankel matrix. We call such a matrix block Toeplitz with Toeplitz blocks (BTTB). and the results summed to compute bjj. b = vec(B) and x = vec(X) are related by (4. B. to compute the (/.

With this notation.3 Separable Two-Dimensional Blurs In some cases the horizontal and vertical components of the blur can be separated. as in our example in Section 1. 4. BHTB: Block Hankel with Toeplitz blocks. In this case. we can precisely describe the structure of the coefficient matrix A for the various boundary conditions. Xj r . BTHB.2.3). • Zero Boundary Conditions. The special structure for this blur implies that P is a rank-one matrix with elements given by If we insert this relation into the expression in (4.e. blur across the rows of the image array). In this case. respectively. then the m x n PSF array P can be decomposed as where r represents the horizontal component of the blur (i. Using the notation from Section 3.5..1.7). we see that the coefficient matrix takes the form . • Reflexive Boundary Conditions. • Periodic Boundary Conditions. BHHB: Block Hankel with Hankel blocks. Structured Matrix Computations BCCB: Block circulant with circulant blocks.38 Chapter 4.. BTHB: Block Toeplitz with Hankel blocks. A is a BTTB matrix as demonstrated above. If this is the case. A is a sum of BTTB.2). and BHHB matrices (as explained in Section 4. X ud . In this case. and c represents the vertical component (i. and X x . each of these matrices takes into account contributions from X. blur across the columns of the image). BHTB.e. A is a BCCB matrix (we will discuss this in Section 4.

the result is that the blurred image B can be obtained by first convolving each column of X with c and then convolving each of the resulting rows with r. and Ac is a Toeplitz matrix with parameters c. using (4. respectively. for example. In this case.4).1. . POINTER. • Periodic Boundary Conditions..1.6). Ar is a Toeplitz-plus-Hankel matrix with parameters r. by (4.4. in [59J. • Reflexive Boundary Conditions. and they represent the one-dimensional convolutions with the rows and columns. In this case. is called a Kronecker product. In this case. This special structure.4. Basic Structures 39 In general—also for other boundary conditions—the coefficient matrix A for separable blur has block structure of the form where Ac is an m x m matrix. The matrices Ar and Ac have parameters r and c. and Ac is a circulant matrix with parameters c. As we shall see in Section 4. Hence they inherit the structures described in Section 4. Ar is a circulant matrix with parameters r. using (4. and Ac is a Toeplitz-plus-Hankel matrix with parameters c.5). and Ar is an n x n matrix with entries denoted by of. and the symbol ® that defines the operation that combines Ar and Ac in this way.1. Here we list some of the most important properties of Kronecker products: a more extensive list can be found. Ar is a Toeplitz matrix with parameters r. In particular: • Zero Boundary Conditions.
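One Kronecker product property that is used repeatedly later in this chapter is that multiplication by Ar ⊗ Ac acts on the image array as a matrix-matrix product with the small factors. A minimal numerical check of this identity, with arbitrary small random matrices, looks as follows.

    % Sketch: verify (Ar kron Ac)*vec(X) = vec(Ac*X*Ar') on random data.
    m = 3;  n = 4;
    Ac = randn(m, m);  Ar = randn(n, n);  X = randn(m, n);
    lhs = kron(Ar, Ac) * X(:);            % big matrix times the stacked image
    rhs = Ac * X * Ar';                   % equivalent small matrix-matrix product
    err = norm(lhs - rhs(:))              % should be at the level of roundoff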

As an illustration. . VIP 9. A. consider again a 3 x 3 image.2 BCCB Matrices In this section we consider BCCB matrices that arise in spatially invariant image deblurring when periodic boundary conditions are employed. In the following sections we see that it is possible to efficiently compute the SVD or spectral decompositions of some of these matrices. element b^\. described in this section. Structured Matrix Computations A summary of the important matrix structures described in this section is given in VIP 9. or reflexive). periodic. is given by It then follows that the linear model now takes the form (4.40 Chapter 4. and how they correspond to the type of PSF (separable or nonseparable) and the type of imposed boundary conditions (zero. In particular. Boundary condition Zero Periodic Reflexive Nonseparable PSF BTTB BCCB BTTB+BTHB +BHTB+BHHB Separable PSF Kronecker product of Toeplitz matrices Kronecker product of circulant matrices Kronecker product of Toeplitz-plus-Hankel matrices 4. we see that the matrix A does not need to be constructed explicitly. Assuming periodic boundary conditions. for example. The following table summarizes the important structures of the blurring matrix. Instead we need only work directly with the PSF array.8) and the matrix is BCCB.

Thus. f f t 2 implements matrix-vector multiplication with \/WF. It is well known that any BCCB matrix has the particular spectral decomposition where F is the two-dimensional unitary discrete Fourier transform (DFT) matrix. N = mn. Thus A has a unitary spectral decomposition. so it is difficult to give a precise cost for the computations of f f t2 ( X ) and if f t2 ( X ) . if then Although this may seem a bit unnatural from a linear algebraic point of view.1 Spectral Decomposition of a BCCB Matrix It is not difficult to show that any BCCB matrix is normal. and i f f t2 implements multiplication with 77?? F*. without constructing F explicitly. the scaling usually takes care of itself since an operation such as if f t2 (f f t2 (•) ) is an identity operation. However. the vector x must first be reshaped. that is. BCCB Matrices 41 4. In general the speed depends on the size of X. where N is the dimension of A (i. the cost is O(N log.2. it is very natural for images. using fast Fourier transforms (FFTs). where the images are m x n arrays of pixel values). if A' = mn = 2k. For example. In particular. In MATLAB. • f f t2 and if f t2 act on two-dimensional arrays. A'). rather than the stacked vector representation given by x. and if f t2 can be used for multiplications with F*: • Implementations of f f t2 and if f t2 involve a scaling factor that depends on the problem's dimensions.2. .e. just as is F*(F(-)). This matrix has a very convenient property: a divide-and-conquer approach can be used to perform matrix-vector multiplications with F and F*.4. they are most efficient if the dimensions have only small prime factors. In particular. We can perform the product of the matrix with the image data X. to form the multiplication Fx.. A* A = A A*. the function f f t2 can be used for matrix-vector multiplications with F. FFT algorithms are fairly complicated.

That is. We thus have the unitary matrix. P (in this case. An FFT is an efficient algorithm to compute this matrix-vector multiplication. Jain [31].8).42 Chapter 4. See also http : //www. scaled by the square root of the dimension. to compute the eigenvalues of A. then . and Van Loan [59]. given explicitly by (4. The inverse DFT of the vector x is the vector x. if center is a 1 x 2 array containing the row and column indices of the center of the PSF array. POINTER. org. followed by a DFT of the rows. In particular. where A two-dimensional DFT of an array X can be obtained by computing the DFT of the columns of X. Denote the first column of A by a i and the first column of F by f i and notice that where A. but how can we compute the eigenvalues? Tt turns out that the first column of F is the vector of all ones. 2 ] ) . is a vector containing the eigenvalues of A. then the (unitary) DFT of x is the vector x whose kth component is where i — -J— \. A similar approach is used for the inverse two-dimensional DFT. we need only multiply the matrix VWF by the first column of A. In our 3 x 3 example. If x is a vector of length n with components Xj. the eigenvalues can be computed by applying f f t2 to an array containing the first column of A. center = [2 . Good references on the FFT and its properties include Davis [8]. as well as MATLAB's documentation on f ft2 and if f t2. Structured Matrix Computations POINTER. Observe that the DFT and inverse DFT computations can be written as matrix-vector multiplication operations. Thus. the resulting array is Note that this array can be obtained by swapping certain quadrants of entries in the PSF array: The built-in MATLAB function circshif t can be used for this purpose. f f t w .

However. is used because f f t2 and if f t2 involve computations with complex numbers. 1 . Since the entries in the PSF and X are real. the spectrum of a BCCB A. Filtering methods that use the FFT will be discussed in Chapter 5. x could be computed using the statements S = fft2 ( circshift(P.e.* fft2(X) ). BCCB Matrices 43 circshift(P. we can efficiently perform many matrix computations with BCCB matrices. X = ifft2( fft2(B) . the computed B may contain small complex parts.center) ) .2. Then it is easy to verify that A"1 = F*A~'F (by checking that AA~' = I). using the MATLAB statement S = fft2 ( circshift(P. Now we know that a spectral decomposition of any BCCB matrix is efficient with respect to both time and storage. especially if b (i.center) )./ S ). defined by a PSF array.2 Computations with BCCB Matrices Once we know the spectral decomposition. N — mn). xnaive = A ! b is likely to be a very poor approximation of the true solution. which are removed by the statement B = real ( B ) . so we simply write Thus.. can be computed using the MATLAB statements S = fft2( circshift(P. The last statement. For example.2. 4. due to roundoff errors. the result B should contain only real entries.center) ) . B) contains noise (recall the simple examples in Chapter 1).center). since the matrix F is not computed or stored. without explicitly forming A. Thus. performs the necessary shift. 1 .4. If the PSF array is m x n (and hence A is N x N. then using the FFT to compute the eigenvalues is extremely fast compared to standard approaches (such as MATLAB's eig function) which would cost O(N3) operations. B = real(B). It is important to emphasize that because A is severely ill-conditioned. 1 . assuming A~' exists. X = real(X). the naive solution. 1 . B = ifft2( S . Suppose we want to solve the linear system b = Ax. B = real ( B ) . can be computed very efficiently. .
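As a compact illustration of the statements above, the following sketch blurs an image and then forms the naive solution entirely with FFTs; here P, center, and X are assumed to be a PSF array, its center, and a true image of the same size as P.

    % Sketch: BCCB (periodic boundary condition) computations done with FFTs.
    S = fft2( circshift(P, 1 - center) );    % eigenvalues of the BCCB matrix A
    B = real( ifft2( S .* fft2(X) ) );       % blurred image (A times x, reshaped)
    Xnaive = real( ifft2( fft2(B) ./ S ) );  % naive solution, noise-free case only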

VIP 10. When using periodic boundary conditions, basic computations with A can be performed using P, without ever constructing A.

• Given: P = PSF array, center = [row, col] = center of PSF, X = true image, B = blurred image.
• To compute eigenvalues of A, use
    S = fft2( circshift(P, 1 - center) );
• To compute the blurred image from the true image, use
    B = real( ifft2( S .* fft2(X) ) );
• To compute the naive solution from the blurred image, use
    X = real( ifft2( fft2(B) ./ S ) );

4.3 BTTB + BTHB + BHTB + BHHB Matrices

In this section we consider the case of using reflexive boundary conditions, so that A is a sum of BTTB, BTHB, BHTB, and BHHB matrices.

Figure 4.1. An artificially created PSF that satisfies the double symmetry condition in (4.10).

then we say that the PSF satisfies a double symmetry condition. In this case. it can be shown that A is normal. the Gaussian model for atmospheric turbulence blur.3. and there are fast algorithms for computing matrix-vector multiplications with C and CT. P. and that it has the real spectral decomposition where C is the orthogonal two-dimensional discrete cosine transformation (DCT) matrix [44].9) where P is (2k — 1) x (2k — 1) with center located at the (k. If (4. BTTB + BTHB + BHTB + BHHB Matrices 45 This seems fairly complicated. it is highly structured. This symmetry condition does occur in practice.1 shows an example of a doubly symmetric PSF. the matrix is block symmetric. The cost and storage requirements are of the same order as those of f f t2 and if f t 2. for example. In MATLAB. Figure 4. the arrays are doubly symmetric. suppose that the m x n PSF array. but the matrix has a simple structure if the nonzero part of the PSF satisfies a double symmetry condition. In addition. and where the zero blocks may have different (but consistent) dimensions.10) where f liplr and f lipud are MATLAB functions that flip the columns (rows) of an array in a left-right (up-down) direction. . The result is that the matrix A is symmetric. k) entry. dct2 and idct2 can be used for these computations. has the form (4. but savings can be achieved by the use of real arithmetic rather than complex.4. For example. and each block is itself symmetric. The matrix C is very similar to F. In particular.

. where where a>k is defined above. Thus. suppose the PSF array is given by If reflexive boundary conditions are used. Structured Matrix Computations POINTER. followed by a DCT of the rows. to compute the eigenvalues. then the (orthogonal) DCT of x is the vector x. The first column can be found by basic manipulation of the PSF array. A two-dimensional DCT of an array X can be obtained by computing the DCT of the columns of X. If x is a vector of length n with components jc/. . where where a>i — ^/l/n and co^ — ^2/n. then the first column of A (which is a 25 x 25 BTTB + BTHB + BHTB + BHHB matrix) can be represented by the array . . k = 2. the eigenvalues are easily computed: where a\ is the first column of A and c. As with the BCCB case. i is an element of C. The inverse DCT of the vector x is the vector x. A similar approach is used for the inverse two-dimensional DCT. n.46 Chapter 4. we need only construct the first column of A and use the dct 2 function. For example. .

As with BCCB matrices. provided it has the double symmetry structure about the center defined by (4. the second for pixels to the left. In general. see the functions dcts. In general. If this is not available. it is a simple matter to compute its spectrum using the dct2 function: el = zeros(size(P)). el (1. with the center located at the (k. we just need to know the pixel location of the center of the PSF. for example. k) entry.3.10). we can efficiently perform matrix computations with these doubly symmetric BTTB + BTHB + BHTB + BHHB matrices. BTTB + BTHB + BHTB + BHHB Matrices 47 POINTER. Once the first column of A is known.9) and (4. The MATLAB functions dct2 and idct2 are only included with the IPT. idcts. center) ) . The first term accounts for pixels within the frame of the image. e\. we can construct an array containing the first column of A directly from the PSF array as follows: • Suppose the PSF array. once we know how to compute the spectral decomposition. P. A MATLAB function implementing this process.4. The results are summarized in VIP 11.1) = 1. is (2k — 1) x (2k — 1). can be found in Appendix 3 (there is no analogous built-in function). and idcts2 in Appendix 3. . The relationship between the DCT and FFT can be found. in Jain [31J and Van Loan [591. it is not too difficult to write similar functions that use f f t2 and if f t2. • Define the shift matrices Z] and Z2 using the MATLAB functions diag and ones (illustrated for k = 3): • Then an array containing the first column of the corresponding blurring matrix can be constructed as It is not difficult to generalize this to the case when P has arbitrary dimension and center. and the last for pixels above and to the left./ dct2( el ) . where we use el to denote the array version of the first unit vector. dcts2. dct shift. the third for pixels above. S = dct2( dctshift(P.
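Collecting these statements into one runnable sketch for the reflexive, doubly symmetric case: it assumes a doubly symmetric PSF array P with center [row, col] and an image X of the same size, and it uses the IPT functions dct2/idct2 together with the book's dctshift.

    % Sketch: reflexive boundary conditions with a doubly symmetric PSF.
    e1 = zeros(size(P));  e1(1,1) = 1;
    S = dct2( dctshift(P, center) ) ./ dct2(e1);   % eigenvalues of A
    B = idct2( S .* dct2(X) );                     % blurred image
    Xnaive = idct2( dct2(B) ./ S );                % naive solution (noise-free case)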

VIP 11. When using reflexive boundary conditions and a doubly symmetric PSF, basic computations with A can be performed using P, without ever constructing A.

• Given: P = PSF array, center = [row, col] = center of PSF, X = true image, B = blurred image.
• To compute eigenvalues of A, use
    el = zeros(size(P)); el(1,1) = 1;
    S = dct2( dctshift(P, center) ) ./ dct2(el);
• To compute the blurred image from the true image, use
    B = idct2( S .* dct2(X) );
• To compute the naive solution from the blurred image, use
    X = idct2( dct2(B) ./ S );
• If the IPT is not available, then use dcts2 and idcts2 in place of dct2 and idct2. Implementations of dcts2 and idcts2 may be obtained from the book's website.

4.4 Kronecker Product Matrices

In this section we consider blurring matrices, A, that can be represented as a Kronecker product

(4.11)

where, if the images X and B have m x n pixels, then Ar is n x n and Ac is m x m. In Section 4.4.1 we show that it is possible to recognize if such a representation exists, and how to construct the smaller matrices Ar and Ac directly from the PSF array. We then discuss how to efficiently implement important matrix computations with Kronecker products in Section 4.4.2.

4.4.1 Constructing the Kronecker Product from the PSF

Suppose the PSF array, P, is an m x n array, and that it can be represented as an outer product of two vectors (i.e., the PSF is separable),

where To explicitly construct Ar and Ac from the PSF array. we can do this efficiently with the built-in svds function: [u. then A = A r <g) Ac is mn x mn.13) exploited in Section 1. r = sqrt(s)*v. 1) .2.2 Matrix Computations with Kronecker Products Kronecker products have many important properties that can be exploited when performing matrix computations.12) is equivalent to the matrix-matrix relation (4.3.1. if A r and Ac are nonsingular. then the solution of (Ar ® A c )x = b can be represented as and easily computed in MATLAB using the backslash and forward slash operators: . and the matrix-vector relation (4. A — A r <8> A c . of P. Similarly. we need to be able to find the vectors r and c. see the function kronDecomp in Appendix 3. and corresponding singular vectors. if Ar is n x n and Ac is m x ra. A. For example.4. Because the dimensions of Ar and Ac are relatively small (on the order of the image pixel dimensions). P. For example.11) using the matrices Ac and Ar defined in Section 1. if we use a 3 x 3 PSF array.4. for zero boundary conditions. v] = svds ( P . with center (2. it is possible to construct the matrices explicitly. c = sqrt(s)*u. can be represented as the Kronecker product (4. then Then. This can be done by computing the largest singular value. Kronecker Product Matrices 49 Then. In addition. s.4. For details. In a careful implementation we would compute the first two singular values and check that the ratio of si/s\ is small enough to neglect all but the first and hence that the Kronecker product representation A r <g> Ac is an accurate representation of A. 4. 2).2. In MATLAB. the blurring matrix. where x = vec(X) and b = vec(B). as shown in Section 4. matrix-matrix computations are trivial (and efficient) to implement in MATLAB: B = Ac*X*Ar'. The structures of Ar and Ac depend on the imposed boundary condition.

X = Vc * ( (Uc' * B * U r ) . S = diag(Sc) * d i a g ( S r ) ' . Note that we do not need to explicitly form the big matrices Ur 0 Uc. Vr] = s v d ( A r ) . then. Chapter 4. XnaiVe could be implemented in MATLAB as [Uc. Sr. / notation for elementwise division. etc. S = vec(diag(£r <8> £ c ))- . [Ur. then it is more efficient to compute an LU factorization of Ar and compute X = ((U \ (L \ B ) ) / L') / U' . For example. The S VD of a Kronecker product can be expressed in terms of the S VDs of the matrices that form it. Instead we work directly with the small matrices Ur. if then The final matrix factorization is essentially the S VD of A. the naive solution computed using the SVD. Uc. using the MATLAB . Structured Matrix Computations We remark that if Ar — Ac. where S = diag (Sc) * diag (Sr) ' is an efficient way to compute the array of singular values. can be written as Notice that if B = UjBUr. where S = vec(diag(Zr <8> E c )) is an array containing the singular values of A.50 X = Ac \ B / A r ' . etc. Sc. Thus. except that it does not satisfy the usual requirement that the diagonal entries of £r ® Zc appear in nonincreasing order. Vc] = s v d ( A c ) . / S ) * V r ' .
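Collecting the statements above into one runnable sketch for a separable PSF (kronDecomp is the book's function; the boundary-condition string and the data P, center, X are placeholders):

    % Sketch: SVD-based computations for a separable PSF, following the text.
    [Ar, Ac] = kronDecomp(P, center, 'zero');    % small Kronecker factors of A
    [Uc, Sc, Vc] = svd(Ac);
    [Ur, Sr, Vr] = svd(Ar);
    S = diag(Sc) * diag(Sr)';                    % array of singular values of A
    B = Ac * X * Ar';                            % blurred image
    Xnaive = Vc * ( (Uc' * B * Ur) ./ S ) * Vr'; % naive (noise-amplifying) solution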

B C ) . use B = Ac * X * A r ' . When using a separable PSF. • To use the SVD to compute the naive solution from the blurred image. S = diag(Sc) * d i a g ( S r ) ' . center. Ar and Ac. use sr = s v d ( A r ) . Note that if the PSF is separable. and they are therefore tailored to problems with specific PSFs and specific boundary conditions. • Given: P = PSF array center = [row. . use [Ur. use [Ar. Sr.s v d ( A c ) . see the function kronDecomp in Appendix 3. basic computations with A can be performed using P.5. Vc] .- [Uc. For details. and we do not have the restrictions imposed for the FFT. X = Vc * ( (Uc' * B * Ur) .and DCT-based methods. sc = s v d ( A c ) .4./S ) * Vr' . • To compute the blurred image from the true image. ' zero') • To construct the Kronecker product terms. Vr] = s v d ( A r ) . col] = center of PSF X = true image B = blurred image BC = string denoting boundary condition (e. We summarize this information in the following VIP.. S = sc * sr' .5 Summary of Fast Algorithms In the previous three sections we have described three fast algorithms for computing a spectral decomposition or an SVD. without ever constructing A. The three algorithms use certain structures of the matrices. then computations can be implemented efficiently for any boundary condition. . use X = Ac \ B / A r ' . Ac] = kronDecomp(P. • To compute singular values of A. • To compute the naive solution from the blurred image. VIP 12. Summary of Fast Algorithms 51 We summarize the above computations in the following VIP. 4.g. SC.

It is first necessary to say a few words about the PSF. but by keeping Psmall in the upper left corner of Pbig.1).3 that generally the light intensity of the PSF is confined to a small area around its center. if Psmall is a given. s i z e ( X b i g ) ) . 4). P. the zero padding can be done as Pbig = padPSF(Psmall. we have the following fast algorithms (recall the table on mat rix structures given in VIP 9). beyond which the intensity is essentially zero. Specifically. the Gaussian PSF arrays Psmall = psfGauss([31. Pbig(l:size(Psmall. then we can simply use the MATLAB statements Pbig = zeros(size(Xbig)). Pbig = psfGauss([63. small PSF array. and we want to pad with zeros so that it has the same dimension as a large image. 1:size(Psmall. Random values are then added to the blurred pixel values to simulate additive noise. For example. has much smaller dimensions than the associated image arrays. we created a simple function. Xbig. PSF Arbitrary Doubly symmetric Separable Boundary condition Periodic Reflexive Arbitrary Matrix structure BCCB BTTB + BTHB +BHTB+BHHB Kronecker product Fast algorithm Two-dimensional FFT Two-dimensional DCT 2 small SVDs 4. Note that the zero padding could be done in a variety of ways. both PSF arrays have the same center. Therefore. For example. essentially contain the same information about the PSF. 63]. X and B. to conserve storage. We remark . However. This is convenient because computations involving the PSF usually require knowledge of its center. For spatially invariant PSFs. 4). It is a simple matter to extend the dimensions of P by padding with zeros. then P must have the same shape as X and B. it is often the case that the PSF array.6 Creating Realistic Test Data Image deblurring test data is often created artificially by blurring a known true image with a known PSF. 31]. padPSF.2\ = Psmall Because we need to do this kind of zero padding fairly often. Structured Matrix Computations VIP 13. a few issues need to be considered. The examples and sample MATLAB code in this section require data and certain M-files that can be obtained from the book's website. Recall from Section 3. for this purpose (see Appendix 3).52 Chapter 4. In order to create such test data using the computational techniques discussed in this section. if we want to use the computational techniques outlined in this chapter.

size(Xbig)). can be used. X = Xbig(51:562.51:562). The left image was created using a blurring matrix A with zero boundary conditions. Any boundary condition can be used to perform the blurring operation on the large image.2. we must first choose a matrix model (i. a "true" image. 1-center)). Pbig = padPSF(P. Sbig = fft2(circshift(Pbig. . leading to artificial dark edges. the image B is a realistic representation of a blurred image taken from an infinite scene. The problem with this approach is that we must enforce a specific boundary condition on the blurring operation. Note that if we are given blurred image data. We now turn to the issue of creating a blurred image. P.6. X. This can be done by performing the blurring operation on a large image. Note also that we could take more rows and columns from Xbig and Bbig. 6). see. for example. the following MATLAB statements: Xbig = double(imread('iograyBorder. [54]. Figure 4. such as overlap-add and overlap-save techniques. from which a central part is extracted. B = Bbig(51:562. fix a boundary condition) for A. but if we are creating blurred image data. as long as the number of discarded rows and columns around the boundary is at least half the diameter of the nonzero extent of the PSF.4.512] .* fft2(Xbig))). For example. we could use the approach outlined in VIP 10 to create the blurred image. and if storage is an issue. To perform the blurring operation. Bbig = real(ifft2(Sbig . center] = psfGauss( [512.. then alternative schemes. and a blurred image B. then we try to make a best guess at what boundary condition is most realistic.51:562). [P. which may not accurately model the actual scene.2. Consider. The right image was created via a blurring of a larger image.e. for example. These computations provide test data consisting of a set of 512 x 512 images: the PSF array. then we should try to simulate blurring using correct data from the larger scene as illustrated in Figure 4. Because the nonzero extent of the PSF is much smaller than the number of rows and columns that were discarded from the "big" images. Creating Realistic Test Data 53 that padding trades storage for time.tif')). followed by extraction of the central part.

B = B + 0.. 23. Now use these efficient approaches for larger n x n Gaussian PSFs. we can add Gaussian white noise so that ||e|J2/||Ax||2 — 0. we add 1% noise to the blurred data: E = randn(size(B)).'fro')*E. We will use this process to generate test problems in the remainder of the book. using the built-in MATLAB function randn. The specific example considered in this section is implemented in chapter4demo. how large can you make the radius r in the PSF before the errors start to dominate the reconstruction? What happens if you perform the same tests using the Fast algorithm in VIP 10? CHALLENGE 10. 243. 349. ' f r o ' ) a n d n o r m ( B . 729. For small separable PSFs it is possible to construct the blurring matrix A explicitly using our kronDecomp function and the MATLAB built-in function kron. 613.01.01*norm(B. 32. which is equivalent to computing 2-norms of vectors e and Ax. What do these results suggest? For example. 887. ' f r o ' ) compute Frobenius norms of arrays E and B. For your favorite test image and periodic boundary conditions. can you explain why one approach is faster/slower than another? Can you explain the FFT and DCT timings for /. use the PSF for out-of-focus blur together with the above approach to generate noisy images for different noise levels. = 887 compared to the timings torn = 729 and n = 1024? .e. with n = 64. This can be done by adding random perturbations to the entries in the blurred image.47. 1024. 128. 181. B = vec(Ax). m. CHALLENGE 9. E = E / norm(E. Structured Matrix Computations The final step in creating realistic test data is to include additive noise. Compare this with the time required to compute the same quantities (i. the noise-free blurred image. Then use the fast algorithm in V I P 1 I to compute the naive solution. that is. For each noise level. Use the MATLAB tic and toe functions to measure the time required to compute the singular values and eigenvalues of A. and 12. Do this for n x n Gaussian PSFs with n = 16. 81.'fro'). Notice that before adding the noise in the third step in the above computation. 256.54 Chapter 4. For example. I I . which may be obtained from the book's website. 512. Also observe that norm ( E . the array S) using the efficient methods given in V'IPs 10. 101.

In particular. all the singular values decay gradually to zero and the condition number cond(A) = a\ jaN is very large. we might try to use the SVD approach to damp the effects caused by division by the small singular values. Recall that the naive solution (cf. 5. . Art is light synthesis.Chapter 5 SVD and Spectral Analysis Science is spectral analysis.5)-(1.7)) can be written as 55 .Karl Kraus The previous chapter shows that it is easy to solve noise-free problems that have structure. satisfying U 7 U = IN and VTV = I#. and X is a diagonal matrix with entries a\ > 02 > • • • > crN > 0. (1. but what should we do when noise is present? The answer is to filter the solution in order to diminish the effects of noise in the data.1) Recall from Section 1. For a blurring matrix. In this chapter we use the SVD to analyze and build up an understanding of the mechanisms and difficulties associated with image deblurring. The SVD analysis that we carried out is independent of the algorithm that we choose to solve the image deblurring problem. That is. we used the SVD to explain how the noise e in the data (the recorded blurred image) enters the reconstructed image in the form of the inverted noise A'1 e.4 that we define the SVD of the N x N matrix A to be where U and V are orthogonal matrices. but it does suggest the SVD as one method for dealing with the inverted noise. Our example at the end of Chapter 1 introduced the SVD as a tool for analysis in image deblurring.1 Introduction to Spectral Filtering (5.

One approach to damp the effects caused by division by small singular values is to simply discard all SVD components that are dominated by noise—typically the ones for indices i above a certain truncation parameter k. The resulting method is, for obvious reasons, referred to as the truncated SVD, or TSVD, method,⁶ and it amounts to computing an approximate solution of the form

(5.2)

In spite of its simplicity, this method can work quite well. Figure 5.1 shows three TSVD solutions x_k computed for three different values of the truncation parameter k. The original image is iogray.tif, which was blurred with a Gaussian PSF, followed by the addition of Gaussian white noise e with ||e||_2/||b||_2 = 0.05. The corresponding solutions range from oversmoothed to undersmoothed, as k goes from small to large values.

⁶It is occasionally referred to as the pseudo-inverse filter.

Figure 5.1. Exact image (top left) and three TSVD solutions x_k to the image deblurring problem, computed for three different values of the truncation parameter: k = 658 (top right), k = 2813 (bottom left), and k = 7243 (bottom right).
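For image-sized problems the sum in (5.2) is never formed term by term. With a separable PSF, for example, the TSVD solution can be evaluated with the small SVD factors from Chapter 4; the sketch below is one way to do this, where Uc, Sc, Vc, Ur, Sr, Vr are the SVD factors of Ac and Ar, B is the noisy blurred image, and the truncation parameter k is chosen by the user.

    % Sketch: TSVD solution x_k for a separable blur, using the small SVD
    % factors Ac = Uc*Sc*Vc' and Ar = Ur*Sr*Vr'; k is a user-chosen parameter.
    S    = diag(Sc) * diag(Sr)';             % singular values of A (unsorted array)
    Bhat = Uc' * B * Ur;                     % spectral coefficients of the data
    [sortedS, idx] = sort(S(:), 'descend');
    Phi  = zeros(size(S));
    Phi(idx(1:k)) = 1;                       % keep only the k largest singular values
    Xk = Vc * ( (Phi .* Bhat) ./ S ) * Vr';  % filtered (TSVD) reconstruction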

Note that as k increases, more terms are included in the SVD expansion, and consequently components with higher frequencies are included; hence we can think of k as a way to control how much smoothing (or low-pass filtering) is introduced in the reconstruction. The smallest value k = 658 was deliberately chosen too small, to show the effect of oversmoothing, while the largest value k = 7243 shows an undersmoothed reconstruction with too much influence from the high-frequency components of the noise.

The TSVD method is an example of the general class of methods that are called spectral filtering methods. Spectral filtering amounts to filtering each of the components of the solution in the spectral basis; these methods have the form

  x_filt = sum_{i=1}^{N} phi_i (u_i^T b / sigma_i) v_i,   (5.3)

where the filter factors phi_i are chosen such that phi_i ~ 1 for large singular values and phi_i ~ 0 for small singular values, in such a way that the influence from the noise in the blurred image is damped. Different spectral filtering algorithms involve different choices of the filter factors.

General image deblurring algorithms inevitably involve some kind of filtering in order to damp the influence of the noise. The question is, how should this filtering be done, and how much filtering should be used? A large amount of filtering ensures that noise is highly suppressed, but at the cost of loss of information. Thus we really want to determine a filter that balances the details that can be recovered with the influence of the noise. There are several important issues that need to be addressed: choosing the filter factors, choosing proper bases (we might prefer to use a Fourier basis instead of the SVD), and designing efficient implementations for large-scale problems. This chapter describes a class of deblurring algorithms based on filtering methods; we give some examples of commonly used filters and show how to implement them efficiently in MATLAB.

5.2 Incorporating Boundary Conditions

In the example used in Figure 5.1, we implicitly assumed zero boundary conditions. This assumption manifests itself as clearly visible oscillations in the reconstructions, having their largest amplitude near the borders of the image. Now we illustrate how the use of other boundary conditions can reduce these oscillations and thus improve the quality of the reconstructed image. As explained in Section 4.6, cf. VIP 14, we first blur a large exact image to produce the large and blurred image, and then we "chop off" the borders of this image to obtain the resulting image B; in this way, any artifacts from boundary conditions in the model problem will appear outside the borders of the chopped image B. The resulting blurred image is shown on the right in Figure 4.2. Finally, noise is added to this image. The problem that we need to solve in order to compute the desired reconstruction is

  A x = b,   with   A = A_0 + A_BC,   (5.4)

where A_0 is the BTTB matrix resulting from zero boundary conditions, and A_BC is a boundary correction term that incorporates specific boundary conditions into the model. Similar to A_0, the matrix A_BC is structured, and (as we saw in Chapter 4) its form depends on the type of boundary condition. Reconstructions based on A = A_0 correspond to zero boundary conditions.

Independent of the specified boundary condition, the modified problem (5.4) has the characteristics that make image deblurring very challenging: gradually decaying singular values, a very large condition number, and a severe sensitivity to the noise in the data. Hence, we must also use a spectral filtering method to solve the modified problem. The only change is that we must now use the SVD of the corrected matrix A (instead of the SVD of A_0).

Figure 5.2 shows reconstructions similar to those in Figure 5.1, except that we now use reflexive boundary conditions (leading to a matrix A with BTTB + BTHB + BHTB + BHHB structure, cf. Section 4.3). From left to right, we use truncation parameters k = 703, k = 2865, and k = 4638. The bordering artifacts have disappeared.

Figure 5.2. TSVD solutions x_k using reflexive boundary conditions on a model problem created by the bordering technique (to ensure that the blurred image contains information from outside the edges).

5.3 SVD Analysis

In order to get a better understanding of the properties of the reconstructions computed by means of spectral filtering in the form of (5.3), this section consists of a case study for symmetric Gaussian blur (with s1 = s2 and rho = 0). To simplify the discussion we keep the problem dimensions small and consider the test image X in Figure 5.3 with m = n = 31

Figure 5.3. The 31 x 31 test image X used in this section.

(which is a subimage from the image iogray.tif showing a small detail), and we assume zero boundary conditions such that A_BC = 0 and A = A_0.

Let us first look at the behavior of the singular values sigma_i. Figure 5.4 shows three 31 x 31 PSF arrays P together with the singular values of the corresponding 961 x 961 blurring matrices A = A_0. The decay of the singular values depends on the parameters s1 and s2, which define the width of the Gaussian function and, thus, the amount of blurring. We already mentioned in Chapter 1 that the singular values decay gradually. Notice that as the blurring gets worse, i.e., as the PSF gets "wider," the singular values decay faster. We note that at one extreme, when the PSF consists of a single nonzero pixel, the matrix A is the identity and all singular values are identical (and the condition number of the matrix is one). In the other extreme, when the PSF is so wide that all entries of A are equal, then all but one of the singular values are zero. Even for narrow PSFs with a slow decay in singular values, the condition number cond(A) = sigma_1/sigma_N becomes large for large images.

Figure 5.4. Three Gaussian PSFs (with increasing widths s1 = s2) and the singular values sigma_i of the corresponding blurring matrices A = A_0 (we assume zero boundary conditions).

Recall that in spectral filtering we express the solution x_filt as a sum of right singular vectors v_i. The coefficients in this expansion are phi_i u_i^T b / sigma_i, where u_i^T b / sigma_i are the coefficients in the expansion of the naive solution, and the filter factors phi_i are introduced to damp the undesired components. Hence we can obtain an understanding of the properties of x_filt by studying the behavior of the quantities sigma_i, u_i^T b, and v_i. The behavior of the coefficients u_i^T b is determined by both the blurring (via the singular vectors u_i) and the data b. Figure 5.5 shows plots of |u_i^T b| for the same three

matrices A as in Figure 5.4, and for two different levels of the noise component E in the model B = B_exact + E.

Figure 5.5. Plots of singular values sigma_i (colored lines) and coefficients |u_i^T b| (black dots) for the three blurring matrices A defined by the PSFs in Figure 5.4, and two different levels of the noise E in the model B = B_exact + E. Top row: ||E||_F = 3e-4; bottom row: ||E||_F = 3e-2.

We see from Figure 5.5 that, initially, the quantities |u_i^T b| decay, at a rate that is slightly faster than that of the singular values, while later the coefficients level off at a noise plateau determined by the level of the noise in the image. The index where the transition between the two types of behavior occurs depends on the noise level and the decay of the unperturbed coefficients. By a visual inspection of the plots in Figure 5.5, we see that for ||E||_F = 3e-4 the transition occurs very roughly around index 900 (narrow PSF), 400 (medium PSF), and 250 (wide PSF). Similarly, for the higher level of noise ||E||_F = 3e-2 the transition occurs roughly around index 400 (narrow PSF), 200 (medium PSF), and 150 (wide PSF). These are precisely the values of k that should be used in the TSVD method.

The insight we get from Figure 5.5 is that only some of the coefficients u_i^T b carry clear information about the data, namely, those that are larger in absolute value than the noise plateau. Those coefficients lying at the noise plateau are dominated by the noise, hiding the true information. In other words, for the initial coefficients we have u_i^T b ~ u_i^T b_exact, while the remaining coefficients satisfy u_i^T b ~ u_i^T e. For any spectral filtering method, we therefore choose the filters phi_i so that the information in the initial coefficients dominates the filtered solution. If we choose the TSVD truncation parameter k too large, i.e., beyond the transition index for the coefficients u_i^T b, then we include, in addition to the desired SVD components, some components that are dominated by the noise. We say that such a reconstruction, in which we have included too many high-frequency SVD components, is undersmoothed.
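For a problem small enough that A and b are available explicitly, the plots discussed above can be reproduced with a few lines of MATLAB; this sketch simply overlays the singular values and the coefficients |u_i^T b| on a logarithmic scale.

  % "Picard plot": singular values and coefficients |u_i^T b|.
  [U, S, V] = svd(A);
  s    = diag(S);
  coef = abs(U' * b);
  semilogy(1:length(s), s, '-', 1:length(coef), coef, '.')
  legend('\sigma_i', '|u_i^T b|'), xlabel('index i')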

If, on the other hand, we choose k too small, then we include too few SVD components, leading to a reconstruction that has a blurred appearance because it consists mainly of low-frequency information. A reconstruction with too few SVD components is said to be oversmoothed.

To illustrate this, we use the problem in the middle of the bottom row in Figure 5.5 (the medium PSF and the noise level 3e-2), and we compute TSVD solutions x_k for k = 100, 200, 300, and 400. Figure 5.6 shows these solutions, along with plots of the singular values sigma_i, the right-hand side coefficients u_i^T b, and the solution coefficients u_i^T b / sigma_i. Clearly, k = 100 gives a blurred TSVD solution because too few SVD components are used. On the other hand, for k = 300 some noise has entered the TSVD solution, and for k = 400 the TSVD solution is dominated by the inverted noise. k = 200 is the best of the four choices.

Figure 5.6. Top row: singular values sigma_i (green solid curve), right-hand side coefficients u_i^T b (black dots), and TSVD solution coefficients |u_i^T b / sigma_i| (blue dots) for k = 100, 200, 300, and 400. Bottom row: the corresponding TSVD reconstructions. We use the medium PSF from Figure 5.4 and the higher level of noise ||E||_F = 3e-2 from Figure 5.5.

VIP 15. The spectral components that are large in absolute value primarily contain pure data, while those with smaller absolute value are dominated by noise. The former components typically correspond to the larger singular values.

5.4 The SVD Basis for Image Reconstruction

Recall that we can always go back and forth between an image and its vector representation via the vec notation. That is, the image X_filt is the two-dimensional representation of the filtered solution x_filt, and the basis images V_i are the two-dimensional representations of the singular vectors v_i.

Using these quantities, we can thus write the filtered solution image as

  X_filt = sum_{i=1}^{N} phi_i (u_i^T b / sigma_i) V_i.   (5.5)

This expression is similar to (5.3), except that we write our reconstructed image X_filt as a weighted sum of the basis images V_i, which constitute a basis for the filtered solution. These matrices satisfy v_i = vec(V_i). The expansion coefficients are the same as in (5.3), and we already studied them in Figures 5.5 and 5.6.

Let us now look at the basis images V_i. Figure 5.7 shows V_i for i = 1, ..., 16, where the singular vectors v_i are from the SVD of the matrix A for the middle PSF in Figure 5.4. This figure is another illustration of our claim in Chapter 1 that singular vectors corresponding to large singular values carry mainly low-frequency information, while singular vectors corresponding to the smaller singular values tend to represent more high-frequency information. As the index i increases, the matrix V_i tends to have more high-frequency content, ranging from the very flat appearance of V_1 to matrices with more oscillations in the vertical and/or horizontal directions.

Figure 5.7. Plots of the first 16 basis images V_i, arising from the SVD of the matrix A for the middle PSF in Figure 5.4. Green represents positive values in V_i while red represents negative values.

At this stage we recall the fact that if A is N x N, then the singular values are uniquely determined. Unfortunately, the same is not true of the singular vectors: it can be shown that if the singular values sigma_i are distinct, then the singular vectors are uniquely determined up to signs, but this does not present any difficulties in our analysis of the spectral properties of the image deblurring problem. In addition, the subspaces spanned

by the singular vectors corresponding to distinct singular values are unique. For symmetric Gaussian blur with s1 = s2, there are many identical pairs of singular values; for example, in Figure 5.7 only sigma_1, sigma_4, and sigma_11 are distinct, while the rest appear in pairs.

To further illustrate the role played by the SVD basis images V_i, let us consider the same PSF but with either periodic or reflexive boundary conditions in the blurring model. That is, we now study the right singular vectors of the corrected matrix A = A_0 + A_BC, where A_BC represents the correction corresponding to either periodic or reflexive boundary conditions. The first 16 matrices V_i for each boundary condition are shown in Figures 5.8 and 5.9, respectively. These basis images are quite different from those for zero boundary conditions (in Figure 5.7). This is due to the fact that each image V_i satisfies the boundary conditions imposed on the problem. Hence, the reconstructed image will also be quite different.

VIP 16. The basis images V_i of A have increasingly more oscillations as the corresponding singular values sigma_i decrease, and thus the singular vectors provide the necessary frequency information we desire. In addition, each V_i must satisfy the specified boundary conditions.

5.5 The DFT and DCT Bases

The SVD gives us a general way to express the filtered solution in the forms (5.3) and (5.5). We also know from Chapter 4 that in many important cases we can compute the filtered solution efficiently via the FFT or the DCT, because these methods immediately provide the spectral factorization of the matrix A. Because of this, it is interesting to study the special basis components associated with these two methods.

Let us first consider the case where A is a BCCB matrix as described in Section 4.2. This matrix has the spectral decomposition A = F* Lambda F, in which F is the unitary two-dimensional DFT matrix. This matrix can be written as a Kronecker product of the unitary one-dimensional DFT matrices F_c and F_r of dimensions m x m and n x n, respectively, where the Kronecker product is defined in Chapter 4. They are complex symmetric, F_c^T = F_c and F_r^T = F_r, and therefore F^T = F and F* = conj(F), in which conj(.) denotes elementwise complex conjugation. (This fact is used in VIP 10.) From the above expression for F it follows that

  F b = vec(F_c B F_r^T),

in which the matrix F_c B F_r^T is identical (up to a scaling factor) to the two-dimensional FFT of B.

Figure 5.8. The first 16 basis images V_i for the PSF in Figure 5.7, with periodic boundary conditions in the blurring model.

Figure 5.9. The first 16 basis images V_i for the PSF in Figure 5.7, with reflexive boundary conditions in the blurring model.
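The basis images in Figures 5.7-5.9 are easy to reproduce for the 31 x 31 test problem, since the corresponding matrix A is only 961 x 961; the following sketch assumes A has been formed explicitly (the display style is ours, not the book's).

  % Display the first 16 SVD basis images V_i = reshape(v_i, m, n).
  [U, S, V] = svd(A);
  m = 31;  n = 31;
  for i = 1:16
      subplot(4, 4, i)
      imagesc(reshape(V(:, i), m, n)), axis image, axis off
  end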

If we insert the relation for F b into the expression for the naive solution, we obtain an expression for the reconstruction in terms of the two-dimensional DFT of B and the eigenvalues of A. It is straightforward to show that if we introduce an m x n matrix Psi, defined from these spectral coefficients of B and the eigenvalues of A, then the naive reconstructed image is given by

  (5.6)

where psi_ij are the elements of the matrix Psi, f_c,i is the ith column of F_c, and f_r,j is the jth column of F_r. Equation (5.6) shows that in the BCCB case the basis images for the solution are the conjugate outer products conj(f_c,i f_r,j^T) of all combinations of columns of the two DFT matrices. Figure 5.10 illustrates a few of these basis images.

Figure 5.10. Plots of the real parts of some of the DFT-based basis images conj(f_c,i f_r,j^T), used when the blurring matrix A is a BCCB matrix. The imaginary parts of the basis images have the same appearance. Blue and red represent, respectively, positive and negative values. We used m = n = 256.
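A sketch of how such a basis image can be formed and displayed; the unitary DFT matrices are built from the identity, and the indices i and j are arbitrary choices made only for illustration.

  % Form unitary one-dimensional DFT matrices and display one DFT basis image.
  m = 256;  n = 256;
  Fc = fft(eye(m)) / sqrt(m);        % unitary one-dimensional DFT matrix
  Fr = fft(eye(n)) / sqrt(n);
  i = 3;  j = 5;                     % arbitrary indices, chosen for illustration
  Bij = conj(Fc(:, i) * Fr(:, j).');
  imagesc(real(Bij)), axis image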

POINTER. The unitary DFT matrices F_c and F_r satisfy conj(F_c) = F_c J_c and conj(F_r) = F_r J_r, where J_c and J_r are permutation matrices that fix the first row/column and reverse the order of the remaining ones. Since B is real, it follows that its two-dimensional DFT has a corresponding conjugate symmetry. This symmetry is used in the fft2 algorithm, when applied to real data, to halve the amount of computational work.

Next we consider the case from Section 4.3 where A is a sum of BTTB, BTHB, BHTB, and BHHB matrices. We can write A = C^T Lambda C, where C is the two-dimensional DCT matrix. Similar to the BCCB case above, the matrix C can be written as a Kronecker product of the one-dimensional DCT matrices C_c and C_r of dimensions m x m and n x n, respectively. The rows of these matrices consist of sampled and scaled cosine functions. Inserting this into the expression for the naive solution, and noticing that C b = vec(C_c B C_r^T), we obtain the analogous expression in the DCT basis. The matrix C_c B C_r^T is simply the two-dimensional DCT of the image B. In analogy with the DFT approach, we now introduce the m x n matrix Psi, defined from the DCT coefficients of B and the eigenvalues of A, and then we can write the naive reconstructed image as

  (5.7)

Here, c_c,i is the ith row of C_c, and similarly c_r,j is the jth row of C_r. Equation (5.7) shows that the basis images in this case are the outer products of all combinations of rows of the two DCT matrices. Figure 5.11 shows a few of these basis images.
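The relation C b = vec(C_c B C_r^T) can be verified numerically; this quick check assumes the dct function (Signal Processing Toolbox) and dct2 (Image Processing Toolbox) are available.

  % Verify that dct2(B) equals Cc * B * Cr' for one-dimensional DCT matrices.
  [m, n] = size(B);
  Cc = dct(eye(m));                         % orthonormal one-dimensional DCT matrices
  Cr = dct(eye(n));
  norm(dct2(B) - Cc * B * Cr', 'fro')       % should be at the level of rounding errors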

Figure 5.11. Plots of some of the DCT-based basis images used in the treatment of BTTB + BTHB + BHTB + BHHB matrices. Cyan and red represent, respectively, positive and negative values. We used m = n = 256.

5.6 The Discrete Picard Condition

Most images are dominated by low-frequency spectral components that are needed to describe the overall features of the image. High-frequency components are, of course, also needed to represent the details of the image, but these components are usually smaller in magnitude than the low-frequency components. This means that the expansion coefficients in a spectral basis will tend to decay in magnitude as the frequency increases.

We illustrate this fundamental property with an analysis of the iogray.tif image from Figure 5.1. For this image, we computed the three 512 x 512 arrays (or transforms) which represent the spectral expansion coefficients in the DFT basis, the DCT basis, and the SVD basis for rotationally symmetric Gaussian blur (which is separable because rho = 0). Excerpts of these arrays are shown in the top of Figure 5.12, where, for clarity, we have extracted the parts of the arrays that correspond to the lower frequencies. Note that we display the magnitudes of the coefficients. In the DFT basis, the low-frequency information is represented by the coefficients in the four "corners" of the array, and we see that the largest coefficients (in magnitude) are

indeed located here. We also see that the magnitude of the coefficients tends to decay, on average, away from the four "corners" of the array. This verifies that the image is dominated by the low-frequency DFT basis images. Precisely the same is true in the DCT and the SVD bases, where the largest coefficients (in absolute value) are located in the upper left "corner" of the arrays; the image is also dominated by low-frequency DCT and SVD basis images.

Figure 5.12. Two representations of the magnitudes of the spectral components of the iogray.tif image in the DFT, DCT, and SVD bases for separable Gaussian blur. Top: the coefficients as they appear in the computed arrays. Bottom: the coefficients ordered according to decreasing eigenvalues or singular values.

The same information can also be represented or visualized in a different way, by plotting the spectral components (i.e., the elements of F_c X F_r^T, C_c X C_r^T, and V_c^T X V_r) in the order dictated by a decreasing ordering of the corresponding eigenvalues or singular values of the blurring matrix A. The eigenvalues and singular values are computed as explained in VIPs 10-12. The bottom plots in Figure 5.12 show the first 5000 spectral components (in magnitude) according to this ordering, in which the smaller indices correspond to the lower frequencies. The overall tendency in all three plots is a decay of the magnitude of the spectral coefficients. We have already seen such plots of the SVD coefficients in Figures 5.5 and 5.6. The conclusion is therefore the same as before: the image X is dominated

by the low-frequency spectral components corresponding to the largest eigenvalues or singular values.

Now recall that the spectral coefficients of the exact blurred image B_exact (computed via b_exact = A x) are obtained by multiplying the spectral coefficients for X by the eigenvalues or singular values of the blurring matrix A. Due to the decay of the eigenvalues and singular values, it follows immediately that the spectral coefficients of the blurred image decay faster than those of the sharp image. This reflects the fact that high-frequency information is highly damped (or even lost) in the blurring process, and that blurred images are completely dominated by low-frequency components (giving the blurred appearance).

Concerning the reverse process of image deblurring, given a blurred and noisy image B, the above analysis shows that we can only hope to compute an approximate reconstruction if the spectral components of B decay faster than the eigenvalues or singular values. This requirement on the data, or right-hand side b, is known as the discrete Picard condition; specifically, for the SVD formulation this condition says that the right-hand side coefficients |u_i^T b| must decay (on average) faster than the corresponding singular values.

Due to the presence of the noise in the recorded image B = B_exact + E, we cannot expect all the coefficients |u_i^T b| to decay, as is the case for the coefficients |u_i^T b_exact|. Rather, the coefficients |u_i^T b| will level off when they become dominated by the noise components. The index for which this transition occurs depends on the decay of the exact coefficients |u_i^T b_exact| and the magnitude of the noise. When we compute an approximate reconstruction of the sharp image, no matter which basis (DFT, DCT, or SVD) is used, we should include only the components that correspond to coefficients |u_i^T b| that are above the noise level.

VIP 17. For most image deblurring problems, the SVD coefficients |u_i^T b_exact| satisfy the discrete Picard condition: the coefficients |u_i^T b| decay (on average) faster than the singular values sigma_i, until they level off when the noise in the image starts to dominate the coefficients.

CHALLENGE 11. Use conv2 to filter the pumpkins.tif image with a low-pass and a high-pass filter from Challenge 7. Then compute and display the two-dimensional DFT and/or the two-dimensional DCT of the image itself, as well as of the filtered versions of the image (using fft2 and/or dct2), and inspect the three images. You should see that the low-pass filter damps all the high frequencies, while the high-pass filter damps the low frequencies and magnifies higher frequencies. Smoothing (or blurring) acts as a low-pass filter that damps the higher frequencies in the image. High-pass filtering, which damps the lower frequencies, emphasizes the edges in the image.
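The following sketch is along the lines of Challenge 11; the simple averaging and Laplacian stencils below merely stand in for the filters of Challenge 7, which we do not reproduce here.

  % Low-pass and high-pass filtering of an image, and the corresponding spectra.
  X  = double(imread('pumpkins.tif'));
  lp = ones(3)/9;                        % simple averaging (low-pass) stencil
  hp = [0 -1 0; -1 4 -1; 0 -1 0];        % simple Laplacian (high-pass) stencil
  Xlp = conv2(X, lp, 'same');
  Xhp = conv2(X, hp, 'same');
  figure, imagesc(log10(abs(fft2(X))   + 1)), axis image, title('DFT of X')
  figure, imagesc(log10(abs(fft2(Xlp)) + 1)), axis image, title('DFT of low-pass X')
  figure, imagesc(log10(abs(fft2(Xhp)) + 1)), axis image, title('DFT of high-pass X')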

CHALLENGE 12. Deblurring Walk-Through, Part I.

We provide two blurred images, shown above, both of which were degraded by out-of-focus blur. The PSF was generated by our psfDefocus function with dim = 40 for outoffocus1.tif and dim = 30 for outoffocus2.tif. The blurred images were rounded to integers in the range [0, 255], thus simulating quantization errors. The best reconstructions for these images are obtained with reflexive boundary conditions.

Your task here is to perform an analysis of the discrete Picard condition for one or both images. Specifically, you should plot the two-dimensional DCT coefficients for the blurred image (computed with dct2 or dcts2) and the eigenvalues of the blurring matrix, computed as follows:
1. generate the small PSF array via a call to psfDefocus;
2. expand it to an m x n array via a call to padPSF;
3. compute the eigenvalues as explained in VIP 11.
Use a semilogarithmic scale and plot the absolute values. You should see that the discrete Picard condition is satisfied for both images. We return to the deblurring of these two images in Challenges 14, 15, and 18.
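A rough sketch of this Picard plot follows. The disk-shaped PSF is built by hand as a stand-in for psfDefocus and padPSF (whose exact calling sequences are not reproduced here), the assumed PSF center is an illustration only, and dctshift is the helper used in VIP 19 for reflexive boundary conditions.

  % Discrete Picard condition check for the out-of-focus test image.
  B = double(imread('outoffocus1.tif'));
  [m, n] = size(B);
  dim = 40;  ctr = [dim/2, dim/2];               % PSF array size and assumed center
  [jj, ii] = meshgrid(1:dim, 1:dim);
  P = double((ii - ctr(1)).^2 + (jj - ctr(2)).^2 <= (dim/2 - 1)^2);
  P = P / sum(P(:));                             % normalized disk (out-of-focus) PSF
  Pbig = zeros(m, n);  Pbig(1:dim, 1:dim) = P;   % pad the PSF to the image size
  e1 = zeros(m, n);  e1(1,1) = 1;
  S  = dct2(dctshift(Pbig, ctr)) ./ dct2(e1);    % eigenvalues, as in VIP 11
  Bhat = dct2(B);
  [~, idx] = sort(abs(S(:)), 'descend');
  semilogy(abs(S(idx)), '-'), hold on
  semilogy(abs(Bhat(idx)), '.'), hold off
  legend('|eigenvalues|', '|DCT coefficients of B|')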

Chapter 6
Regularization by Spectral Filtering

You've got to go by or past or through boredom, as through a filter, before the clear product emerges. - F. Scott Fitzgerald

The previous chapter demonstrated that filtering is needed when noise is present. This chapter takes a closer look at filtering, which is also referred to as regularization because it can be interpreted as enforcing certain regularity conditions on the solution. The degree of regularization is governed by a regularization parameter that should be chosen carefully. We focus on two candidate regularization methods (TSVD and Tikhonov) and three candidate ways to compute the regularization parameter (the discrepancy principle, generalized cross validation, and the L-curve criterion).

POINTER. Many of the algorithms discussed in this chapter are implemented in the programs found in Appendices 1 and 2 and at the book's website. You can follow along with the codes as the algorithms are defined; see in particular
  tik_dct    tsvd_dct
  tik_fft    tsvd_fft
  tik_sep    tsvd_sep
  gcv_tik    gcv_tsvd

6.1 Two Important Methods

The SVD analysis in the previous chapter motivates the use of spectral filtering methods because these methods give us control, via the filter factors, over the spectral contents of the deblurred images. Spectral filtering methods work by choosing the filter factors phi_i in the filtered expansion

  x_filt = sum_{i=1}^{N} phi_i (u_i^T b / sigma_i) v_i   (6.1)

in order to obtain a solution with desirable properties. These methods operate on the data b in the coordinates u_i^T b determined by the vectors u_i (i = 1, ..., N) and express the solution x_filt in coordinates v_i^T x determined by the vectors v_i (i = 1, ..., N). This is the spectral coordinate system, since these vectors are the eigenvectors of A^T A and A A^T, respectively. In this section we discuss the two most important spectral filtering methods.

The TSVD Method. We saw that solving the equation A x = b exactly did not produce a good solution when the data b was contaminated by noise. We clearly want the residual b - A x to be small, but if we make it zero by choosing x = A^{-1} b, then

  x = sum_{i=1}^{N} (u_i^T b / sigma_i) v_i.

This quantity becomes unrealistically large when the magnitude of the noise in some direction u_i greatly exceeds the magnitude of the singular value sigma_i. Instead, we filter the spectral solution via the filtered expansion in (5.3). For this method, we define the filter factors to be one for large singular values, and zero for the rest. More precisely,

  phi_i = 1 for i = 1, ..., k,   phi_i = 0 for i = k+1, ..., N.   (6.2)

The parameter k is called the truncation parameter and it determines the number of SVD components maintained in the regularized solution. Note that k always satisfies 1 <= k <= N. This is the method used, for example, to compute the solutions shown in the previous chapter.

The Tikhonov Method. For this method we define the filter factors to be

  phi_i = sigma_i^2 / (sigma_i^2 + alpha^2),   (6.3)

where alpha > 0 is called the regularization parameter. This choice of filter factors yields the solution vector x_alpha for the minimization problem

  min_x { ||b - A x||_2^2 + alpha^2 ||x||_2^2 },   (6.4)

as we will discuss further in Section 7.2. This problem is motivated by the fact that we clearly want ||b - A x||_2 to be small, but in order to reduce the effect of error in the components u_i^T b, we also want to keep ||x||_2 reasonably small. The minimization problem in (6.4) ensures that both the norm of the residual b - A x_alpha and the norm of the solution x_alpha are somewhat small.
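A small sketch of the two choices of filter factors, for a given vector s of singular values; the values of k and alpha below are placeholders only.

  % TSVD and Tikhonov filter factors, and the corresponding filtered solution.
  N = length(s);
  k = 200;  alpha = 0.05;                    % placeholder parameter values
  phi_tsvd = [ones(k,1); zeros(N-k,1)];      % (6.2)
  phi_tik  = s.^2 ./ (s.^2 + alpha^2);       % (6.3)
  x_filt   = V * (phi_tik .* (U'*b) ./ s);   % filtered expansion (6.1)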

POINTER. Filtering is a common task in signal processing. A filter replaces a signal b by a filtered combination of its spectral components f_i b, where f_i is a row of the unitary Fourier transform matrix; that is, rather than the SVD coordinate system U^T b, a Fourier coordinate system F b is often used. Filters are chosen to diminish the effects of noise. For a low-pass filter, the filter factors for low-frequency components are close to one, while filter factors for high-frequency components are close to zero. The TSVD and Tikhonov methods are analogous to this, but the basis vectors are tailored to the blurring function. See [34] for more information about Fourier filtering.

VIP 18. We discuss the TSVD and Tikhonov methods for choosing filter factors for deblurring. For TSVD regularization, we choose the truncation parameter k so that the residual ||b - A x||_2 is reasonably small, but the solution x does not include components corresponding to the small singular values sigma_{k+1}, ..., sigma_N. The parameter alpha in Tikhonov's method acts in the same way as the parameter k in the TSVD method: it controls which SVD components we want to damp or filter.

We now consider the effect of the choice of the parameter alpha. Consider first a filter factor phi_i for which sigma_i >> alpha (which is the case for some of the first filter factors). Then, using the Taylor expansion (1 + epsilon)^{-1} = 1 - epsilon + epsilon^2 + O(epsilon^3), we obtain

  phi_i = sigma_i^2 / (sigma_i^2 + alpha^2) ~ 1.

Next we consider a filter factor phi_i for which sigma_i << alpha (which is the case for some of the last filter factors). Again using the Taylor expansion of (1 + epsilon)^{-1}, we obtain

  phi_i ~ sigma_i^2 / alpha^2.

Thus we can conclude that the Tikhonov filter factors satisfy phi_i ~ 1 for sigma_i >> alpha and phi_i ~ sigma_i^2/alpha^2 for sigma_i << alpha. This means that if we choose alpha in [sigma_N, sigma_1], then phi_i ~ 1 for small indices i, while phi_i ~ sigma_i^2/alpha^2 for large i. The "breakpoint" at which the filter factors change nature is at the index for which sigma_i ~ alpha; Figure 6.1 illustrates this point. We also see that there is no point in choosing alpha outside the interval [sigma_N, sigma_1].

Figure 6.1. The Tikhonov filter factors phi_i = sigma_i^2/(sigma_i^2 + alpha^2) versus sigma_i, for three different values of the regularization parameter alpha.

6.2 Implementation of Filtering Methods

In Chapter 4 we described various kinds of structured matrices that arise in image deblurring problems, and we showed how to efficiently compute the SVD or spectral decomposition for such matrices. We also showed how to efficiently compute the naive solution; recall VIPs 10, 11, and 12. If we assume that all of the singular values of A are nonzero, then the naive solution can be written as

  x = V Sigma^{-1} U^T b.   (6.5)

Similarly, the spectral filter solution, if it exists, can be written as

  x_filt = V Phi Sigma^{-1} U^T b,   (6.6)

where Phi is a diagonal matrix consisting of the filter factors phi_i for the particular method (e.g., 1's and 0's for TSVD, and sigma_i^2/(sigma_i^2 + alpha^2) for Tikhonov). Indeed, we can rewrite (6.6) as x_filt = V Sigma_Phi^{-1} U^T b, where Sigma_Phi^{-1} = Phi Sigma^{-1}. Relations analogous to (6.5) and (6.6) can be written in terms of the spectral decomposition. Since the expression (6.6) for x_filt is only a slight modification of (6.5), it is not difficult to efficiently implement filtering methods for the structured matrices of Chapter 4. Thus, if the filter factors are given, then it is simple to amend VIPs 10, 11, and 12 with implementations to compute x_filt.

For many structured matrices, x_filt can be computed efficiently, as illustrated in the following VIP.

VIP 19.
- Given:
    P      = PSF
    center = [row, col] = center of PSF
    B      = blurred image
    BC     = string denoting boundary condition (e.g., 'zero')
    Phi    = filter factors
- For periodic boundary conditions, use
    S = fft2( circshift(P, 1-center) );
    Sfilt = Phi ./ S;
    Xfilt = real( ifft2( fft2(B) .* Sfilt ) );
- For reflexive boundary conditions, with doubly symmetric PSF, use
    e1 = zeros(size(P));  e1(1,1) = 1;
    S = dct2( dctshift(P, center) ) ./ dct2(e1);
    Sfilt = Phi ./ S;
    Xfilt = idct2( dct2(B) .* Sfilt );
- For a separable PSF, use
    [Ar, Ac] = kronDecomp(P, center, BC);
    [Ur, Sr, Vr] = svd(Ar);
    [Uc, Sc, Vc] = svd(Ac);
    S = diag(Sc) * diag(Sr)';
    Sfilt = Phi ./ S;
    Xfilt = Vc * ( (Uc' * B * Ur) .* Sfilt ) * Vr';

The filter factors are computed from the singular (spectral) values. For example, in the case of TSVD we might specify a tolerance below which all singular (spectral) values are truncated. In this case the filter factors can be computed very easily in MATLAB as

    phi = (abs(S) >= tol);

Note that the use of abs is necessary in the case when FFTs are used. In the case of Tikhonov regularization, we could specify a value for alpha and compute the filter factors as

    phi = abs(S).^2 ./ (abs(S).^2 + alpha^2);

We have said very little about how we choose the parameter k for TSVD or alpha for the Tikhonov method, except that the TSVD truncation parameter should satisfy 1 <= k <= N and the Tikhonov regularization parameter should satisfy sigma_N <= alpha <= sigma_1. Later we discuss "automatic" methods for choosing these parameters, but for now we can try to choose them experimentally. By experimenting with various values of tol (or alpha) and displaying the computed filtered solution, we can see the effects of regularization.
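As an end-to-end example, the periodic-boundary-condition recipe in VIP 19 combined with Tikhonov filter factors looks as follows (P, center, B, and alpha are assumed to be defined as above):

  % Tikhonov-filtered solution with periodic boundary conditions (cf. VIP 19).
  S     = fft2(circshift(P, 1 - center));
  Phi   = abs(S).^2 ./ (abs(S).^2 + alpha^2);
  Sfilt = Phi ./ S;
  Xfilt = real(ifft2(fft2(B) .* Sfilt));
  imshow(Xfilt, [0 255])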

Figure 6.2. The original and blurred pumpkin images (top), the Tikhonov reconstruction (bottom left), and the TSVD reconstruction (bottom right). We used reflexive boundary conditions, and the regularization parameter was chosen to make the picture most pleasing to the eye.

We illustrate in Figure 6.2 the results of the TSVD and Tikhonov methods on the blurred image of pumpkins from Chapter 1. The regularization parameters were chosen to give the visually most pleasing reconstruction. Both methods recover some detail from the blurred image, but TSVD tends to produce slightly more graininess for this image.

We close this section with a remark about computing the quantity Sfilt = Phi ./ S. If some of the singular (spectral) values in S are zero, then MATLAB will produce a "divide by zero" warning, and some values of Sfilt will be set to Inf or to NaN. We can avoid this unwanted situation by performing the computation only for nonzero values of S. To do this, we could, for example, use a logical array idx = (S ~= 0) that has values 1 for nonzero entries of S and 0 elsewhere. This array signals which divisions are to be performed, as illustrated in the following VIP.

VIP 20. Practical implementations of filtering methods should avoid possible divisions by zero. This can be done by using

    idx = (S ~= 0);
    Sfilt = zeros(size(Phi));
    Sfilt(idx) = Phi(idx) ./ S(idx);

instead of the direct computation Sfilt = Phi ./ S used in VIP 19.

CHALLENGE 13. We return to the image in Challenge 2 and the blurring matrix in challenge2.mat. Use both the Tikhonov and the TSVD methods on this problem. Try various choices of k and alpha and determine which choices give the clearest solution image.

CHALLENGE 14. Deblurring Walk-Through, Part II. We return to the deblurring problem from Challenge 12 with out-of-focus blur and reflexive boundary conditions. For one or both images use the Tikhonov method implemented in tik_dct to compute and display reconstructions for selected values of the regularization parameter alpha. Note the following hints:
1. The original images have pixel values in the range [0, 255].
2. Due to the reconstruction algorithm, some pixels in the reconstructions lie outside this range, leading to an image with different contrast than that in the original image.
3. If you display a reconstruction Xfilt using the call imshow(Xfilt, []) or imagesc(Xfilt), then MATLAB uses a color scale that maps min(Xfilt(:)) to black and max(Xfilt(:)) to white.
4. To display the reconstruction with the same contrast as the original image, use imshow(Xfilt, [0 255]) or imagesc(Xfilt, [0 255]).
We return to this deblurring problem in Challenges 15 and 18.

6.3 Regularization Errors and Perturbation Errors

In order to better understand the mechanisms and regularizing properties of the spectral filtering methods, we now take a closer look at the errors in the regularized solution x_filt computed by means of spectral filtering. We observed in (6.6) that we can write x_filt for both the TSVD solution and the Tikhonov solution in terms of the SVD. Equipped with this formulation, we can now easily separate the two different types of errors in a regularized solution. Specifically, we have

  x_exact - x_filt = (I_N - V Phi V^T) x_exact - V Phi Sigma^{-1} U^T e,   (6.7)

and therefore the error consists of two contributions with different origins.

- Regularization Error. The first component (I_N - V Phi V^T) x_exact is the regularization error, which is caused by using a regularized inverse matrix V Phi Sigma^{-1} U^T (instead of the inverse A^{-1} = V Sigma^{-1} U^T) in order to obtain the filtering. The matrix V Phi V^T describes the mapping between the exact solution and the filtered solution x_filt. The closer Phi is to the identity, the smaller the regularization error. If Phi = I_N, then the regularization error is zero, since V V^T = I_N.

- Perturbation Error. The second component V Phi Sigma^{-1} U^T e is the perturbation error, which consists of the inverted and filtered noise. If Phi = 0, then the perturbation error is zero.

Changes in the regularization parameter change Phi and the two kinds of errors. When too many filter factors phi_i are close to one, then the regularization error is small, but the perturbation error is large because inverted noise enters the solution; we say that the solution is undersmoothed. On the other hand, when too few filter factors are close to one, i.e., when most of the filter factors are small (or zero), then the inverted noise is heavily damped (the perturbation error is small), but the regularization error is large; the solution is oversmoothed. A proper choice of the regularization parameter balances the two types of errors.

The reason we are able to compute regularized approximations to the exact solution, in spite of the large condition number, is that the spectral filtering is able to suppress much of the inverted noise while, at the same time, keeping the regularization error small. This is possible because the image deblurring problem satisfies the discrete Picard condition defined in the previous chapter. As a consequence, the exact right-hand side exhibits decaying expansion coefficients when expressed in the spectral basis, and the noise affects primarily the high-frequency components which are associated with the smaller singular values and which are damped by the spectral filtering method. What is left in the regularized solution is primarily the more low-frequency SVD components associated with the larger singular values, which is consistent with the observation made from Figures 5.5 and 5.6.

VIP 21. Regularization by means of spectral filtering requires finding a suitable balance between the regularization error and the perturbation error by choosing the filter factors appropriately.

Figure 6.3 illustrates how the norms of the regularization error and the perturbation error vary with the regularization parameter. The problem is the same as in Figure 5.6, and we use TSVD as the regularization method. We see that the two types of errors are balanced for k ~ 200.
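For a small test problem where x_exact, the noise e, and the SVD of A are all available, the curves in Figure 6.3 can be reproduced along the following lines (variable names are ours):

  % Norms of the TSVD regularization and perturbation errors versus k.
  [U, S, V] = svd(A);
  s  = diag(S);  N = length(s);
  bh = U' * (A * x_exact);            % exact right-hand side coefficients
  eh = U' * e;                        % noise coefficients
  reg_err = zeros(N,1);  pert_err = zeros(N,1);
  for k = 1:N
      reg_err(k)  = norm(bh(k+1:N) ./ s(k+1:N));   % ||(I - V Phi V') x_exact||_2
      pert_err(k) = norm(eh(1:k) ./ s(1:k));       % ||V Phi Sigma^{-1} U' e||_2
  end
  semilogy(1:N, reg_err, 1:N, pert_err)
  legend('regularization error', 'perturbation error')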

Figure 6.3. The 2-norms of the regularization error (I_N - V Phi V^T) x_exact and the perturbation error V Phi Sigma^{-1} U^T e versus the truncation parameter k for the TSVD method.

To further understand this, let us consider the norm of the regularization error. Since x_exact = A^{-1} b_exact = V Sigma^{-1} U^T b_exact, and since ||V^T y||_2 = ||y||_2, we obtain

  ||(I_N - V Phi V^T) x_exact||_2 = ||(I_N - Phi) Sigma^{-1} U^T b_exact||_2.

Now recall that, due to the discrete Picard condition, the coefficients |u_i^T b_exact / sigma_i| decay (on average). Since the first filter factors phi_i (for i = 1, 2, ...) are close to one, the factors (1 - phi_i) damp the contributions from the larger coefficients u_i^T b_exact / sigma_i. Moreover, the small filter factors phi_i (for i = N, N-1, ...) correspond to factors (1 - phi_i) close to one, which are multiplied by small coefficients |u_i^T b_exact / sigma_i|. Hence we conclude that if the filters are suitably chosen, then the norm of the regularization error will not be large.

6.4 Parameter Choice Methods

Choosing the regularization parameter for an ill-posed problem is an art based on a combination of good heuristics and prior knowledge of the noise in the observations. We describe three important parameter choice methods: the discrepancy principle, generalized cross validation, and the L-curve criterion. For the discussion below, it is useful to have formulas for the norm of the spectral filtering solution,

  ||x_filt||_2^2 = sum_{i=1}^{N} (phi_i u_i^T b / sigma_i)^2,   (6.8)

and the norm of the residual,

  ||b - A x_filt||_2^2 = sum_{i=1}^{N} ((1 - phi_i) u_i^T b)^2.   (6.9)

Note that for the TSVD method, the norm of the solution x_filt = x_k is a monotonically nondecreasing function of k, while the residual norm is monotonically nonincreasing. Similarly, for the Tikhonov method, the norm of the solution x_filt = x_alpha is a monotonically nonincreasing function of alpha, while the residual norm is monotonically nondecreasing.

The Discrepancy Principle [42]. If we have a good estimate of delta, the expected value of ||e||_2 (the error in the observations b), then the regularization parameter should be chosen so that the norm of the residual is approximately delta. Hence, we choose a regularized solution x_filt so that

  ||b - A x_filt||_2 = tau * delta,   (6.10)

where tau > 1 is some predetermined real number. (Common choices are 2 <= tau <= 5.) This choice relies on having a good estimate of delta; other methods based on knowledge of the variance are given, for example, in [5, 19]. Note that as delta -> 0, the filtered solution satisfies x_filt -> x_exact. Since the residual norm is a monotonic function of our regularization parameter k or alpha, we can systematically try different values in (6.9) in order to find a parameter k or alpha to closely satisfy (6.10). Once the SVD, the filter factors, and the vector U^T b have been computed, the cost is 2N multiplications and additions for each trial to compute the residual norm.

Generalized Cross Validation (GCV) [17]. This parameter choice method arises from the principle that if we omit a data value, then a good choice of the regularization parameter should be able to predict the missing data point well. The parameter is chosen to make the predictions as good as possible. In contrast to the discrepancy principle, the parameter choice in GCV does not depend on a priori knowledge about the noise variance. After a considerable amount of clever matrix manipulation, it can be shown that according to this principle, the best parameter for our spectral filtering methods minimizes the GCV functional

  G(alpha) = ||(I_N - A V Phi Sigma^{-1} U^T) b||_2^2 / ( trace(I_N - A V Phi Sigma^{-1} U^T) )^2,   (6.11)

where alpha is the Tikhonov parameter or, abusing notation, alpha = 1/k, where k is the TSVD cutoff. GCV determines the parameter alpha that minimizes the GCV function. Although (6.11) is rather formidable, it is actually quite easy to evaluate. The numerator is just ||(I_N - A V Phi Sigma^{-1} U^T) b||_2^2 = ||b - A x_filt||_2^2, for which we already have a formula. We evaluate the denominator by noting that V Phi Sigma^{-1} U^T is the matrix that maps the right-hand side b onto the regularized solution x_filt, that the trace of a matrix is the sum of its main diagonal elements, and that the trace is invariant under orthogonal transformation, so the trace in the denominator equals N - sum_{i=1}^{N} phi_i.
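A minimal sketch of a discrepancy-principle search for the TSVD parameter; delta is an estimate of ||e||_2, tau = 2 is one of the common choices above, and the SVD quantities U and s are assumed available.

  % Smallest TSVD truncation parameter whose residual norm is below tau*delta.
  bh  = U' * b;
  N   = length(s);
  tau = 2;
  res2 = flipud(cumsum(flipud(bh.^2)));   % res2(k+1) = sum_{i>k} (u_i' b)^2
  k_dp = N - 1;
  for k = 1:N-1
      if sqrt(res2(k+1)) <= tau*delta
          k_dp = k;  break
      end
  end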

GCV functions for the problem of Figure 6.3 are shown in Figure 6.4.

Figure 6.4. The GCV functions G(k) = ||b - A x_k||_2^2/(N - k)^2 for TSVD (left) and G(alpha) given by (6.11) for Tikhonov regularization (right), applied to the same problem as in Figure 6.3.

The L-Curve Criterion [20, 36]. The L-curve is a log-log plot of the norm of the regularized solution versus the corresponding residual norm for each of a set of regularization parameter values. This plot often is in the shape of the letter L, from which it draws its name; the log-log scale emphasizes the L shape. Figure 6.5 shows the L-curve for the TSVD method applied to the same problem as in Figure 6.3. Intuitively, the best regularization parameter should lie at the corner of the L, since for values higher than this, the residual increases rapidly while the norm of the solution decreases only slowly, and for values smaller than this, the norm of the solution increases rapidly without much decrease in the residual. Hence, we expect a solution near the corner to balance the regularization and perturbation errors. In practice, only a few points on the L-curve need to be computed, and the corner is located by estimating the point of maximum curvature [24]. Computing a point on the L-curve costs only 3N multiplications and additions and N divisions using (6.8) and (6.9).

Which choice is best? Choosing an appropriate regularization parameter is very difficult. Every parameter choice method, including the three we discussed, has severe flaws: either they require more information than is usually available, or they fail to converge to the true solution as the error norm goes to zero. (For further discussion and references about parameter choice methods, see [10, 22, 38].)

- The discrepancy principle is convergent as the noise goes to zero, but it relies on information that is often unavailable or erroneous. Even with a correct estimate of the variance, the solutions tend to be oversmoothed [29]. (See also the discussion in [20].)

- For GCV, the solution estimates fail to converge to the true solution as the error norm goes to zero [12]. Another noted difficulty with GCV is that the graph for G can be very flat near the minimizer, so that numerical methods have difficulty in determining a good value of alpha [60].

- The L-curve criterion is usually more tractable numerically, but its limiting properties are far from ideal. Specifically, the solution estimates fail to converge to the true solution as N -> infinity [61] or as the error norm goes to zero [12].

Figure 6.5. The L-curve for TSVD with parameter k applied to the same problem as in Figure 6.3. The L-curve for Tikhonov regularization with parameter alpha, i.e., a log-log plot of the solution norm ||x_filt||_2 versus the residual norm ||b - A x_filt||_2, would look similar.

VIP 22. No parameter choice method is perfect, and choosing among the discrepancy principle, GCV, the L-curve criterion, and other methods is dependent on what information is available about the problem.

6.5 Implementation of GCV

To use GCV to obtain an estimate of a regularization parameter, we must find the minimum of the function G(alpha) given in (6.11). In order to evaluate this function efficiently, some algebraic simplification is helpful. In the case in which we are using the SVD,

we obtain

  G(alpha) = [ sum_{i=1}^{N} ((1 - phi_i) u_i^T b)^2 ] / [ N - sum_{i=1}^{N} phi_i ]^2,

where, in the implementations below, s = diag(Sigma) and bhat = U^T b. A similar simplification can be done for spectral decompositions, but for now we limit our discussion to the SVD. Observe that since the 2-norm is invariant under orthogonal transformation, ||b - A x_filt||_2 = ||U^T(b - A x_filt)||_2, so we can work in the coordinates of the SVD. Consider now specific regularization methods.

- GCV for TSVD. In the case of TSVD, the expression for G(k) can be simplified further, since the filter factors are exactly zero or one. Note that this is a discrete function. The truncation parameter is found by evaluating G(k) for k = 1, 2, ..., N - 1, and finding the index at which G(k) attains its minimum.

- GCV for Tikhonov. In the case of Tikhonov filtering, the expression for G(alpha) is a continuous function of alpha. To find the minimum of this continuous function we can use MATLAB's built-in routine fminbnd. For example, suppose we implement the GCV function as

    function G = GCV(alpha, bhat, s)
    %  where bhat = U'*b and s = diag(Sigma)
    phi_d = 1 ./ (s.^2 + alpha^2);
    G = sum((bhat .* phi_d).^2) / (sum(phi_d)^2);

Then the "optimal" alpha can be found using

    alpha = fminbnd(@GCV, min(s), max(s), [], bhat, s);

If the spectral decomposition is used instead of the SVD, then some care must be taken in the implementation. Specifically, the values in S and bhat may be complex, and so absolute values must be included with the squaring operations. Further details on the implementations for the various structured matrices considered in Chapter 4 can be found in the program listings gcv_tik (Appendix 2) and gcv_tsvd (Appendix 1).
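For completeness, here is a sketch of the discrete GCV evaluation for TSVD together with the points on the corresponding L-curve, both computed from the SVD quantities bhat and s.

  % GCV for TSVD, and points on the TSVD L-curve.
  bhat = U' * b;
  N    = length(s);
  G = zeros(N-1,1);  sol_norm = zeros(N-1,1);  res_norm = zeros(N-1,1);
  for k = 1:N-1
      res_norm(k) = norm(bhat(k+1:N));           % ||b - A x_k||_2
      sol_norm(k) = norm(bhat(1:k) ./ s(1:k));   % ||x_k||_2
      G(k) = res_norm(k)^2 / (N - k)^2;          % GCV function for TSVD
  end
  [~, k_gcv] = min(G);
  loglog(res_norm, sol_norm, '.')                % the L-curve
  xlabel('residual norm'), ylabel('solution norm')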

CHALLENGE 15. Deblurring Walk-Through, Part III. Once again we return to the problem from Challenges 12 and 14. For one or both images, compute the GCV function G(alpha) for the deblurring problem using reflexive boundary conditions and Tikhonov regularization. You should use the techniques discussed in this section to efficiently compute the GCV function. Keep in mind that for the algorithms based on the two-dimensional DCT, the SVD basis is replaced by the DCT (spectral) basis. Hence, you should replace u_i^T b and sigma_i with c_i^T b and lambda_i, where c_i^T is the ith row of the two-dimensional DCT matrix C and lambda_i is the ith eigenvalue of the blurring matrix. Then find the alpha that minimizes the GCV function and compute the corresponding reconstruction(s). We return to this deblurring problem in Challenge 18.

6.6 Estimating Noise Levels

Further insight into choosing regularization parameters can be gained by using statistical information to estimate the noise level in the recorded image. Readers not familiar with statistics may wish to skip this section or consult an elementary statistics reference, such as [9, 52].

Let us consider the SVD analysis of the noise and the inverted noise. We first note that the coefficients u_i^T b in the spectral expansion are the elements of the vector U^T b. Let us now assume that the elements of the vector e are statistically independent, with zero mean and identical standard deviation eta > 0 (i.e., white noise). Then the expected value of e is the zero vector, while its covariance matrix is a scaled identity matrix. It then follows from elementary statistics that the expected value of the vector U^T e is also the zero vector, E(U^T e) = 0, and that the covariance matrix for U^T e is the same scaled identity matrix. Hence the coefficients u_i^T e behave, statistically, like the elements of the noise vector e. From elementary statistics (because E(u_i^T e) = 0) we also obtain the following relation for the expected value of (u_i^T b)^2:

  E((u_i^T b)^2) = (u_i^T b_exact)^2 + eta^2,

and, to first order, |u_i^T b| is therefore dominated by whichever of |u_i^T b_exact| and eta is larger.

POINTER. The following relations hold for white Gaussian noise e in R^N:

  E(||e||_2^2) = N eta^2,   E(||e||_2) = sqrt(2) Gamma((N+1)/2)/Gamma(N/2) * eta.

The factor sqrt(2) Gamma((N+1)/2)/Gamma(N/2) approaches sqrt(N) rapidly; e.g., for N = 100 the factor is 9.975. For more details see, for example, Chapter 18 in [32].

We conclude that for any index i where |u_i^T b_exact| is somewhat larger than eta we have u_i^T b ~ u_i^T b_exact, while E(|u_i^T b|) ~ eta when |u_i^T b_exact| is smaller than eta. Returning to the plot in Figure 5.5, it is now evident that for small indices i the quantities u_i^T b are indeed dominated by the components u_i^T b_exact (which have an overall decreasing behavior), while for larger indices we have u_i^T b ~ u_i^T e, whose statistical behavior is identical to that of e, so that eta ~ |u_i^T b| for large indices i. We have thus explained the overall behavior of the plot.

This insight allows us to estimate the error in our data. Once we have this estimate of the noise, we can, for example, use the discrepancy principle to guide our choice of TSVD or Tikhonov parameter.

POINTER. Quantization noise for images represented as integers is uniformly distributed in the interval [-a, a] with a = 0.5. The following relations hold for such quantization noise e in R^N:

  E(||e||_2^2) = N a^2/3,   E(||e||_2) = gamma_N * a * sqrt(N/3),

where the factor gamma_N satisfies sqrt(3)/2 <= gamma_N < 1 and it approaches 1 rapidly as N increases.

CHALLENGE 16. Again consider the image in Challenge 2 and the blurring matrix in challenge2.mat. Use the discrepancy principle, GCV, and the L-curve criterion to choose regularization parameters for the Tikhonov method and the TSVD method on this problem. Compare the six parameters and the six solution images with the two that you computed in Challenge 13, in which you determined k and alpha to give the clearest solution image.
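A small sketch of the resulting noise-level estimate; the choice of which trailing coefficients are taken to lie on the noise plateau is an assumption that must be adapted to the problem at hand.

  % Estimate the noise level from trailing spectral coefficients.
  bhat = U' * b;
  eta  = sqrt(mean(bhat(end-999:end).^2));   % last 1000 coefficients assumed to
                                             % lie on the noise plateau
  delta = eta * sqrt(length(bhat));          % estimate of ||e||_2 for use in the
                                             % discrepancy principle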

CHALLENGE 17. Use the approach described in Section 4.6 to create a blurred, noisy image with a Gaussian PSF. The functions

  tik_dct   tik_fft   tik_sep   tsvd_dct   tsvd_fft   tsvd_sep,

provided in Appendix 1 and Appendix 2 and available at the book's website, can be used to compute the Tikhonov and TSVD solutions with the various boundary conditions. Try each of these methods. In each case, use the default (GCV-chosen) regularization parameter, but also experiment with other values of the regularization parameter. In the case of tik_sep and tsvd_sep, experiment with different boundary conditions. For each method compare the quality of the reconstructed images and the time required to compute them. Why would you expect tik_dct and tik_sep with reflexive boundary conditions to compute similar results? Similarly, why would you expect tik_fft and tik_sep with periodic boundary conditions to compute similar results? Try a problem that has significant boundaries, as well as one that has zero boundaries. Which method would you recommend using for these particular problems?

CHALLENGE 18. Deblurring Walk-Through, Part IV. For the last time we return to the problem from Challenges 12, 14, and 15. For one or both images, determine the parameter alpha that satisfies the discrepancy principle (6.10) with tau = 2, and compute the corresponding reconstruction(s). Compare with the GCV-based reconstruction. To compute the estimate delta of the norm ||e||_2 of the errors in the data, we can assume that quantization errors dominate; hence we can use delta = a sqrt(mn/3). To determine alpha, compute the norm of the Tikhonov residual ||b - A x_filt||_2; the residual norm can be computed via the same technique as used for the GCV function.

Chapter 7
Color Images, Smoothing Norms, and Other Topics

Originality implies being bold enough to go beyond accepted norms. - Anthony Storr

The purpose of this chapter is to introduce some more advanced topics related to spectral filtering methods. We start in Section 7.1 with a discussion of the treatment of color images, and we show how to deblur color images with the techniques introduced so far, with only minor changes. Next we show in Section 7.2 how to expand the use of the spectral filtering methods to more general regularization problems, where we incorporate a regularization term based on a smoothing norm, i.e., a norm that involves partial derivatives of the image. Section 7.3 discusses some computational issues when working with partial derivatives. We extend this minimization framework to other norms, including total variation techniques, in Section 7.4. In Section 7.5 we discuss computational methods for use when either the size of the image or the properties of the PSF make it too expensive to use our spectral methods. Finally, in Section 7.6 we discuss situations in which the blurring function is not known exactly.

7.1 A Blurring Model for Color Images

As explained in Chapter 1, a digital color image is represented by a three-dimensional array X of size m x n x 3, in which the three two-dimensional arrays X(:,:,1), X(:,:,2), and X(:,:,3) represent the color information. The three arrays of the color image are often referred to as channels, and they often represent the intensities in the red, green, and blue scales. Figure 7.1 shows a color image and its three color channels.

To record a digital color image, one needs to filter the incoming light according to these colors before it is recorded. This can be done by first splitting the light beam in three directions, then filtering each of the three new beams by red, green, and blue filters, and finally recording these three color images on three different CCDs (charge-coupled devices). An alternative is to use a single CCD, in front of which is placed a filter mask with a red/green/blue pattern so that light in the different colors is recorded by different sensors on the same CCD. This filter, called a Bayer pattern, is illustrated in the left part of Figure 7.2.

Half of the sensors measure green intensities, while the others are evenly divided between red and blue.

Figure 7.1. A color image and its three color layers or channels.

Figure 7.2. Left: the Bayer pattern of sensors for recording color images on a single CCD. Right: the situation when the Bayer pattern is misaligned with respect to the CCD.

In either technique for recording color images, in addition to the within-channel blurring of each of the three color layers, there is also a cross-channel blurring among these layers. For example, in the single-CCD approach, there is a possibility that light from one color channel (red, green, or blue) ends up on a pixel assigned to another color. This occurs if the Bayer pattern is misaligned with respect to the CCD, as shown in the right part of Figure 7.2. Due to the misalignment, a sensor meant to record green might be partially covered by the red mask instead of the green mask, so that it would be affected by red intensity as well as green.

Assume that the color blurring (the cross-channel blurring) takes place after the optical blurring (the within-channel blurring) of the image. Let Br, Bg, and Bb denote the grayscale images that constitute the three channels of the blurred color image B. Then for each

Then for each pixel, the red, green, and blue data vector that would be observed without the cross-channel blurring is multiplied by the matrix

   A_color = [ a_rr  a_rg  a_rb
               a_gr  a_gg  a_gb
               a_br  a_bg  a_bb ].

This matrix models the cross-channel blurring. Let us assume that the blurring is spatially invariant. This implies that the 3 × 3 cross-channel blurring matrix is the same for all pixels, and without loss of generality we can assume that each row sums to one. In addition, we also assume that we have the same within-channel blurring (i.e., the same PSF) in all three channels; hence the optical blurring is modeled by A vec(Xr), A vec(Xg), and A vec(Xb). Then the model for the color blurring takes the block form

   [ vec(Br) ]   [ a_rr A   a_rg A   a_rb A ] [ vec(Xr) ]
   [ vec(Bg) ] = [ a_gr A   a_gg A   a_gb A ] [ vec(Xg) ]     (7.1)
   [ vec(Bb) ]   [ a_br A   a_bg A   a_bb A ] [ vec(Xb) ]

or, using Kronecker product notation,

   (A_color ⊗ A) x = b,

where, for color images, we define the "stacked" color images b and x by

   b = [ vec(Br) ; vec(Bg) ; vec(Bb) ]   and   x = [ vec(Xr) ; vec(Xg) ; vec(Xb) ].     (7.2)

Figure 7.3 shows two blurred color images, one with within-channel blurring only and one with both within-channel and cross-channel blurring.

Figure 7.3. Two types of blurred color images. Left: within-channel blurring only. Right: both within-channel blurring and cross-channel blurring with (a_rr, a_rg, a_rb) = (0.7, 0.15, 0.15), (a_gr, a_gg, a_gb) = (0.25, 0.5, 0.25), and (a_br, a_bg, a_bb) = (0.15, 0.1, 0.75). Note how some of the color information is absent from the latter image.
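As a small numerical illustration of the model (7.1)-(7.2), a sketch under our own assumptions rather than code from the book, the following MATLAB lines apply a within-channel BCCB blur to each channel with the FFT and then mix the channels pixelwise with A_color; the test image, the Gaussian PSF, and the particular entries of A_color are all assumed here.

% Sketch of the color blurring model (7.1)-(7.2).  The image, PSF, and the
% entries of Acolor are assumed for illustration; periodic boundary
% conditions are used so that the within-channel blur is a BCCB matrix.
X = im2double(imread('peppers.png'));          % assumed m x n x 3 test image
[m, n, ~] = size(X);
PSF = fspecial('gaussian', [m n], 5);          % assumed Gaussian PSF
center = [floor(m/2)+1, floor(n/2)+1];         % assumed PSF center
S = fft2(circshift(PSF, 1 - center));          % eigenvalues of the BCCB matrix A

Acolor = [0.70 0.15 0.15; 0.25 0.50 0.25; 0.15 0.10 0.75];   % rows sum to one

AX = zeros(m, n, 3);                           % within-channel blur, A*vec(Xk)
for k = 1:3
    AX(:,:,k) = real(ifft2(S .* fft2(X(:,:,k))));
end
B = zeros(m, n, 3);                            % cross-channel mixing by Acolor
for k = 1:3
    B(:,:,k) = Acolor(k,1)*AX(:,:,1) + Acolor(k,2)*AX(:,:,2) + Acolor(k,3)*AX(:,:,3);
end
% With stacked vectors this is exactly b = kron(Acolor, A) * x, as in (7.2).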

POINTER. The technology associated with digital color imaging—including color fundamentals, color recording systems, and color image processing—is surveyed in [53].

In particular, no color blurring arises if A_color = I_3, while, at the other extreme, if all entries of A_color are equal to 1/3 (so that each row still sums to one), then the resulting image is grayscale only.

VIP 23. Spatially invariant cross-channel (i.e., color) blurring is described by a 3 × 3 matrix A_color with nonnegative elements. Deblurring of color images is done according to the color blurring model. If only within-channel blurring is present, then three independent deblurring problems are solved; otherwise the combined problem in (7.1) is solved.

7.2 Tikhonov Regularization Revisited

In Chapter 6 we mentioned that the Tikhonov solution is related to the minimization problem

   min_x { ||b − A x||_2^2 + α^2 ||x||_2^2 },

whose solution has the form

   x_α = (A^T A + α^2 I)^{-1} A^T b.

The above minimization problem turns out to be a special instance of a more general approach to image deblurring (in fact, a more general approach to the regularization of a wide class of inverse problems). This general Tikhonov regularization or damped least squares method takes the form

   min_x { ||b − A x||_2^2 + α^2 ||D x||_2^2 },     (7.3)

where D is a carefully chosen regularization matrix, often an approximation to a derivative operator.

The interpretation of this minimization problem is that we seek a regularized solution that balances the size of two different terms:

• The first term, ||b − A x||_2^2, is the square of the residual norm, and it measures the goodness-of-fit of the solution x. If the residual is too large, then A x does not fit the data b very well; if the residual is too small, then it is very likely that x is influenced too much by the noise in the data.

If α is too small, then we put too much emphasis on the first term, and x will be influenced too much by the noise in the data. On the other hand, if α is too large, then we put too much emphasis on the second term and thus obtain a very smooth solution with too few details.

• The second term, ||D x||_2^2, is called the regularization term and involves a smoothing norm. This norm is the 2-norm of the solution when D = I_N. We choose D so that the regularization term is small when x matches our expectations of how a good quality solution should behave. We saw in Chapters 1 and 6 that it is the inverted noise that destroys the quality of the reconstruction, and therefore D should be chosen so that the regularization term is large when the reconstruction contains a large component of inverted noise.

The factor α^2 controls the balance between the minimization of these two quantities, and we saw in Chapter 6 that there are several methods available for choosing this factor, while the choice of D is discussed in Section 7.3.

VIP 24. Image deblurring can be formulated as an optimization problem of the form (7.3), where the goal is to control the size of both the goodness-of-fit term, ||b − A x||_2^2, and the regularization term, ||D x||_2^2.

We now turn our attention to algorithmic considerations. For the development of efficient algorithms for solving the Tikhonov problem, it is important to realize that (7.3) is, indeed, merely a linear least squares problem in x. To see this, notice that for two vectors y and z,

   ||y||_2^2 + ||z||_2^2 = || [ y ; z ] ||_2^2.

Hence, we can also write the Tikhonov problem in the form

   min_x || [ A ; α D ] x − [ b ; 0 ] ||_2^2,     (7.4)

showing that it is a linear least squares problem. This is the best formulation for developing algorithms, since we can use numerical least squares algorithms to solve the Tikhonov problem efficiently; see, e.g., [4] and [22]. By setting the derivative of the minimization function in (7.4) to zero, it follows that the problem is mathematically equivalent to solving the normal equations

   (A^T A + α^2 D^T D) x = A^T b,     (7.5)

which is yet another formulation of the Tikhonov problem. From this equation we see that the Tikhonov solution x_{α,D} has a closed-form expression, namely,

   x_{α,D} = (A^T A + α^2 D^T D)^{-1} A^T b.     (7.6)

While (7.5) and (7.6) are convenient for theoretical studies, they are not recommended for numerical computations because of the loss of accuracy involved in working explicitly with the normal equations, cf. [28, Section 20.4] or [4, Section 2.2]. Instead we recommend using solution methods based directly on the least squares formulation (7.4), e.g., using a QR decomposition or an SVD.
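The following small MATLAB sketch (ours, with an assumed test problem) contrasts the recommended least squares formulation (7.4), solved via backslash and a QR factorization, with the normal-equations form (7.5); forming the matrices densely like this is feasible only for tiny problems, not for images.

% Minimal sketch of solving the Tikhonov problem via (7.4) and (7.5).
% The test matrix, exact solution, noise level, and alpha are assumed.
N = 64;
A = gallery('prolate', N, 0.45);                 % ill-conditioned test matrix
x_true = linspace(0, 1, N)'.^2;                  % assumed smooth exact solution
b = A*x_true + 1e-4*randn(N, 1);                 % noisy data
D = diff(eye(N));                                % (N-1) x N first-derivative operator
alpha = 1e-2;                                    % assumed regularization parameter

% Least squares formulation (7.4): backslash uses a QR factorization.
x_ls = [A; alpha*D] \ [b; zeros(N-1, 1)];

% Normal equations (7.5): the same solution in exact arithmetic, but this
% squares the condition number and is therefore not recommended.
x_ne = (A'*A + alpha^2*(D'*D)) \ (A'*b);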

VIP 25. There are three mathematically equivalent formulations, (7.3)-(7.5), of the Tikhonov problem. All three are useful for various theoretical studies, but only the least squares formulation (7.4) is recommended for numerical computations.

7.3 Working with Partial Derivatives

In Chapter 6 we have already discussed the case where the squared 2-norm ||x||_2^2 provides an adequate indication of the "quality" of the solution x. In terms of the general formulation in (7.3), this corresponds to choosing D to be the identity matrix. This is a good choice of regularization term whenever a large amount of inverted noise results in a large value of ||x||_2.

Other popular choices of the matrix D involve approximations to partial derivatives of the solution. The justification is that differentiation is known to magnify high-frequency components, and therefore we can often ensure the computation of a smooth solution by controlling the size of the partial derivatives. Our main challenge in applying this method is to understand how we can compute approximations to the partial derivatives of a digital image. For ease of exposition we restrict ourselves to the case of periodic boundary conditions.

Let z be a vector of length m consisting of equidistant samples of a periodic function z(t) with period 1, i.e., z_i = z(t_i), where t_i = i h and h = 1/m is the grid spacing along the t-axis. Then the quantity h^{-1}(z_{i+1} − z_i) is a finite-difference approximation to the first derivative z'(t) at t = t_i. Similarly, the quantity h^{-2}(z_{i+1} − 2 z_i + z_{i-1}) is a finite-difference approximation to the second derivative z''(t) at t = t_i.

If we define the two m × m banded circulant matrices D_{1,m} and D_{2,m} as the matrices whose rows contain, respectively, the difference stencils [−1 1] and [1 −2 1] (with periodic wraparound in the first and last rows), then the vectors h^{-1} D_{1,m} z and h^{-2} D_{2,m} z contain the finite-difference approximations to z'(t) and z''(t) at the m points t_i.

Table 7.1. Matrix expressions and corresponding computational stencils associated with discrete approximations of derivative operators.

   Approximation        Matrix expression           Stencil
   h x_t                D_{1,m} X                   [ 0  0  0 ;  0 -1  0 ;  0  1  0 ]
   h^2 x_tt             D_{2,m} X                   [ 0  1  0 ;  0 -2  0 ;  0  1  0 ]
   h x_s                X D_{1,n}^T                 [ 0  0  0 ;  0 -1  1 ;  0  0  0 ]
   h^2 x_ss             X D_{2,n}^T                 [ 0  0  0 ;  1 -2  1 ;  0  0  0 ]
   h^2 (x_tt + x_ss)    D_{2,m} X + X D_{2,n}^T     [ 0  1  0 ;  1 -4  1 ;  0  1  0 ]

In order to apply these ideas to images, we consider the pixels of the m × n digital image X as samples on an equidistant grid (with grid size h × h) of a two-dimensional periodic function x. We let s denote the variable in the horizontal direction and let t denote the variable in the vertical direction. Now recall that we can always write X as a collection of rows or columns, where each column vector consists of a column of pixels in the image, and each row is a line of pixels in the image, chosen such that x_{ij} = x(i h, j h).

Then for q = 1, 2 the matrix h^{-q} D_{q,m} X contains finite-difference approximations to the first or second partial derivative (x_t or x_tt) of the image in the vertical direction, evaluated at the pixel coordinates. Similarly, the matrix h^{-q} X D_{q,n}^T represents a finite-difference approximation to the first or second partial derivative (x_s or x_ss) of the image in the horizontal direction. These derivatives can also be used in combination; for example, we see immediately that the matrix h^{-2}(D_{2,m} X + X D_{2,n}^T) corresponds to the Laplacian x_tt + x_ss of the image.

By writing out the elements of the matrices mentioned above we recognize the well-known 3 × 3 computational stencils listed in Table 7.1 for computing the discrete approximations to the five derivative operators. (A stencil is applied to the image through a convolution operation in the same way as a PSF array.) An example of the use of these stencils is shown in Figure 7.4.
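To see the correspondence between the matrix expressions and the stencils numerically, here is a small sketch of ours (signs and test data assumed) that applies the Laplacian both ways, with periodic boundary conditions, and checks that the results agree.

% Sketch: the Laplacian via difference matrices versus via its 3-by-3 stencil,
% both with periodic boundary conditions.  The test image is assumed.
m = 6;  n = 5;
X = rand(m, n);                                 % small test "image"

e = ones(m, 1);
D2m = spdiags([e -2*e e], [-1 0 1], m, m);      % second-difference matrix
D2m(1, m) = 1;  D2m(m, 1) = 1;                  % periodic wraparound
e = ones(n, 1);
D2n = spdiags([e -2*e e], [-1 0 1], n, n);
D2n(1, n) = 1;  D2n(n, 1) = 1;

L1 = D2m*X + X*D2n';                            % matrix expression from Table 7.1

stencil = [0 1 0; 1 -4 1; 0 1 0];               % Laplacian stencil
L2 = zeros(m, n);
for i = -1:1
    for j = -1:1
        L2 = L2 + stencil(i+2, j+2) * circshift(X, [-i -j]);
    end
end

norm(L1 - L2, 'fro')                            % zero up to rounding errors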

Figure 7.4. The results of applying the partial derivative stencils from Table 7.1 to the test image from Challenge 1. Notice the different grayscales for the images.

In Figure 7.4 we see that the large elements in the matrices with derivatives (D_{q,m} X, etc.) correspond to the edges in the image X.

The stage is now set for incorporating the partial derivatives, via the regularization terms, in the Tikhonov formulation of the image deblurring problems. Using the "vec" and Kronecker product notation from Section 4.4, and the relation ||X||_F = ||vec(X)||_2, it follows that if x = vec(X), then we can write the Frobenius norms of the discretized partial derivatives as h^{-q} times

   || D_{q,m} X ||_F = || (I_n ⊗ D_{q,m}) x ||_2    and    || X D_{q,n}^T ||_F = || (D_{q,n} ⊗ I_m) x ||_2.

Similarly, the Frobenius norm of the discrete Laplacian of the image is h^{-2} times

   || D_{2,m} X + X D_{2,n}^T ||_F = || (I_n ⊗ D_{2,m} + D_{2,n} ⊗ I_m) x ||_2.

We can also form a regularization term as follows:

   || [ I_n ⊗ D_{1,m} ; D_{1,n} ⊗ I_m ] x ||_2;

there is no stencil formulation, similar to those in Table 7.1, for this norm. Note that we can conveniently absorb the factor h^{-q} into the regularization parameter α. The different choices of D are summarized in Table 7.2.

VIP 26. Working with smoothing norms of the form ||D x||_2 (instead of ||x||_2) adds only a minor overhead to the problem.
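These vec/Kronecker identities are easy to check numerically; the following few lines (ours, with small assumed dimensions) verify them for the periodic first-difference matrices.

% Quick numerical check of the vec/Kronecker identities used above.
m = 4;  n = 3;
X = rand(m, n);
D1m = diag(-ones(m,1)) + diag(ones(m-1,1), 1);  D1m(m,1) = 1;   % periodic D_{1,m}
D1n = diag(-ones(n,1)) + diag(ones(n-1,1), 1);  D1n(n,1) = 1;   % periodic D_{1,n}

norm( reshape(D1m*X,  [], 1) - kron(eye(n), D1m)*X(:) )   % vec(D*X)   = (I kron D) vec(X)
norm( reshape(X*D1n', [], 1) - kron(D1n, eye(m))*X(:) )   % vec(X*D^T) = (D kron I) vec(X)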


Table 7.2. Matrices used in computing (scaled) derivative norms corresponding to different choices of D (periodic boundary conditions). The diagonal matrix Λ is used in efficient implementation of the methods.

POINTER. For periodic boundary conditions the two-dimensional DFT matrix F = F_r ⊗ F_c diagonalizes the matrix A, i.e., A = F* Λ_A F, where the diagonal matrix Λ_A contains the eigenvalues of A. The eigenvalues λ_{q,j} of the circulant matrices D_{q,m} are obtained by applying the DFT to their first columns; for example,

   λ_{1,j} = e^{2πi(j−1)/m} − 1    and    λ_{2,j} = 2 cos(2π(j−1)/m) − 2,    j = 1, …, m,

in which i = √−1 denotes the imaginary unit.

For all four choices of D we can write the Tikhonov solution in the form

   x_{α,D} = F* Φ Λ_A^{−1} F b,    Φ = |Λ_A|^2 ( |Λ_A|^2 + α^2 Λ )^{−1},     (7.9)

where the filtering matrix Φ is diagonal. Here, the diagonal matrix Λ takes one of the four forms shown in the last column of Table 7.2. The derivation of (7.9) can be found at the book's website.
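A minimal MATLAB sketch of evaluating (7.9) with two-dimensional FFTs, assuming periodic boundary conditions; the blurred image B, the PSF array and its center, and the parameter alpha are assumed to be given, and D is taken to be the discrete Laplacian so that Λ holds the squared magnitudes of its eigenvalues.

% Sketch of the spectral Tikhonov solution (7.9) with D = discrete Laplacian.
% B, PSF, center, and alpha are assumed inputs; boundary conditions are periodic.
S = fft2(circshift(PSF, 1 - center));        % eigenvalues Lambda_A of A
[m, n] = size(B);

lap = zeros(m, n);                           % Laplacian stencil wrapped into the corner
lap(1,1) = -4;  lap(2,1) = 1;  lap(m,1) = 1;
lap(1,2) = 1;   lap(1,n) = 1;
LD = fft2(lap);                              % eigenvalues of the Laplacian matrix D

bhat = fft2(B);
Phi  = abs(S).^2 ./ (abs(S).^2 + alpha^2 * abs(LD).^2);   % diagonal filter matrix
Xhat = Phi .* bhat ./ S;
Xhat(S == 0) = 0;                            % guard against division by zero
X = real(ifft2(Xhat));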


POINTER. In the image processing literature, (7.9) is referred to as Wiener filtering [26]. The quantity α^2 Λ represents the noise-to-signal power, i.e., the power spectrum of the noise E divided by the power spectrum of the exact image X. Wiener filtering is available in the IPT as the function deconvwnr.

POINTER. The IPT includes a function deconvreg that implements the algorithm described in this section for periodic boundary conditions. The user must specify information about the matrix D in the form of a stencil (cf. Table 7.1), and the default stencil is the Laplacian.

POINTER. Filtering using derivative operators and other smoothing norms is useful in, e.g., superresolution [13]. A survey of the use of derivative operators in image restoration can be found in [35].

7.4 Working with Other Smoothing Norms

The Tikhonov formulation in (7.3) is not the end of the story; there are many other possible choices of the regularization term. One way to generalize the regularization term is to replace the smoothing norm ||D x||_2 with the norm ||D x||_p, where || · ||_p is the p-norm defined by

   ||y||_p = ( Σ_i |y_i|^p )^{1/p}.

Usually p satisfies 1 ≤ p ≤ 2, because p > 2 leads to very smooth images that contain few details.

Table 7.3. Illustration of the penalization of a large element by different choices of the norm || · ||_p. Here z = [3, 2, 1, 4]^T and ẑ = [3, 12, 1, 4]^T.

   p              1       1.1     1.2     1.5     2
   ||z||_p^p     10.0    11.1    12.3    17.0    30.0
   ||ẑ||_p^p     20.0    24.3    29.7    57.8   170.0

While this may seem like only a minor change of the problem formulation, the impact on the reconstructed image can be dramatic. The reason is that a smoothing norm ||D x||_p with p < 2 penalizes the large-magnitude elements in the vector D x less than the 2-norm does; and the smaller the p, the less the penalization. Table 7.3 illustrates this point with a small example. Hence, if we use a value of p close to one, then we can allow a larger fraction of not-so-small elements in D x than when using p = 2.

Consider now the image deblurring problem in the form


   min_x { ||b − A x||_2^2 + α^2 ||D x||_p^p },     (7.10)

where D is one of the matrices discussed above. If we use p = 2, then the smoothing term ||D x||_2^2 enforces a dramatic penalization of large elements in the vector D x, which is equivalent to requiring that the partial derivatives must be small everywhere in the image. Thus, this approach favors reconstructions that are very smooth. However, if we use a smoothing norm ||D x||_p with p close to one, then we implicitly allow the partial derivatives to be larger in certain limited regions of the image. This allows the large values of partial derivatives typically found near edges and discontinuities in an image. As a result of using p close to one, we can therefore obtain reconstructed images with better defined edges, because the large partial derivatives associated with the edges have less contribution to the function to be minimized.

We illustrate this with a color-image example with no cross-channel blurring. Each of the three color layers in the image is reconstructed by means of the smoothing norm ||D x||_p^p with the matrix

   D = [ I_n ⊗ D_{1,m} ; D_{1,n} ⊗ I_m ],     (7.11)

which corresponds to penalizing the norm of the first-order partial derivatives in the horizontal and vertical directions in the image. Figure 7.5 shows two reconstructions using p = 2 and p = 1.1, respectively. Both reconstructions have sharp edges, but the second is free of the "freckles" that are clearly visible in the 2-norm reconstruction.

The Tikhonov problem based on a smoothing norm ||D x||_p^p with p ≠ 2 is much more expensive computationally. The linear least squares algorithms are no longer directly applicable. However, it is still sometimes possible to make use of the fast methods described in this book. For example, the 1-norm and the ∞-norm formulations lead to linear programming problems, and these problems can be solved by solving a sequence of least squares problems [48]. A similar algorithm applies to various iteratively reweighted least squares problems [40, 45].
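As a rough illustration of the iteratively reweighted least squares idea just mentioned, a sketch of ours and not the specific algorithms of [40] or [45], each iteration replaces ||D x||_p^p by a weighted 2-norm and solves an ordinary Tikhonov least squares problem. The matrices A and D, the data b, and the parameters alpha and p are assumed to be given, and a dense backslash solve is used, so this is only meant for small test problems.

% Sketch of iteratively reweighted least squares (IRLS) for
%   min  ||b - A*x||_2^2 + alpha^2 * ||D*x||_p^p,    1 <= p < 2.
% A, b, D, alpha, and p are assumed given; dense solves for illustration only.
x = [A; alpha*D] \ [b; zeros(size(D,1), 1)];     % start from the 2-norm solution
beta = 1e-4;                                      % smoothing to avoid division by zero
for k = 1:30
    w = (abs(D*x).^2 + beta^2).^((p-2)/4);        % weights chosen so that
    W = spdiags(w, 0, numel(w), numel(w));        %   ||W*D*x||_2^2 approximates ||D*x||_p^p
    x_new = [A; alpha*W*D] \ [b; zeros(size(D,1), 1)];
    if norm(x_new - x) <= 1e-6*norm(x), x = x_new; break, end
    x = x_new;
end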

7.5 Total Variation Deblurring

The case p = 1 (the smallest value of p for which || · ||_p is a norm) has received attention in connection with total variation denoising and deblurring of images, when it is used together with first derivatives. Here we give a brief introduction to this subject, expressed in our matrix terminology. The gradient magnitude y(s, t) of the two-dimensional function x = x(s, t) is a new two-dimensional function, defined as

   y(s, t) = ( x_s(s, t)^2 + x_t(s, t)^2 )^{1/2},

where again we use the notation x_s and x_t to denote, respectively, partial derivatives of x

with respect to s and with respect to t.

Figure 7.5. Comparison of image deblurring using the butterflies.tif color image from Figure 7.1. Both reconstructions were computed using the smoothing norm ||D x||_p^p with the matrix in (7.11). The reconstruction computed with p = 2 has sharp edges, but it also includes a large number of "freckles" with high spatial frequencies. The reconstruction that uses p-norm smoothing with p = 1.1 has no freckles, yet the sharp edges are reconstructed well.

The total variation functional is then defined as

   J_TV(x) = ∫∫ y(s, t) ds dt = ∫∫ ( x_s^2 + x_t^2 )^{1/2} ds dt.

From the example images in Figure 7.4 we see that if the function x(s, t) represents an image, then the total variation functional J_TV(x) is a measure of the accumulated size of the edges in the image.
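Before turning to the matrix formulation below, a few MATLAB lines (ours, with an assumed test image) illustrate how the discrete analogue of J_TV accumulates the sizes of the gradients over all pixels, using periodic forward differences.

% Sketch: discrete total variation of an image, using periodic forward differences.
X  = double(imread('cameraman.tif'));        % assumed grayscale test image
Dt = circshift(X, -1)     - X;               % X(i+1,j) - X(i,j), vertical differences
Ds = circshift(X, [0 -1]) - X;               % X(i,j+1) - X(i,j), horizontal differences
J_TV = sum(sqrt(Dt(:).^2 + Ds(:).^2))        % accumulated size of the edges in X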

POINTER. More details about total variation denoising and deblurring can be found in the books by Chan and Shen [7] and Vogel [62].

In the discrete formulation, the total variation of the image X is given by

   J_TV(X) = Σ_{i,j} ( [D_{1,m} X]_{ij}^2 + [X D_{1,n}^T]_{ij}^2 )^{1/2},

which is a sum of the 2-norms of all the gradients associated with the pixels in the image X. For periodic boundary conditions the circulant matrix D_{1,m} can be used to compute first derivatives, and for reflexive boundary conditions we suggest a first-difference matrix whose rows sum to zero, which ensures that J_TV(X) = 0 when X is the constant image.

The total variation deblurring problem then takes the form of a nonlinear minimization problem

   min_x { ||b − A x||_2^2 + α J_TV(X) },

where the last, nonlinear term J_TV(X) plays the role of the regularization term from the Tikhonov formulation. This minimization problem can be solved by variations of Newton's method.

7.6 Blind Deconvolution

So far we have assumed that the PSF—and therefore the matrix A—is known exactly. In some cases, this is not a good assumption. It arises, for example, when we have an imprecise model for the deblurring; in Section 3.2 we discussed how the PSF might be measured in an experiment, resulting in an imprecise PSF array P and therefore an imprecise blurring matrix A.

Instead of the model

   b = A x + e

we might use

   b = (A + E_A) x + e,

where E_A and e are both unknown. If there is noise/errors in A as well as b, then this should be included in our model. This model has been given the rather unfortunate name of blind deconvolution.

We can generalize our TSVD and Tikhonov solution methods to this more general model. For Tikhonov, instead of solving the regularized least squares minimization problem

   min_x { ||b − A x||_2^2 + α^2 ||D x||_2^2 },

we might solve a problem that also accounts for the perturbation E_A, minimizing a combination of the residual ||b − (A + E_A) x||_2, the regularization term ||D x||_2, and the size of the perturbation ||E_A||_F, over all choices of x and E_A, where the Frobenius norm ||E_A||_F is computed as the square root of the sum of the squares of the elements. This is known as a regularized total least squares problem. If we assume that E_A has the same structure as A (e.g., BTTB), then to our generalized model we add this constraint, reducing the number of unknowns and speeding up the computation but making the solution algorithm more complicated.

POINTER. The classic reference on total least squares problems is a book by Van Huffel and Vandewalle [58]. A discussion of regularization and applications to deblurring can be found in [14, 16]. Fast algorithms for structured problems are considered in [39, 48, 50]. Some blind deconvolution models avoid total least squares by assuming a special parametric form for the PSF; see, for example, [6].

7.7 When Spectral Methods Cannot Be Applied

We know that our deblurring problem can be solved by a fast spectral algorithm when the matrix A has certain special structure, as shown in the table in VIP 13. But what do we do when our problem does not fit into one of these frameworks? For example, suppose that we have zero boundary conditions but the PSF array is not rank-one. Our matrix A is then BTTB, and our fast deblurring algorithms cannot be used.

We do have two very nice properties, though. First, we can form products of A with arbitrary vectors quite fast, as mentioned in VIP 19, and thus we exploit structure in the matrix A. Second, the matrix is closely related to a BCCB matrix or a Kronecker product approximation, and deblurring with these matrices can be done quickly. How can we exploit these properties?

We can apply an iterative method to our deblurring problem, using the related BCCB or Kronecker product matrix as a preconditioner. In such a method, the main work per iteration is multiplication of a vector by A and solution of a linear system involving the preconditioner. A good choice of iterative method is the LSQR algorithm of Paige and Saunders [47]. This method belongs to the family of Krylov subspace methods and is particularly well adapted to problems that need regularization.

POINTER. LSQR is an algorithm due to Paige and Saunders [47]. A good reference on iterative methods in the Krylov subspace family and on preconditioning is the book by Saad [51].

In LSQR, we construct a sequence of approximate solutions x^(j) to the linear system A x = b. After N = mn steps, we have the (noise contaminated) naive solution x = A^{-1} b, so we will stop the iteration early. At iteration j, we solve the minimization problem

   min_x || b − A x ||_2

over all vectors x in a j-dimensional subspace of the space of N-dimensional vectors. In order to produce a good regularized solution, we have several tools:

• The early subspaces generated by LSQR tend to contain spectral directions corresponding to the large singular values of A. Therefore, if LSQR is applied to the minimization problem without regularization, we may be able to stop the iteration with a small value of j before the computed solution has significant components in directions corresponding to small singular values. We can use the discrepancy principle, L-curve, or GCV to choose the best value of j. In this case, the iterative method is the regularization method.

• Alternatively, we can apply LSQR to the Tikhonov-regularized problem

   min_x || [ A ; α D ] x − [ b ; 0 ] ||_2.

• Since j will remain small, we can afford to use our SVD-based methods on the (j+1) × j matrix generated by LSQR and use either Tikhonov or TSVD regularization. These methods are called hybrid methods for regularization; see, e.g., [36, 46] for details.

• We can choose a preconditioner M to bias the subspaces so that this regularizing effect of the iteration is enhanced. This allows us to use a larger value of j without significant contamination by directions corresponding to small singular values and can yield a better deblurred image. The danger is that if the preconditioner magnifies the noise, then our computed image will be useless, so M must be chosen carefully. If LSQR is applied to the minimization problem without regularization and the approximating matrix has the spectral decomposition V Σ V^T, then we might use M = V Σ̂ V^T as a preconditioner, in which Σ̂ has ones in place of the small spectral values of Σ to avoid magnifying error, so M is well conditioned. If LSQR is applied to the Tikhonov-regularized problem, then M must be chosen even more carefully: we need an approximation to A^T A + α^2 D^T D as a preconditioner. For example, by replacing zero boundary conditions with periodic boundary conditions we can construct BCCB approximations, and we might choose M = F* ( |Λ_A|^2 + α^2 |Λ_D|^2 )^{1/2} F as our preconditioner. Note that for this preconditioner, regularization is built in. We can use a similar approach for BTTB + BTHB + BHTB + BHHB approximations, using the DCT in place of the FFT.
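To make the iterative approach concrete, here is a sketch of ours (not the book's implementation, and with D = I for simplicity) that applies MATLAB's built-in lsqr to the stacked Tikhonov problem, supplying the BCCB blurring operator through a function handle so that A is never formed explicitly; B, PSF, center, and alpha are assumed to be given.

% Sketch: LSQR applied to the stacked Tikhonov problem
%   min || [A; alpha*I] x - [b; 0] ||_2,
% where A is the BCCB blurring matrix applied via the FFT (never formed).
% B, PSF, center, and alpha are assumed inputs.
[m, n] = size(B);
S = fft2(circshift(PSF, 1 - center));            % eigenvalues of A
rhs = [B(:); zeros(m*n, 1)];

Afun = @(x, flag) stackedOp(x, flag, S, alpha, m, n);
xvec = lsqr(Afun, rhs, 1e-6, 50);                % at most 50 iterations
X = reshape(xvec, m, n);

function y = stackedOp(x, flag, S, alpha, m, n)
% Apply [A; alpha*I] (flag = 'notransp') or its transpose (flag = 'transp').
if strcmp(flag, 'notransp')
    Ax = real(ifft2(S .* fft2(reshape(x, m, n))));
    y  = [Ax(:); alpha*x];
else
    X1  = reshape(x(1:m*n), m, n);
    Atx = real(ifft2(conj(S) .* fft2(X1)));
    y   = Atx(:) + alpha*x(m*n+1:end);
end
end

A preconditioner, a stopping rule such as the discrepancy principle, or a hybrid projected regularization step could be layered on top of this basic loop.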

We summarize the choices of preconditioners in Table 7.4.

Table 7.4. A summary of some choices of preconditioners for structured deblurring problems.

   PSF                          Boundary condition    Matrix structure                     Preconditioner
   Rank > 1                     Zero                  BTTB                                 BCCB or Kronecker approximation
   Nonsymmetric or rank > 1     Reflexive             BTTB + BTHB + BHTB + BHHB            Symmetric or Kronecker approximation
   Spatially variant            Arbitrary             None                                 Any of the above

POINTER. We have outlined in this book just a small fraction of algorithms for deblurring images, and we have chosen them based on our own experience. Since the book is meant to be a tutorial rather than a research monograph, the references we give are also biased toward our own writing. We hope that what you have learned encourages you to investigate other classes of algorithms and to explore alternate viewpoints given in the work of the many other researchers in this area.

which may be used for the analysis and solution of discrete ill-posed problems. PSF.). center. 103 . PSF. tol] = tsvd_fft(B. These codes. TSVD Regularization Methods Periodic Boundary Conditions function [X. % tol Regularization parameter (truncation tolerance). % center [row. which is an object-based package for iterative image deblurring algorithms. % % Input: % B Array containing blurred image. % Reference: See Chapter 6.. PSF. and Restore Tools [43]. center) . tol) % % X = tsvd_fft(B. PSF. center.Appendix MATLAB Functions This three-part appendix includes MATLAB codes that illustrate how some of the different techniques and methods discussed in this book can be implemented. tol) %TSVD_FFT Truncated SVD image deblurring using the FFT algorithm. center. tol] = tsvd_fft(B. % [X. tol) . % PSF Array containing the point spread function. . % Default parameter chosen by generalized cross validation. 1. tol] = tsvd_fft(B.. % % Compute restoration using an FFT-based truncated spectral factorization.% X = tsvd_fft(B. We emphasize that this is not intended to be a library or a complete software package. same size as B. and the image data used in this book. % % Output: % X Array containing computed restoration. we suggest Regularization Tools [21] and MOORe Tools [30]. % %function [X. col] = indices of center of PSF. may be obtained from the book's website. % tol Regularization parameter used to construct restoration. for more complete MATLAB packages. PSF.

% Phi = (abs(S) >= tol). % . X = real(ifft2(bhat . % %function [X.104 % % % Appendix: MATLAB Functions "Deblurring Images . col] = indices of center of PSF. % center [row. % % Check number of inputs and set default parameters.Matrices. % S = fft2 ( circshift(PSF. Philadelphia. center). use GCV to find one./ S(idx). PSF. center. tol) %TSVD_DCT Truncated SVD image deblurring using the DCT algorithm. . and center must be given. SIAM. bhat(:)). tol] = tsvd_dct(B. PSF. % X = tsvd_dct(B. % tol Regularization parameter (truncation tolerance).% [X. Sfilt(idx) = Phi(idx) .') end if (nargin < 4) tol = [] . % % Compute restoration using a DCT-based truncated spectral factorization. % % Input: % B Array containing blurred image.). 2006. % bhat = fft2 (B) . PSF. Nagy. end % % Compute the TSVD regularized solution. J. Spectra. PSF. center. G. O'Leary. C. and Filtering" by P. tol] = tsvd_dct(B. idx = (Phi~=0). tol) % % X = tsvd_dct(B. P. tol) .. % PSF Array containing the point spread function. 1-center) ) . % if (nargin < 3) error('B. if (ischar(tol) | isempty(tol)) tol = gcv_tsvd(S(:). Reflexive Boundary Conditions function [X.. % % If a regularization parameter is not given. Hansen. same size as B. Sfilt = zeros(size(Phi)).* Sfilt)). PSF. tol] = tsvd_dct(B. end % % Use the FFT to compute the eigenvalues of the BCCB blurring matrix. center. % Default parameter chosen by generalized cross validation. and D. PSF.

S = dcts2( dctshift(PSF. Regularization parameter used to construct restoration. and Filtering" % by P. Hansen. end % % If a regularization parameter is not given. Spectra. Sfilt(idx) = Phi(idx) . O'Leary. center) ) . % use our simple codes. and D. % Phi = (abs(S) >= tol). 2006.% Check again to see if the built-in dct2 function is available. % if (ischar(tol) | isempty(tol)) tol = gcv^tsvd(S(:). J. if exist('dct2') == 2 X = idct2(bhat . % Check to see if the built-in dct2 function is available. end . if not. G. % Reference: See Chapter 6.* Sfilt).* Sfilt). if exist('dct2') == 2 bhat = dct. % "Deblurring Images . else bhat = dcts2 (B). and center must be given. idx = (Phi~=0).1) = 1. end % % Compute the TSVD regularized solution./ dct2(el). Sfilt = zeros(size(Phi)). use GCV to find one. % SIAM. S = dct2( dctshift(PSF. C. PSF. % el = zeros(size(PSF)).Matrices. bhat(:)). % if (nargin < 3) error('B.') end if (nargin < 4) tol end = [] ./ dcts2(el).2 (B) . center) ) . % % Check number of inputs and set default parameters. P. else X = idcts2(bhat ./ S(idx). el (1.Appendix: MATLAB Functions % Output: 105 % % X tol Array containing computed restoration. % % Use the DCT to compute the eigenvalues of the symmetric % BTTB + BTHB + BHTB + BHHB blurring matrix. Philadelphia. Nagy.

% %function [X. Vr] = svd(Ar). C. O'Leary. Spectra. PSF. P. Hansen.106 Appendix: MATLAB Functions Separable Two-Dimensional Blur function [X. % center [row. Note that if the PSF is not separable. PSF. % ('zero'..). Ac). PSF. and center must be given. % X = tsvd_sep(B. % % Input: % B Array containing blurred image. % BC String indicating boundary condition. center. center. % tol Regularization parameter (truncation tolerance). BC) . end % % First compute the Kronecker product terms. Sr. end if (nargin < 5) BC = 'zero'. PSF. % "Deblurring Images . Ar and Ac. 2006. % % Check number of inputs and set default parameters. Philadelphia. where % A = kron(Ar. center). % SIAM. tol). % tol Regularization parameter used to construct restoration. center. % [Ar. G. col] = indices of center of PSF. % [Ur. same size as B. BC) . . Ac] = kronDecomp (PSF.) % % Output: % X Array containing computed restoration. or 'periodic'.% % Compute SVD of the blurring matrix. J. Nagy. % if (nargin < 3) error('B. % [X. tol.Matrices. BC) % % X = tsvd_sep(B. tol. tol] = tsvd_sep(B. default is 'zero'. this % step computes a Kronecker product approximation to A. . PSF. center. PSF. % % Compute restoration using a Kronecker product decomposition and % a truncated SVD. % X = tsvd_sep(B. center. 'reflexive'. % PSF Array containing the point spread function. and Filtering" % by P. tol. tol] = tsvd_sep(B.. PSF. and D. tol] = tsvd_sep(B. BC) %TSVD_SEP Truncated SVD image deblurring using Kronecker decomposition. % Reference: See Chapter 6.') end if (nargin < 4) tol = [] . % Default parameter chosen by generalized cross validation.

107 % % If a regularization parameter is not given. Sfilt = zeros(size(Phi)). Sfilt(idx) = Phi(idx) ./ s(idx). % % % Output: tol Truncation parameter.*Sfilt . G. Hansen.Matrices. % Phi = (abs(s) >= tol). % "Deblurring Images . all abs(s) < tol should be truncated. bhat(:)).diag(Sc)). % Reference: See Chapter 6. idx = (Phi~=0). % %function tol = gcv_tsvd(s. J. bhat) %GCV_TSVD Choose GCV parameter for TSVD image deblurring. Choosing Regularization Parameters function tol = gcv_tsvd(s. bhat = bhat! : ) . s = flipud(s). O'Leary. Philadelphia. and D. size(B)). bhat = abs ( bhat (idx) ) . Nagy. Vc] = svd(Ac). bhat). end % % Compute the TSVD regularized solution. use GCV to find one. 2006. % % The GCV function G for TSVD has a finite set of possible values % corresponding to the truncation levels. It is computed using . Bhat = reshape(bhat .Appendix: MATLAB Functions [Uc. X = Vc*Bhat*Vr'. P. Sc. % [s. % SIAM. bhat) % % tol = gcv_tsvd(s.n = length(s). and Filtering" % by P. C. idx = flipud(idx). if (ischar(tol) isempty(tol)) tol = gcv_tsvd(s. % bhat Vector containing the spectral coefficients of the blurred % image. idx] = sort(abs(s)). % bhat = Uc'*B*Ur.s = kron(diag(Sr). % % Sort absolute values of singular/spectral values in descending order. % % Input: % s Vector containing singular or spectral values. Spectra. % % This function uses generalized cross validation (GCV) to choose % a truncation parameter for TSVD regularization.

Tikhonov Regularization Methods Periodic Boundary Conditions function [X. % alpha Regularization parameter. for k=n-2:-l:l rho(k) = rho(k+l) + bhat(k+1)~2. % alpha Regularization parameter used to construct restoration. G = zeros(n-1.. % %function [X. PSF. end end % % Now find the minimum of the discrete GCV function. alpha) %TIK_FFT Tikhonov image deblurring using the FFT algorithm. .k) ~2. % That is. G(n-l) = rho(n-l) . any singular values < tol are truncated.1). end % Ensure that the parameter choice will not be fooled by pairs of % equal singular values. center. alpha] = tik_fft(B. % with the identity matrix as the regularization operator. alpha] = tik_fft(B. center) . % % Output: % X Array containing computed restoration. % [X. same size as B.). col] = indices of center of PSF. % rho = zeros(n-1. % [minG. % % reg_min is the truncation index. % % Input: % B Array containing blurred image. ..1).108 Appendix: MATLAB Functions % rho. for k=l:n-2. % Reference: See Chapter 6. 2. alpha). % center [row. G(k) = rho(k)/ (n . % tol = s(reg_min(1)). alpha) % % X = tik_fft(B. a vector containing the squared 2-norm of the residual for % all possible truncation parameters tol. PSF. alpha] = tik_fft(B. center.% X = tik_fft(B. and tol is the truncation parameter.reg_min] = m i n ( G ) .% % Compute restoration using an FFT-based Tikhonov filter. PSF. PSF. PSF. rho(n-l) = bhat(n)~2. % Default parameter chosen by generalized cross validation. center. % PSF Array containing the point spread function. if (s(k)==s(k+l) ) G(k) = inf.

PSF./ D. % [X. alpha). . O'Leary. PSF. bhat = conj(s) .*s + abs(alpha)"2. % PSF Array containing the point spread function. Hansen. C. bhat = bhat(:). end % % Use the FFT to compute the eigenvalues of the BCCB blurring matrix. % % Input: % B Array containing blurred image. bhat). Spectra.Matrices. PSF. center. xhat = reshape (xhat. % center [row. % %function [X.. Philadelphia. P. and D.Appendix: MATLAB Functions % % % a. PSF. center). alpha) %TIK_DCT Tikhonov image deblurring using the DCT algorithm. Nagy. s = S(:). 1-ceriter) ) .* bhat. PSF. PSF. alpha] = tik_dct(B. % if (nargin < 3} error('B. center. % % Compute restoration using a DCT-based Tikhonov filter. % D = conj(s). same size as B. if (ischar(alpha) isempty(alpha)) alpha = gcv_tik(s.X = real(ifft2(xhat}). G. center. alpha) % % X = tik_dct(B. 109 "Deblurring Images . xhat = bhat . 2006. % with the identity matrix as the regularization operator.). alpha] = tik_dct(B. alpha] = tik_dct(B. % X = tik_dct(B. col] = indices of center of PSF. and Filtering" by P. % bhat = fft2(B). J. % S = f f t2 ( c i r c s h i f t (PSF. % Check number of inputs and set default parameters. and center must be given. use GCV to find one. size(B)).') end if (nargin < 4) alpha = [] . . Reflexive Boundary Conditions function [X. SIAM. % % If a regularization parameter is not given.. end % % Compute the Tikhonov regularized solution.

% D = conj(s).Matrices.*s + abs(alpha)"2. xhat = reshape(xhat. bhat). % % Check number of inputs and set default parameters. % if (nargin < 3) error('B./ dct2(el). if exist('dct2') == 2 bhat = dct2(B). % "Deblurring Images . Philadelphia. Default parameter chosen by generalized cross validation.1) = 1. end % If a regularization parameter is not given.110 % % % % Appendix: MATLAB Functions alpha Regularization parameter. P. . and Filtering" % by P. % % Check to see if the built-in dct2 function is available. G. Regularization parameter used to construct restoration./ D. el(1. and center must be given. % Reference: See Chapter 6. s = S( : ) . if (ischar(alpha) | isempty(alpha)) alpha = gcv_tik(s. and D. center) ) . use GCV to find one. C. % use our simple codes. % Check again to see if the built-in dct2 function is available./ dcts2(el). bhat = conj(s) . PSF. Spectra. 2006. S = dcts2( dctshift(PSF. else bhat = dcts2(B). if exist ('dct2' ) == 2 X = idct2(xhat). J. % bhat = bhat(:). end % Use the DCT to compute the eigenvalues of the symmetric % BTTB + BTHB + BHTB + BHHB blurring matrix. end % Compute the Tikhonov regularized solution. xhat = bhat . Nagy. O'Leary. center) ) . Output: % % X alpha Array containing computed restoration.* bhat. el = zeros(size(PSF)}. % SLAM. Hansen.') end if (nargin < 4) alpha = [] . if not. size(B)). S = dct2( dctshift(PSF.

end if (nargin < 5) BC = 'zero'• end % First compute the Kronecker product terms. % center [row. BC) %TIK_SEP Tikhonov image deblurring using the Kronecker decomposition. Spectra. Nagy. % X . % SIAM. % alpha Regularization parameter. center. % X = tik_sep(B. and D. PSF. PSF. and Filtering" % by P. % % Output: % X Array containing computed restoration.. % [X. % % Check number of inputs and set default parameters. % BC String indicating boundary condition. 2006.') end if (nargin < 4) alpha = [] . % Reference: See Chapter 6.Matrices. alpha. PSF. PSF. PSF. where % the blurring matrix A = kron(Ar. PSF. alpha] = tik_sep(B.). P. Hansen. % %function [X. O'Leary. BC) % % X = tik_sep(B. J. center. alpha. col] = indices of center of PSF. % "Deblurring Images . C.. center). % % Compute restoration using a Kronecker product decomposition and a % Tikhonov filter. % if (nargin < 3) error CB. alpha] = tik_sep(B. PSF. Ar and Ac. alpha. and center must be given. end 111 Separable Two-Dimensional Blur function [X. % Default parameter chosen by generalized cross validation. alpha). with the identity matrix as the regularization operator. alpha] = tik_sep(B. Philadelphia. or 'periodic') % Default is 'zero'. this . center. . % % Input: % B Array containing blurred image. 'reflexive'. center. BC). % alpha Regularization parameter used to construct restoration. % Note that if the PSF is not separable. Ac). same size as B. % PSF Array containing the point spread function.Appendix: MATLAB Functions else X = idcts2(xhat). % ('zero'. G.tik_sep(B.

112

Appendix: MATLAB Functions

% step computes a Kronecker product approximation to A. % [Ar, Ac] = kronDecomp(PSF, center, BC) ;

% % Compute SVD of the blurring matrix. % [Ur, Sr, Vr] = svd(Ar); [Uc, Sc, Vc] = svd(Ac); % % If a regularization parameter is not given, use GCV to find one. % bhat = Uc'*B*Ur; bhat = bhat( :) ,s = kron (diag (Sr) , diag (Sc) ),if (ischar(alpha) | isempty(alpha)) alpha = gcv_tik(s, bhat); end % % Compute the Tikhonov regularized solution. % D = abs(s).~2 + abs(alpha)~2; bhat = s .* bhat; xhat = bhat ./ D; xhat = reshape(xhat, size(B)); X = Vc*xhat*Vr';

Choosing Regularization Parameters
function alpha = gcv_tik(s, bhat) %GCV_TIK Choose GCV parameter for Tikhonov image deblurring. % %function alpha = gcv_tik(s, bhat) % % alpha = gcv_tik(s, bhat); % % This function uses generalized cross validation (GCV) to choose % a regularization parameter for Tikhonov filtering. % % Input: % s Vector containing singular or spectral values % of the blurring matrix. % bhat Vector containing the spectral coefficients of the blurred % image. % % Output: % alpha Regularization parameter. % Reference: See Chapter 6, % "Deblurring Images - Matrices, Spectra, and Filtering" % by P. C. Hansen, J. G. Nagy, and D. P. O'Leary, % SIAM, Philadelphia, 2006. alpha = fminbnd(@GCV, min(abs(s)), max(abs(s)), [], s, bhat);

Appendix: MATLAB Functions
function G = GCV(alpha, s, bhat) % % This is a nested function that evaluates the GCV function for % Tikhonov filtering. It is called by fminbnd. % phi_d = 1 ./ (abs(s).~2 + alpha~2); G = sum(abs(bhat.*phi_d)."2) / (sum(phi_d)~2); end
end

113

3. Auxiliary Functions
function y = dcts(x) %DCTS Model implementation of discrete cosine transform. % %function y = dcts(x) % % y = dcts(x); % % Compute the discrete cosine transform of x. % This is a very simple implementation. If the Signal Processing % Toolbox is available, then you should use the function dct. % % Input: % x column vector, or a matrix. If x is a matrix then dcts(x) % computes the DCT of each column % % Output: % y contains the discrete cosine transform of x. % % % % % % % % Reference: See Chapter 4, "Deblurring Images - Matrices, Spectra, and Filtering" by P. C. Hansen, J. G. Nagy, and D. P. O'Leary, SIAM, Philadelphia, 2006. If an FFT routine is available, then it can be used to compute the DCT. Since the FFT is part of the standard MATLAB distribution, we use this approach. For further details on the formulas, see:

% "Computational Frameworks for the Fast Fourier Transform1 % by C. F. Van Loan, SIAM, Philadelphia, 1992. % % "Fundamentals of Digital Image Processing" % by A. Jain, Prentice-Hall, NJ, 1989. % [n, m] = size(x); omega = exp(-i*pi/(2*n)); d = [l/sqrt(2); omega.~(1:n-l).'] / sqrt(2*n); d = d(:,ones(1,m));

xt = [x; flipud(x) ] ;

114
yt = fft(xt); y = real(d .* yt(l:n, ; ) ;

Appendix: MATLAB Functions

function y = dcts2(x) %DCTS2 Model implementation of 2-D discrete cosine transform. % %function y = dcts2(x) % % y = dcts2(x); % % Compute the two-dimensional discrete cosine transform of x. % This is a very simple implementation. If the Image Processing Toolbox % is available, then you should use the function dct2. % % Input: % x array % % Output: % y contains the two-dimensional discrete cosine transform of x. % % % % % % % % % % % % % % % y Reference: See Chapter 4, "Deblurring Images - Matrices, Spectra, and Filtering" by P. C. Hansen, J. G. Nagy, and D. P. O'Leary, SIAM, Philadelphia, 2006. See also: "Computational Frameworks for the Fast Fourier Transform" by C. F. Van Loan, SIAM, Philadelphia, 1992. "Fundamentals of Digital Image Processing" by A. Jain, Prentice Hall, NJ, 1989. The two-dimensional DCT is obtained by computing a one-dimensional DCT of the columns, followed by a one-dimensional DCT of the rows. = dcts(dcts(x).').';

function Ps = dctshift(PSF, center) %DCTSHIFT Create array containing the first column of a blurring matrix. % %function Ps = dctshift(PSF, center) % % Ps = dctshift(PSF, center); % % Create an array containing the first column of a blurring matrix % when implementing reflexive boundary conditions. % % Input: % PSF Array containing the point spread function. % center [row, col] = indices of center of PSF. % % Output: % Ps Array (vector) containing first column of blurring matrix.

% "Deblurring Images . Hansen. % %function y . % % Compute the inverse discrete cosine transform of x. and Filtering" % by P. [m.idcts(x) % % y = idcts(x).1). Nagy.n] = size(PSF). O'Leary. Spectra. Pa(l:2*k+l.k+l). 2006. % % Input: % x column vector. % SIAM.Matrices. Hansen. P.l:2*k+l) = PP. C. 2006. PP = Z1*PP*Z1' + Z1*PP*Z2' + Z2*PP*Z1' + Z2*PP*Z2'. % PP = PSF(i-k:i+k. P.n-j] ) . J. % The first column is obtained by reordering the entries of the PSF. k = min( [i-l. j = center (2) . Spectra. J.') end i = center (1) .k). and D.m-i. C. Nagy. and Filtering" % by P. 115 % The PSF gives the entries of a central column of the blurring matrix. If the Signal Processing % Toolbox is available. Philadelphia. If x is a matrix then idcts % computes the IDCT of each column. G. % Reference: See Chapter 4. Zl = diag(ones(k+1. O'Leary. % SIAM. % "Deblurring Images . Ps = zeros (m. n) .j-k:j+k). if nargin == 1 error('The center must be given.Appendix: MATLAB Functions % Reference: See Chapter 4. Z2 = diag(ones(k. for % a detailed description of this reordering. function y = idcts(x) %IDCTS Model implementation of inverse discrete cosine transform.j-l. and D. .Matrices. % This is a very simple implementation. then you should use the function idct. % % % Output: y contains the inverse discrete cosine transform of x.1). or a matrix. see the reference cited % above. Philadelphia. G.

:).*x.:)).m) ) . NJ. O'Leary. For further details on the formulas.omega = exp(i*pi/(2*n)). G. Since the inverse FFT is part of the standard MATLAB % distribution. P. Reference: See Chapter 4. Prentice-Hall. % [n. we use this approach. If the Image Processing Toolbox % is available. % % "Fundamentals of Digital Image Processing" % by A. xt = [d. See also: "Computational Frameworks for the Fast Fourier Transform" by C. Prentice-Hall. % % Compute the inverse two-dimensional discrete cosine transform of x. -i*d(2:n.y = real(yt(1:n. Jain. SIAM. F. and Filtering" by P. 1989. J. C. Nagy. % This is a very simple implementation. ' . then it can be used to compute % the inverse DCT.:))]. then you should use the function idct2. d = d(: . SIAM. Philadelphia. % see % "Computational Frameworks for the Fast Fourier Transform" % by C. Van Loan. Jain. NJ. Van Loan. "Fundamentals of Digital Image Processing" by A. % %function y = idcts2(x) % % y = idcts2(x). Philadelphia. % % Input: % x array % % Output: % % % % % % % % % % % % % % % % % y contains the two-dimensional inverse discrete cosine transform of x. SIAM.Matrices. Hansen. 1992.*flipud(x(2:n. function y = idcts2(x) %IDCTS2 Model implementation of 2-D inverse discrete cosine transform. 2006. zeros(l. Spectra. followed by a one-dimensional inverse DCT of the rows. 1989. The two-dimensional inverse DCT is obtained by computing a one-dimensional inverse DCT of the columns.m). . "Deblurring Images .116 Appendix: MATLAB Functions % If an inverse FFT routine is available. d(l) = d(l) * sqrt(2) .ones (l. F. Philadelphia. m] = size (x) . d = sqrt(2*n) * omega. 1992. and D. yt = if ft (xt) ."(0:n-l) .

center. end % % Find the two largest singular values and corresponding singular vectors % of the PSF -. Ac). P. % ('zero'. Ar Matrices in the Kronecker product decomposition. Some notes: % * If the PSF. a warning is displayed % indicating the decomposition is only an approximation. Ac] = kronDecomp(P. % [U. V] = svdsfP. and Filtering" % by P. S. The result is % an approximation only. 2). Ac] = kronDecomp(P. % [Ar. P is not separable. if ( S(2. % SIAM. G. Spectra.l) > sqrt(eps) ) warning('The PSF. % % Check inputs and set default parameters. % BC String indicating boundary condition.2) / S(l.these are used to see if the PSF is separable. P. % where A is a blurring matrix defined by a PSF array. % % Input: % P Array containing the point spread function. 'reflexive'. % % Output: % Ac. BC) % % [Ar. P is not separable. center).Matrices. C. Hansen.Appendix: MATLAB Functions 117 % y = idcts(idcts(x) . % center [row. % if (nargin < 2) error('P and center must be given. ' ) . and D. Ac] = kronDecomp(P. function [Ar. % "Deblurring Images . Ac] = kronDecomp(P.') end if (nargin < 3) BC = 'zero'. J. center. % % Compute terms of Kronecker product factorization A = kron(Ar. if the PSF array is not rank-one.') end % . BC). center. Philadelphia. or 'periodic') % Default is 'zero'. O'Leary. Nagy. % * The structure of Ac and Ar depends on the BC: % zero ==> Toeplitz % reflexive ==> Toeplitz-plus-Hankel % periodic ==> circulant % Reference: See Chapter 4. ' . BC) %KRONDECOMP Kronecker product decomposition of a PSF array % %function [Ar. using separable approximation. 2006. col] = indices of center of PSF.

center(2)). center(1)).!})). V = -V. row = col'. maxU = max(abs(U{:.col = zeros(n. k) % % Build a banded circulant matrix from a central column and an index . The next few statements check this. end % % % % c r The matrices Ar and Ac are defined by vectors r and c. center(1)). Ac = buildToepfc. % % The structure of Ar and Ac depends on the imposed boundary condition.1) ) *U( : . center(2)). T = toeplitz(col.case 'reflexive' % Build Toeplitz-plus-Hankel matrices here Ar = buildToep(r. center(1)) + buildHank(c. if minU == maxU U = -U. col(l:n-k+l. center(2)) + buildHank(r. k) % % Build a banded Toeplitz matrix from a central column and an index % denoting the central column. = sqrt (S (1. respectively. center(2)). row). row(l. % switch BC case 'zero' % Build Toeplitz matrices here Ar = buildToep(r. % minU = abs(min(U{:.l).l) = c(k:n). center (I)). 1) . Ac = tauildCirc(c.1) .118 Appendix: MATLAB Functions % Since the PSF has nonnegative entries. These vectors can be computed as follows: = sqrt (3(1. 1) ) *V( : . % n = length (c) .!})). That % is. otherwise error('Invalid boundary condition. case 'periodic' % Build circulant matrices here Ar = buildCircfr. the singular vectors corresponding to the largest singular value of % P should have nonnegative entries. Ac = buildToep(c. % and change sign if necessary. function C = buildcirc(c. we would like the vectors of the % rank-one decomposition of the PSF to have nonnegative components.l:k) = C(k:-1:1) ' .') end function T = buildToeptc.

% % Pad PSF with zeros to make it an m-by-n array. P. % SIAM. size(B)). 2006. % If only m is specified. n Desired dimension of padded array. m). col = [c(k:n) . % % Input: % PSF Array containing the point spread function. 119 function H = buildHankfc. row). % where B is the blurred image array. % if nargin == 2 if length(m) == 1 n = m. and m is a scalar.row(n-k+2:n) = c(l:k-l).Appendix: MATLAB Functions % denoting the central column. % %function P = padPSF(PSF. . % Reference: See Chapter 4.l). k) % % Build a Hankel matrix for separable PSF and reflexive boundary % conditions. col(1:n-k) = c(k+l:n). G.Matrices. n). O'Leary. % % If the PSF is an array with dimension smaller than the blurred image. and D. such as: % PSF = padPSF(PSF. [m.1). then n = m. % "Deblurring Images . C = toeplitz(col. Hansen. % m. m. and Filtering" % by P. function P = padPSF(PSF. c(l:k-l) ] . % % Set default parameters. c(n:-l:k+l)']. m. row). n) %PADPSF Pad a PSF array with zeros to make it bigger. n) % % P = padPSF(PSF. row = zeros (n. J. Spectra. C. Nagy. col = zeros(n. Philadelphia.n]). % n = length(c) . row = [c(k:-l:l) r . % P = padPSF(PSF. H = hankel(col. % then deblurring codes may require padding first. % n = length(c). % P = padPSF(PSF. m. % % Output: % P Padded m-by-n array.

2)) = PSF. n).) m = m(l)m = m(l) . .120 else n = m(2n = m(2) . P = zeros(m. P(l:size(PSP. l:size(PSF. Appendix: MATLAB Functions end end % Pad the PSF with zeros.1).

(Cited on p. Philadelphia. F. SIAMJ. Bertero and P. (Cited on pp. (Cited on pp.Bibliography [ 1 ] H. and M.) [7] T. 100. Addision-Wesley. Vogel..) [5] P. Prentice-Hall. Desbat and D. PDE. Direct blind deconvolution. Boccacci. (Cited on pp. (Cited on p. 1996. 2002. 1994. Modular solvers for constrained image restoration problems using the discrepancy principle. Wiley. Engl and W. 1989. 28. Sci. NJ. Comput. M. Digital Image Restoration. Math. IEEE Engineering in Medicine and Biology. R. Bjorck. H. Comput. S. The "minimum reconstruction error" choice of regularization parameters: Some more efficient methods and their application to deconvolution problems.. Philadelphia. Numerical Methods for Least Squares Problems. 82. September/October 1999. Ramoino. Introduction to Inverse Problems in Imaging.) [3] M. 91. SIAM. 81. (Cited on p. and Stochastic Methods. Andrews and B.. 16:1387-1403. Probability and Statistics. Englewood Cliffs. Carasso. Shen. Circulant Matrices. Math. 1977.) [8] P. F. A nonnegatively constrained convex programming method for image reconstruction. Robello. 1979.. P. DeGroot. Numer. 25:1326-1343.) [12] H.) [4] A. (Cited on p. 29. pages 18-22.. Girard. 2005. (Cited on p. W.. SIAMJ. 84. Image Processing and Analysis: Variational. 99.) [9] M. SIAMJ. Hunt.) [11] A. 61:1980-2007. London. M.) [2] J. Chan and J. Davis. 22. (Cited on p.) [6] A. 80. Wavelet. Corosu. 1998. 23. Linear Algebra Appl. 22. Grever. (Cited on pp. J. New York.) [10] L. Blomgren and T. 69:25-31. Diaspro. Sci. 2003. SIAM. 42. 9. Bardsley and C. (Cited on p. Numer.) 121 . Chan. 1995. 8. IOP Publishing Ltd. 80. 9:347-358. Two-photon excitation imaging based on a compact scanning head. Reading. 2001. MA. Using the L-curve for determining optimal regularization parameters. Appl. (Cited on p.

2002.) [20] P. Sci. NJ. Johns Hopkins University Press. The use of the L-curve in the regularization of discrete ill-posed problems.) [26] S. 6:1-35. O'Leary.) [23] P. 1984. 2. 96. Robinson. 91. (Cited on pp. Golub. Prentice-Hall.) [16] G. P. and P.. J. 100. G. Technometrics. M. SIAM. 2004. 81.) [15] D. O'Leary. Hansen. Englewood Cliffs. Milanfar.. 2005. J.) [25] J. SIAM J. Philadelphia. 96. (Cited on p. 14:1487-1503.. Philadelphia. Algorithms. 100. SIAM. 9. 21:215-223. C. MA.) [18] G. Haykin. Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. Baltimore. NJ. Rank-Deficient and Discrete Ill-Posed Problems. F. 1998. Technol. (Cited on p.imm. Imaging Syst. 1997.) [27] D. 34:561-580. AppL. (Cited on p. C. SIAM Rev. Forsyth and J. Regularization tools: A MATLAB package for the analysis and solution of discrete ill-posed problems. Hansen.) [21] P. 1991. Regularization by truncated total least squares. and G. H. (Cited on p. Fierro. MATLAB Guide. Hansen and D. Hofmann. C. Comput. Generalized cross-validation as a method for choosing a good ridge parameter. 1994. (Cited on p. A Modern Appraoch. P. TeubnerTexte Mathe. SIAM J. H. 29:323-378. H. C. Groetsch. Second Edition. Hardy. 1986. 2003.dtu. Heath.) [14] R. (Cited on p. Tikhonov regularization and total least squares. Matrix Anal. Int. (Cited on p. Van Loan. 1993. 18:1223-1241. 14(2):47-57.) [24] P. and D. Philadelphia. 1999. Deconvolution and regularization with Toeplitz matrices.) [19] W.) [17] G. See also http://www2. (Cited on p. (Cited on p. Numer. Farsiu. (Cited on p. 80. Matrix Computations. Hansen. Analysis of discrete ill-posed problems by means of the L-curve. J. Prentice-Hall. and D. Adaptive Filter Theory. 1996. Computer Vision. 81. 22. (Cited on p. M. Golub. Advances and challenges in superresolution.. 1992. Golub..) [29] B. Hansen. (Cited on p. Golub and C. Higham. Boston. Second Edition. 21:185-194. Regularization for Applied Inverse and Ill-Posed Problems. 81. P. (Cited on p. 23.dk/~pch/Regutools. Sci. O'Leary. 1994. Third Edition. Ponce. D. Algorithms. 80. Numer. Comput.1979. A. P.) [28] N. D. Pitman Publishing Ltd. (Cited on p. Scientific American. SIAM J. 2002.) [22] P. H. W.) . Accuracy and Stability of Numerical Algorithms. 91. Englewood Cliffs. 103. Hansen. Leipzig. Wahba. Teubner. (Cited on p. Elad. 8.Higham.122 Bibliography [13] S. 81. Higham and N. C. J. C. P. 270(6):60-65. Adaptive optics. C. SIAM. (Cited on p. Hansen.

and Signal Processing. Morozov. New York. Perrone. (Cited on p. 22:533-553. 81. 2000. and L. 38:1155-1179. B.. Algorithms. (Cited on p. Kluwer Academic Publishers. 80. L. 1966.) [42] V. Regularization theory in image restoration—the stabilizing functional approach. S.) [38] C. 1994. Karayiannis and A. Mastronardi. Boukir. Modular Regularization Algorithms. Moffat. (Cited on p. L. Ph. Iterative Identification and Restoration of Images. Wiley.) [34] D.dtu. B. 96. Dokl. P. Astrophys. SI AM J. Numer. NJ. (Cited on p. Prentice-Hall. Englewood Cliffs.) . NJ. Venetsanopoulos. 28. Informatics and Mathematical Modelling.D. J. P. Iterative methods for image deblurring: A MATLAB object oriented approach. Matrix Anal. To appear. K. Besserer. thesis. (Cited on pp. G. Hanson. Denmark.) [32] N. Matrix Anal. Lyngby. A theoretical investigation of focal stellar images in the photographic emulsion and applications to photographic photometry. 2004. (Cited on p. Soviet Math. Philadelphia. 25. (Cited on p.dk/~pch7MOOReTools. and S..) [37] R. 85. 1991. 2001. W. L. and N. 100. A. Kotz. S. Biemond. 101. Jacobsen. 103. 103. (Cited on p. 19:504—516. Appl. Continuous Univariate Distributions. K. Johnson.emory. 2000. 1995. (Cited on pp. (Cited on p. 22.) [35] N. F. Solving Least Squares Problems. 36:73-93. SIAM J. 26. Kammler.) [36] M. On the solution of functional equations by the method of regularization. M. 1974. 42. 2000. 1.) [43] J. Prentice-Hall. Application to film restoration. (Cited on p. (Cited on p. (Cited on p.edu/~nagy/RestoreTools. Buisson. Appl. Mastronardi and D. See also http://www. O'Leary. 73. (Cited on p.mathcs.) [39] N. Englewood Cliffs. Nagy. Boston/Dordrecht/London. Jain. Acoustics. Lawson and R.) [41] A. 3:455461.Bibliography 123 [30] M. J. Speech. See also http://www2. Van Huffel. Reprinted STAM.) [33] L. Lagendijk and J. P. N. IEEE Trans.) [40] N. Robust regression and t\ approximations for Toeplitz problems. Vol. NJ. 81.imm. Joyeux. 1989. Fundamentals of Digital Image Processing. Prentice-Hall. 47. 2004. Englewood Cliffs. Technical University of Denmark. 1. Fast structured total least squares algorithm for solving the basic deconvolution problem. Palmer. E. 97. Lemmerling. Balakrishnan.1969. Image and Vision Computing. 1990. 22:1204-1221. 22.) [31] A. 7:414^117. Kilmer and D. Reconstruction of degraded image sequences. Second Edition.. A First Course in Fourier Analysis. Astronom. O'Leary. and O. Choosing regularization parameters in iterative methods for ill-posed problems.

21:851-866. 100.) [54] S. 100. 1999. A fast algorithm for deblurring models with Neumann boundary conditions. 1997. O'Leary. Robust regression computation using iteratively reweighted least squares. CRC Press. SIAM J. 1996. H. Matrix Anal.) [45] D. (Cited on p. Roggemann and B. (Cited on p. (Cited on pp. Van Huffel and J. Amer. Pruessner and D. (Cited on p. and J. 28. Rosen. A. 11:466-480. ACM Trans. Appl. Boca Raton. California Techincal Publishing. 9. 25. Matrix Anal. A. Park. (Cited on pp. 100. Matrix Anal. Vandewalle.) [56] D. Ng. SIAM J. Comput. 22. Helstrom. SIAM. Philadelphia. 53. W. and A. 100. M. Comput. 1982. (Cited on p. 10:1014-1023. P. 90. Stewart. C. Click.) [46] D. Chan.) [55] D. (Cited on p. O'Leary. A. Sci.) [50] J. 45. W. 84. Saad. Blind deconvolution using a regularized structured total least norm approach. Compensation for readout noise in CCD images. Philadelphia. Simmons. (Cited on p. K. Iterative Methods for Sparse Linear Systems. Opt. San Diego. 1990. Appl. Image Process. Imaging Through Turbulence. W. B. Welsh. Total least norm formulation and solution for structured problems.. J.124 Bibliography [44] M. O'Leary and J. (Cited on p. Soc. SIAM J. H. Estimation and Data Analysis. 1981. Trussell. W.) [52] K. P. FL.) [48] A.) [58] S. A bidiagonalization-regularization procedure for large scale discretizations of ill-posed problems. (Cited on p. L.) [57] G. 101. Smith. 1998.. 17:110-126. 97. 1991.. The Scientist and Engineer's Guide to Digital Signal Processing. Sharma and H. 1996. (Cited on p. The Total Least Squares Problem: Computational Aspects and Analysis. SIAM. Digital color imaging. C. Philadelphia. S. Snyder. Amer. 1988. (Cited on p. Snyder. 6:901-932. 1994. Math. 1997.. C. D.) [53] G. White. Breipohl. (Cited on p. Lanterman. New York. Paige and M. Opt. LSQR: An algorithm for sparse linear equations and sparse least squares. Software. CA. SIAM J. Saunders. SIAM J. IEEE Trans. 28. SIAM.. Appl. 2003. C. R. Hammoud. (Cited on p.) . and W. P. Soc. L.) [47] C. L. 8:43-71. J. Stat. 2003.-C. Shanmugan and A. Wiley. 12:272-283. 100. 97. and R. Image recovery from data acquired with a charge-coupled-device camera. Sci. Second Edition. 2:474489. J. 24:1018-1037. 1993. A. (Cited on p.) [51] Y..) [49] M. Matrix Algorithms: Volume 1: Basic Decompositions. Tang. Random Signals: Detection.

[59] C. F. Van Loan, Computational Frameworks for the Fast Fourier Transform, SIAM, Philadelphia, 1992.
[60] J. M. Varah, Pitfalls in the numerical solution of linear ill-posed problems, SIAM J. Sci. Stat. Comput., 4:164-176, 1983.
[61] C. R. Vogel, Non-convergence of the L-curve regularization parameter selection method, Inverse Problems, 12:535-547, 1996.
[62] C. R. Vogel, Computational Methods for Inverse Problems, SIAM, Philadelphia, 2002.
[63] T. Wittman, Lost in the supermarket: Decoding blurry barcodes, SIAM News, 37(7):16, September 2004.


Index

astigmatism
atmospheric turbulence
axis
backslash
basis image
Bayer pattern
binary image
blind deconvolution
blur: atmospheric turbulence, cross-channel, Gaussian, Moffat, motion, out-of-focus, separable, spatially invariant, spatially variant, within-channel
boundary condition: periodic, reflexive, zero
charge-coupled device (CCD)
circshift
color: CMY, CMYK, HSV, RGB
color image
colormap
cond
condition number
conj
conv2
convolution: one-dimensional, two-dimensional
coordinate system
covariance matrix
dct2
dcts
dcts2
dctshift
deblurring
deconvreg
deconvwnr
diag
discrepancy principle
discrete cosine transform (DCT)
discrete Picard condition
doc
double
edge detection
edge enhancement
eig
eigenvalues
eigenvectors
error: perturbation, quantization, relative, rounding
fft2
figure
filter factor
filtering methods
finite difference
Flexible Image Transport System (FITS)
fliplr
flipud
fminbnd
forward slash
Fourier transform: discrete (DFT), fast (FFT)
fspecial
gcv_tik
gcv_tsvd
generalized cross validation (GCV)
GIF
grayscale image
help
high-contrast image
high frequency
high-pass
hybrid method
idct2
idcts
idcts2
ifft2
imadd
image
Image Processing Toolbox (IPT)
imagesc
imdemos
imdivide
imfinfo
imformats
immultiply
imnoise
importdata
imread
imshow
imsubtract
imwrite
integral equation
iterative method
JPEG
kron
kronDecomp
Krylov subspace method
L-curve
Laplacian
least squares: damped, iteratively reweighted, total
linear blur
load
low-pass
LSQR
LU factorization
mat2gray
MATLAB
matrix: block circulant with circulant blocks (BCCB), block Hankel with Hankel blocks (BHHB), block Hankel with Toeplitz blocks (BHTB), block Toeplitz with Hankel blocks (BTHB), block Toeplitz with Toeplitz blocks (BTTB), blurring, circulant, condition number, doubly symmetric, Hankel, Kronecker product, normal, orthogonal, rank-one, separable, Toeplitz, Toeplitz-plus-Hankel, trace, unitary
max
medfilt2
min
model
noise: Gaussian, inverted, Poisson, quantization, readout, uniform, white
norm: 1-norm, 2-norm, ∞-norm, p-norm, Frobenius, of the residual
normal equation
ones
padPSF
pixel
PNG
point source
point spread function (PSF): center
poissrnd
preconditioner
pseudo-inverse
psfDefocus
psfGauss
rand
randn
random (see noise)
real
regularization: Tikhonov, TSVD
regularization parameter: choosing
regularized inverse
regularized solution
residual
rgb2gray
save
singular value decomposition (SVD)
singular values
singular vectors
size
smoothing
soft focus image
solution: high-frequency components, low-frequency components, naive, oversmoothed, undersmoothed
spectral coordinate system
spectral decomposition
spectral filtering
stencil
svd
svds
tic-toc
TIFF
tik_dct
tik_fft
tik_sep
Tikhonov regularization: using derivative operators, using smoothing norms
total least squares
total variation
truncated SVD (TSVD)
tsvd_dct
tsvd_fft
tsvd_sep
uint8
uint16
vec
whos
Wiener filter
zeros
