
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 13, NO. 4, APRIL 2004

Differentiation of Discrete Multidimensional Signals
Hany Farid and Eero P. Simoncelli, Senior Member, IEEE
Abstract—We describe the design of finite-size linear-phase separable kernels for differentiation of discrete multidimensional signals. The problem is formulated as an optimization of the rotation-invariance of the gradient operator, which results in a simultaneous constraint on a set of one-dimensional low-pass prefilter and differentiator filters up to the desired order. We also develop extensions of this formulation to both higher dimensions and higher order directional derivatives. We develop a numerical procedure for optimizing the constraint, and demonstrate its use in constructing a set of example filters. The resulting filters are significantly more accurate than those commonly used in the image and multidimensional signal processing literature.

Index Terms—Derivative, digital filter design, discrete differentiation, gradient, steerability.

I. INTRODUCTION

ONE OF THE most common operations performed on signals is that of differentiation. This is especially true for multidimensional signals, where gradient computations form the basis for most problems in numerical analysis and simulation of physical systems. In image processing and computer vision, gradient operators are widely used as a substrate for the detection of edges and estimation of their local orientation. In processing of video sequences, they may be used for local motion estimation. In medical imaging, they are commonly used to estimate the direction of surface normals when processing volumetric data.

Newton's calculus provides a definition for differentiation of continuous signals. Application to discretized signals requires a new definition, or at the very least a consistent extension of the continuous definition. Given the ubiquity of differential algorithms and the ever-increasing prevalence of digital signals, it seems surprising that this problem has received relatively little attention. In fact, many authors that describe applications based on discrete differentiation do not even describe the method by which derivatives are computed. In this paper, we define a set of principled constraints for multidimensional derivative filter design, and demonstrate their use in the design of a set of high-quality filters.

Manuscript received June 10, 2003; revised November 12, 2003. The work of H. Farid was supported by an Alfred P. Sloan Fellowship, a National Science Foundation (NSF) CAREER Grant (IIS-99-83806), and a departmental NSF Infrastructure Grant (EIA-98-02068). The work of E. P. Simoncelli was supported by an NSF CAREER Grant (MIP-9796040), an Alfred P. Sloan Fellowship, the Sloan-Swartz Center for Theoretical Visual Neuroscience at New York University, and the Howard Hughes Medical Institute. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Philippe Salembier. H. Farid is with the Computer Science Department, Dartmouth College, Hanover, NH 03755 USA (e-mail: farid@cs.dartmouth.edu). E. P. Simoncelli is with the Center for Neural Science and the Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 USA (e-mail: eero.simoncelli@nyu.edu). Digital Object Identifier 10.1109/TIP.2004.823819

A. One-Dimensional (1-D) Derivatives

The lack of attention to derivative filter design probably stems from the fact that the most natural solution, the difference between adjacent samples, appears at first glance to be completely acceptable. This solution arises from essentially dropping the limit in the continuous definition of the differential operator

$$\frac{d f(x)}{d x} = \lim_{\varepsilon \to 0} \frac{f(x + \varepsilon) - f(x)}{\varepsilon} \qquad (1)$$

and holding $\varepsilon$ fixed at the distance between neighboring samples. These "finite differences" are widely used, for example, in numerical simulation and solution of differential equations. But in these applications, the spacing of the sampling lattice is chosen by the implementor, and thus can be chosen small enough to accurately represent the variations of the underlying signal. In many digital signal processing applications, however, the sampling lattice is fixed beforehand, and finite differences can provide a very poor approximation to a derivative when the underlying signal varies rapidly relative to the spacing of the sampling lattice.

An alternative definition comes from differentiating a continuous signal that is interpolated from the initial discrete signal. In particular, if one assumes the discrete signal, $f[n]$, was obtained by sampling an original continuous function containing frequencies no higher than $1/(2T)$ cycles/length at a sampling rate of $1/T$ samples/length, then the Nyquist sampling theorem implies that the continuous signal may be reconstituted from the samples:

$$f(x) = \sum_{n} f[n]\, c(x - nT) \qquad (2)$$

where $c(x) = \sin(\pi x / T) / (\pi x / T)$ is a (continuous) "sinc" function, $f(x)$ is the continuous-time signal, and $f[n]$ is its discretely sampled counterpart. Assuming that the sum in the above equation converges, we can differentiate the continuous function on each side of the equation, yielding

$$f'(x) = \sum_{n} f[n]\, c'(x - nT) \qquad (3)$$

where $c'(x)$ is the derivative of the sinc function. Note that the derivative operator is only being applied to continuous functions, $f(x)$ and $c(x)$.
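To make the behavior of this solution concrete: for $T = 1$, the sampled sinc derivative has the closed form $d[n] = (-1)^n / n$ for $n \neq 0$ and $d[0] = 0$. The following sketch (Python with NumPy; our own illustration, not the paper's code) shows how slowly a truncated version of this kernel approaches the ideal differentiator response $j\omega$:

```python
import numpy as np

def sinc_derivative_kernel(radius):
    """Samples of the derivative of the unit-spaced sinc interpolator:
    d[n] = (-1)**n / n for n != 0, and d[0] = 0."""
    n = np.arange(-radius, radius + 1)
    d = np.zeros(len(n))
    nz = n != 0
    d[nz] = (-1.0) ** n[nz] / n[nz]
    return d

# The taps decay only as 1/|n|, so truncation converges very slowly.
# The ideal differentiator response has magnitude |j*w| = w; test w = pi/2:
w = np.pi / 2
for r in (7, 15, 31, 63):
    d = sinc_derivative_kernel(r)
    H = np.sum(d * np.exp(-1j * w * np.arange(-r, r + 1)))
    print(r, abs(H))  # approaches pi/2 ~ 1.5708 only slowly
```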


One arrives at a definition of the derivative of the discrete function by sampling both sides of (3) on the original sampling lattice:

$$f'[n] \equiv f'(x)\big|_{x = nT} = \sum_{m} f[m]\, c'\big((n - m)T\big) = (f * d)[n] \qquad (4)$$

where $d[n] = c'(nT)$ is the $T$-sampled sinc derivative, shown in Fig. 1. Note that the right side of this expression is a convolution of the discretely sampled function $f[n]$ with the discrete filter $d[n]$, and thus the continuous convolution need never be performed. If the original function was sampled at or above the Nyquist rate, then convolution with the sampled derivative of the sinc function will produce the correct result. This solution (and approximations thereof) is widely used in 1-D signal processing (e.g., [1]-[4]). In practice, however, the coefficients of the sinc derivative decay very slowly, and accurate implementation requires very large filters. In addition, the sinc derivative operator has a large response at high frequencies, making it fragile in the presence of noise.

Fig. 1. The ideal interpolator (sinc) function (left) and its derivative (right). Dots indicate sample locations.

B. Multidimensional Derivatives

For multidimensional signals (e.g., images or video), the derivative is replaced with the gradient: the vector of partial derivatives along each axis of the signal space. In these applications, one relies on the linear algebraic property of gradients that the derivative in an arbitrary direction can be computed from a linear combination of the axis derivatives. In the field of computer vision, for example, directional derivatives are used to compute local edge orientation, motion (optical flow), or depth from stereo.

As before, assume that the sampled function has been formed by uniformly sampling a continuous function above the Nyquist rate. We can reconstruct the original continuous function as a superposition of continuous interpolation functions; in two dimensions, we have the following:

$$f(x, y) = \sum_{m, n} f[m, n]\, c(x - mT,\, y - nT) \qquad (5)$$

where $T$ is the sample spacing (assumed to be identical along both the $x$ and $y$ axes), $f(x, y)$ is the continuous function, $f[m, n]$ is the discretely sampled function, and the interpolation function $c(x, y)$ is a separable product of sinc functions. Again assuming that the sum in (5) converges, we can differentiate both sides of the equation. Without loss of generality, consider the partial derivative with respect to $x$:

$$f_x(x, y) = \sum_{m, n} f[m, n]\, c_x(x - mT,\, y - nT) \qquad (6)$$

where $c_x(\cdot)$ indicates a functional that computes the partial derivative of its argument in the horizontal direction. Once again, one arrives at a definition of discrete differentiation by sampling both sides of the above equation on the original sampling lattice:

$$f_x[m, n] = \sum_{k, l} f[k, l]\, c_x\big((m - k)T,\, (n - l)T\big) \qquad (7)$$

Because the interpolator is a separable product of sinc functions, the sampled derivative becomes

$$c_x(mT, nT) = d[m]\, \delta[n] \qquad (8)$$

where $T$-sampling the sinc function gives the Kronecker delta $\delta[n]$, and $d[m]$ is the $T$-sampled sinc derivative. Thus, the 2-D derivative reduces to a pair of 1-D operations.

In this regard, the separable sinc seems somewhat odd: the resulting two-dimensional (2-D) $x$-derivative filter has nonzero samples only along a row of the input lattice, whereas the $y$-derivative filter has nonzero samples only along a column. Furthermore, as in the 1-D case, the coefficients of this filter decay very slowly, and accurate implementation requires very large filters. The directional derivative is computed as

$$f_{\hat u} = u_x\, f_x + u_y\, f_y \qquad (9)$$

where $\hat u = (u_x, u_y)$ is a unit vector specifying the direction of differentiation [5]-[7]. The directional derivative at an angle of 45° will contain nonzero samples from only a single row and column, as illustrated in Fig. 2. The resulting 2-D filter thus does not take into consideration the primary use of derivatives when applied to signals of more than one dimension, and consideration of this problem leads to an additional set of constraints on the choice of derivative filters. Altogether, both local differences and approximations to sinc derivatives seem inadequate as discrete extensions of differential operators. The solution to this dilemma is to consider alternative interpolation functions in place of the sinc in (5).
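To illustrate how the separable structure of (8) and the linear combination of (9) are used together in practice, here is a minimal sketch (Python with NumPy/SciPy; the function names are ours, and any matched prefilter/derivative pair p, d can be plugged in):

```python
import numpy as np
from scipy.ndimage import convolve1d

def axis_derivatives(img, p, d):
    """Separable axis derivatives with a matched prefilter p and derivative d:
    f_x applies d along x (axis 1) and p along y (axis 0), and vice versa.
    Note: convolve1d performs true convolution, so the sign of the
    antisymmetric kernel d fixes the sign convention of the derivative."""
    fx = convolve1d(convolve1d(img, p, axis=0), d, axis=1)
    fy = convolve1d(convolve1d(img, p, axis=1), d, axis=0)
    return fx, fy

def directional_derivative(fx, fy, theta):
    """Eq. (9): derivative along the unit vector (cos(theta), sin(theta))."""
    return np.cos(theta) * fx + np.sin(theta) * fy
```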

Fig. 2. Illustration of sinc-based 2-D differentiators. Shown are: (a) horizontal, (b) vertical, and (c) oblique (45°) differentiation filters. The last of these is constructed from a linear combination of the on-axis filters.

Two alternative interpolation solutions appear commonly in the literature: fitting polynomial approximations (e.g., [8]-[13]) and smoothing with a (truncated) Gaussian filter (e.g., [14]-[17]). These choices are, however, somewhat arbitrary, and the primary topic of this paper is the development of a more principled choice of interpolator function. It is important to note that giving up the sinc approximation means that the interpolator will not be spectrally flat. Thus, the resulting derivative filters will not compute the derivative of the original signal, but of a spectrally reshaped signal.

II. DERIVATIVE FILTER CONSTRAINTS

We start with an assumption that the interpolation function should be separable. This both simplifies the design problem and improves the efficiency of subsequent derivative computations. We also assume that all axes should be treated the same, and that the interpolator is symmetric about the origin. More precisely, the interpolator function is a separable product of identical symmetric 1-D functions. In two dimensions, the interpolator is written as a separable product $c(x, y) = p(x)\, p(y)$, and its partial derivative (with respect to $x$) is

$$c_x(x, y) = d_1(x)\, p(y) \qquad (10)$$

where $d_1(x)$ is the derivative of $p(x)$. The discrete derivatives are computed using two discrete 1-D filters, $p[n]$ and $d_1[n]$, which are the $T$-sampled versions of $p$ and $d_1$, respectively. With this interpolator, the sampled derivative (as in (8)) becomes

$$c_x(mT, nT) = d_1[m]\, p[n] \qquad (11)$$

Thus, differentiation in $x$ is accomplished by separable convolution with the differentiation filter $d_1$ along the $x$-dimension and with the interpolator $p$ in the $y$-dimension.

How then do we choose $p$? Rather than trying to approximate the sinc, which is virtually impossible with short filters, we directly impose the constraint of (9). That is, we seek filters such that the derivative in an arbitrary direction can be computed from a linear combination of the axis derivatives. This type of rotation-invariance property was first examined by Danielsson, who compared various standard filter sets based on a similar criterion [5].

We note that the directional derivative filters computed using (9) will not necessarily be rotated versions of a common filter (a property that has been termed steerability in the computer vision literature [7]). If we were to include a steerability constraint along with our assumption that the interpolator be a separable product of symmetric 1-D functions, then the resulting interpolator would necessarily be a Gaussian. We therefore chose not to impose steerability as a primary constraint, although we discuss its use as a regularizing constraint in the filter design stage in Section III. We have also explored nonseparable filters, but the marginal improvement in accuracy appears to be not worth the considerably higher computational expense.

In summary, we formulate the desired constraints for first-order differentiation, with the second-order and higher order constraints building upon the first-order constraint. We then show how these same basic concepts can be generalized to both higher order derivatives and higher dimensions, and we develop a practical design algorithm for interpolator filters with matched derivative filters of arbitrary order and length.

A. First Order

We first compute the Fourier transform of both sides of (9), relying on the well-known fact that differentiation corresponds to multiplication by a unit-slope imaginary ramp function in the frequency domain [2]:

$$u_x\, D_1(\omega_x) P(\omega_y) + u_y\, P(\omega_x) D_1(\omega_y) = j (u_x \omega_x + u_y \omega_y)\, P(\omega_x) P(\omega_y) \qquad (12)$$

where $P(\omega)$ and $D_1(\omega)$ are the discrete-space Fourier transforms of the 1-D filters $p[n]$ and $d_1[n]$, respectively. Rearranging terms gives

$$u_x \big[ D_1(\omega_x) - j \omega_x P(\omega_x) \big] P(\omega_y) + u_y \big[ D_1(\omega_y) - j \omega_y P(\omega_y) \big] P(\omega_x) = 0. \qquad (13)$$

This constraint should hold for all frequencies $(\omega_x, \omega_y)$ and for all orientations, as represented by the unit vector $\hat u$. We now define an error functional by dropping the factor of $j$ and integrating the squared error in this equality over the frequency domain. Expanding the square, we note that the cross terms (those containing the product $u_x u_y$) will integrate to zero because they are antisymmetric. The two remaining integrands differ only by a permutation of the axes, and thus the values of the two integrals will be identical. Combining these into a single term (and dropping the irrelevant factor of two) gives the numerator of (15). In addition, in order to make the error independent of the magnitude of the filters (e.g., to avoid the trivial zero solution), we divide this expression by the total Fourier energy of the prefilter, as shown in (14), in the form of a Rayleigh quotient. Finally, since $P(\omega)$ is strictly real due to the assumed symmetry of the prefilter, we can factor it out of both the numerator and denominator expressions and eliminate it from the quotient, leaving

$$E\{p, d_1\} = \frac{\int_{-\pi}^{\pi} d\omega\, \big| j \omega P(\omega) - D_1(\omega) \big|^2}{\int_{-\pi}^{\pi} d\omega\, \big| P(\omega) \big|^2}. \qquad (16)$$

Thus, the 2-D constraint has been reduced to a 1-D constraint. Equation (16) is the fundamental error functional used within this paper. We note that the integral in the numerator can be augmented to include a weighting function over frequency: for applications in which the signals to be differentiated are known to occupy a particular range of frequencies, inclusion of such a weighting function can improve differentiation accuracy over this range.

B. Second Order

Similar to the first-order case, the constraint embodying the desired linear-algebraic properties of the second-order derivative is

$$\frac{\partial^2 f}{\partial \hat u^2} = u_x^2\, f_{xx} + 2 u_x u_y\, f_{xy} + u_y^2\, f_{yy}. \qquad (17)$$

In the Fourier domain, this constraint takes the form

$$u_x^2\, D_2(\omega_x) P(\omega_y) + 2 u_x u_y\, D_1(\omega_x) D_1(\omega_y) + u_y^2\, P(\omega_x) D_2(\omega_y) = \big( j u_x \omega_x + j u_y \omega_y \big)^2 P(\omega_x) P(\omega_y) \qquad (18)$$

where $P(\omega)$ and $D_2(\omega)$ are the Fourier transforms of the 1-D filters $p[n]$ and $d_2[n]$, respectively. As before, we define the numerator of our error functional by integrating the squared error over both the frequency and orientation variables (19). This can again be simplified by noting that cross terms containing odd powers of $u_x u_y$ are antisymmetric and will integrate to zero, leaving the reduced expression (20). As in the first-order case, the full error functional (21) is then formed by dividing by the $L_2$-norm of the prefilter. The first term of this functional reduces to a 1-D constraint, but the second term (which couples the first-order filters through the mixed derivative) does not, and thus we are forced to optimize the full 2-D expression.
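The functional in (16) is easy to evaluate numerically for any candidate filter pair, which is useful for checking a design. A minimal sketch (our illustration, approximating the integrals on a K-point frequency grid):

```python
import numpy as np

def first_order_error(p, d, K=256):
    """Evaluate the Rayleigh quotient of Eq. (16) on a K-point frequency grid:
    E = sum |j*w*P(w) - D(w)|^2 / sum |P(w)|^2.  Odd-length, origin-centered
    filters are assumed."""
    n = np.arange(len(p)) - (len(p) - 1) // 2
    w = np.linspace(-np.pi, np.pi, K, endpoint=False)
    F = np.exp(-1j * np.outer(w, n))      # DTFT matrix on the grid
    P, D = F @ p, F @ d
    return np.sum(np.abs(1j * w * P - D) ** 2) / np.sum(np.abs(P) ** 2)

# sanity check: 3-tap binomial prefilter with a central-difference kernel
print(first_order_error(np.array([0.25, 0.5, 0.25]), np.array([0.5, 0.0, -0.5])))
```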

TABLE I. EXAMPLE FILTER TAPS FOR OPTIMAL DIFFERENTIATORS OF VARIOUS ORDERS AND SIZES. SHOWN ARE HALF OF THE FILTER TAPS; THE OTHER HALF ARE DETERMINED BY SYMMETRY: THE PREFILTER AND EVEN-ORDER DERIVATIVES ARE SYMMETRIC, AND THE ODD-ORDER DERIVATIVES ANTISYMMETRIC, ABOUT THE ORIGIN (SAMPLE NUMBER 0).
C. Higher Order and Higher Dimensions

The constraint embodying the desired linear algebraic properties of the $N$th-order derivative is

$$\frac{\partial^N f}{\partial \hat u^N} = \sum_{k=0}^{N} \binom{N}{k}\, u_x^k\, u_y^{N-k}\, \frac{\partial^N f}{\partial x^k\, \partial y^{N-k}} \qquad (22)$$

where the right-hand side requires the full set of first- through $N$th-order derivative filters. The problem formulation defined above in two dimensions also extends naturally to higher dimensions: in $D$ dimensions, the $N$th-order constraints come from expanding the expression

$$\big( u_1 \omega_1 + u_2 \omega_2 + \cdots + u_D \omega_D \big)^N\, P(\omega_1)\, P(\omega_2) \cdots P(\omega_D) \qquad (23)$$

where we have used a numerical index variable as an axis label in place of the previous mnemonic labels ($x$, $y$). As before, this can be used to generate an error functional for filter design. In general, along each axis the error functional in (23) reduces to the 1-D constraint $D_n(\omega) = (j\omega)^n P(\omega)$.

III. DISCRETE DERIVATIVE FILTER DESIGN

With the desired constraints in place, we now describe a practical method for designing derivative filters of a given order and support (length). For notational clarity, we assume the design of odd-length filters; the formulation for even-length filters requires only minor modifications. We also assume that the prefilter and all even-order derivative kernels are symmetric about the central tap, and that all odd-order derivative kernels are antisymmetric.

A. First Order

In order to design a filter of length $L$, we define a parameter vector of length $(L+1)/2$ containing the independent prefilter samples (the others are determined by the symmetry assumption). Similarly, we define a parameter vector of length $(L-1)/2$ containing the independent derivative kernel samples (the remaining samples are determined by the antisymmetry of the kernel). We construct a discrete version of the error functional in (16)

$$E\{u\} = \frac{\big| \mathrm{diag}(\omega)\, F_s\, p - F_a\, d \big|^2}{\big| F_s\, p \big|^2} \qquad (24)$$

where $F_s$ and $F_a$ are matrices whose columns contain the real (symmetric) and imaginary (antisymmetric) components of the discrete Fourier basis of size $K$, such that $F_s p$ gives the discrete Fourier transform (DFT) of the (symmetric) prefilter and $F_a d$ gives the DFT of the (antisymmetric) derivative filter. This may be bundled more compactly as

$$E\{u\} = \frac{| M u |^2}{| B u |^2} \qquad (25)$$

with concatenated matrices $M$ and $B$ and parameter vector $u$. This error function is in the form of a Rayleigh quotient, and thus the solution may be found using standard techniques: we solve for the generalized eigenvector $u$ associated with the minimal eigenvalue of $M^T M$ and $B^T B$ (i.e., $M^T M u = \lambda B^T B u$). This eigenvector is then rescaled so that the resulting prefilter has unit sum, and the prefilter and derivative filters are constructed by symmetrizing or antisymmetrizing the corresponding portions of $u$ (i.e., the prefilter is constructed by concatenating a reversed copy of its half-vector with itself, and the derivative filter by concatenating a reversed and negated copy of its half-vector with itself). We note that it would be preferable to introduce the unit-sum constraint directly into the error function, but this would destroy the Rayleigh quotient form and would lead to a less elegant solution.

1) Uniqueness: The uniqueness of the solution depends on the isolation of the minimal eigenvalue. In experimenting with the design procedure, we found that the minimal eigenvalue comes within the machine tolerance of zero when designing filters of length 11 samples or more; for larger filters, the second eigenvalue also approaches zero. Thus, an additional constraint must be invoked to uniquely constrain (i.e., regularize) the solution. As mentioned earlier, our primary design constraint does not guarantee that directional derivative filters will be steerable; the sinc-based derivative filters are extreme examples of the failure of this property, as shown in Fig. 2. One possible secondary constraint is that the low-pass filter be as flat as possible over the entire frequency range [4]. But we choose instead a constraint that is more consistent with our emphasis on rotation-invariance: we use steerability as a secondary constraint in order to choose from amongst those filters that satisfy the primary constraint.
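A sketch of this eigenvector solution, assuming odd length and following the parameterization described above (our own construction of the matrices; details such as K and the frequency grid are choices, not the paper's exact values):

```python
import numpy as np
import scipy.linalg

def design_first_order(L, K=256):
    """Minimize the discretized Rayleigh quotient of (24)-(25) (our
    construction): p is symmetric with half-taps a, d is antisymmetric with
    half-taps b, and the constraint D(w) = j*w*P(w) becomes the real-valued
    residual  w*P(w) + 2*sum_k b_k*sin(w*k) = 0."""
    assert L % 2 == 1, "odd-length design assumed"
    c = (L - 1) // 2
    w = np.linspace(-np.pi, np.pi, K, endpoint=False)
    k = np.arange(1, c + 1)
    Ap = np.hstack([np.ones((K, 1)), 2 * np.cos(np.outer(w, k))])  # P = Ap @ a
    Ad = 2 * np.sin(np.outer(w, k))            # D = -1j * (Ad @ b)
    M = np.hstack([w[:, None] * Ap, Ad])       # residual matrix
    B = np.hstack([Ap, np.zeros_like(Ad)])     # prefilter-energy matrix
    evals, evecs = scipy.linalg.eig(M.T @ M, B.T @ B)
    ok = np.where(np.isfinite(evals))[0]       # B.T @ B is singular -> inf modes
    u = np.real(evecs[:, ok[np.argmin(np.abs(evals[ok]))]])
    a, b = u[:c + 1], u[c + 1:]
    p = np.concatenate([a[:0:-1], a])          # [a_c..a_1, a_0, a_1..a_c]
    d = np.concatenate([-b[::-1], [0.0], b])   # antisymmetric, zero center tap
    s = p.sum()                                # rescale for a unit-sum prefilter
    return p / s, d / s

p5, d5 = design_first_order(5)
```

Under these assumptions, design_first_order(5) should yield taps close to the 5-tap entries of Table I, up to the frequency sampling and sign conventions.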

Fig. 3. Optimally designed 2-D differentiators. Shown are: (a) horizontal, (b) vertical, and (c) oblique (30°) differentiation filters. The last of these is constructed from a linear combination of the on-axis filters, as specified by (9). These filters should be compared with those of Fig. 2.

In particular, we seek a solution of the form

$$u = u_0 + U a \qquad (26)$$

where $u_0$ is any solution minimizing (25) and $U$ is a matrix containing the zero-eigenvalue eigenvectors from (25). The parameter vector $a$ is chosen to minimize the steerability error functional (27), which penalizes, across all orientations, the difference between rotated copies of the directional derivative filter and the filters synthesized from the axis filters via (9); the rotation operator $R_\theta$ is implemented on a discrete lattice using bicubic interpolation, and the mismatch is measured through the 2-D Fourier operator. A gradient descent minimization is then performed on this functional in order to optimize $a$. Note that this secondary steerability constraint does not interfere with the primary constraint of (25), since the two constraints are imposed in complementary orthogonal subspaces. For long kernels, this nonlinear minimization is initialized with an optimally designed 11-tap filter padded with zeros to accommodate the desired filter length. The resulting minimization yields a pair of stable filters that perfectly satisfy the original constraint of (16) and are maximally steerable.

B. Second Order

Unlike the first-order filter design problem, the second-order error functional (21) does not have a closed-form solution. As such, we have adopted an iterative approach whereby the first-order filters are used to initialize the design of the second-order filters. Specifically, assuming the first-order solutions for the prefilter and derivative filter, a discrete, quadratic version of the error functional on the second-order filter values is written as in (29), where the parameter vector contains one half of the full filter taps. The minimum of this quadratic error function is easily seen to be the standard least-squares solution (30). This initial estimate, along with the previously designed prefilter and first-order filter, is then used to initialize a gradient descent minimization of the full (nonlinear) error functional given in (21), as in (28).

C. Higher Order

It is relatively straightforward to generalize the filter design from the previous two sections to filters of arbitrary order. Following the same general framework yields a simultaneous constraint on the form of the prefilter and the first- through $N$th-order derivative filters. The design of such a set begins by designing a prefilter and first-order derivative pair (Section III-A). The second- through $N$th-order filters are then designed in an iterative fashion, each initialized with the design of the previous orders. Specifically, given the prefilter and the first- through $(N-1)$st-order derivative filters, the $N$th-order derivative filter is initially estimated via a quadratic error function analogous to (29), whose least-squares minimum is given by (31); the parameterization is symmetric for even-order derivatives and antisymmetric for odd-order derivatives. This estimate, along with the lower order derivative filters, is then used to initialize a gradient descent minimization of the (nonlinear) error functional that describes the desired derivative relationships [e.g., (23)].
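As an illustration of the initialization step, the following sketch (our construction) computes the closed-form least-squares estimate of a symmetric second-order kernel, in the spirit of (29) and (30), by fitting $D_2(\omega) \approx (j\omega)^2 P(\omega) = -\omega^2 P(\omega)$ for a given prefilter:

```python
import numpy as np

def init_second_order(p, K=256):
    """Closed-form least-squares initialization in the spirit of (29)-(30):
    fit a symmetric kernel d2 so that D2(w) ~ -w**2 * P(w), given an
    already-designed prefilter p (odd length, origin-centered)."""
    L = len(p)
    c = (L - 1) // 2
    n = np.arange(L) - c
    w = np.linspace(-np.pi, np.pi, K, endpoint=False)
    P = (np.exp(-1j * np.outer(w, n)) @ p).real   # real, since p is symmetric
    A = np.hstack([np.ones((K, 1)),
                   2 * np.cos(np.outer(w, np.arange(1, c + 1)))])
    a, *_ = np.linalg.lstsq(A, -w**2 * P, rcond=None)  # half-taps of d2
    return np.concatenate([a[:0:-1], a])
```

In the paper, this estimate then seeds a gradient descent on the full second-order functional (21), jointly with the previously designed prefilter and first-order filter.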

D. Summary

A summary of the algorithm is as follows.
1) Choose the dimensionality D, the order N, and the kernel size L.
2) Choose the number of Fourier samples K >= L.
3) Precompute the Fourier matrices F_s and F_a.
4) Solve for first-order filters of size min(L, 11), using (25).
5) If L > 11, solve for unique filters of size L by numerically optimizing (27), initialized with the L = 11 solution.
6) For each order n = 2 up to N:
   a) solve for the nth-order filter using (31);
   b) numerically optimize (23) over all filter orders 1 through n, together with the filters designed for order n - 1, initializing with the solution from part a).

IV. RESULTS

We have designed filters of various sizes and orders; some example filter taps are given in Table I. (Source code, in MATLAB, is available at http://www.cs.dartmouth.edu/farid/research/derivative.html.) Each design produces a set of matched 1-D differentiation filters that are meant to be used jointly in computing the full set of separable derivatives of the chosen order. Note that when designing, for example, a second-order derivative, the accompanying first-order differentiation filter is not necessarily optimal with respect to the first-order constraint in isolation. Note also that although we have constrained all differentiation filters in each set to be the same size, there is no inherent reason why different filters could not be designed with a different number of taps. In the following sections, the accuracy of these filters is compared with a number of standard derivative filters.

Fig. 4. First-order derivatives: Fourier magnitudes, plotted from 0 to π. The dashed line corresponds to the product of a ramp and the prefilter; the solid line corresponds to the derivative filter.
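As a usage sketch, the following applies a 5-tap matched pair to an image (Python with NumPy/SciPy). The tap values below are those commonly quoted from Table I of the published paper; they should be verified against the authors' MATLAB code before being relied upon:

```python
import numpy as np
from scipy.ndimage import convolve1d

# 5-tap matched pair; values as commonly quoted from Table I of the
# published paper -- verify against the authors' MATLAB code before use.
p  = np.array([0.037659, 0.249153, 0.426375, 0.249153, 0.037659])
d1 = np.array([-0.109604, -0.276691, 0.0, 0.276691, 0.109604])

img = np.random.rand(64, 64)                             # any 2-D signal
fx = convolve1d(convolve1d(img, p, axis=0), d1, axis=1)  # d/dx: d1 across columns
fy = convolve1d(convolve1d(img, p, axis=1), d1, axis=0)  # d/dy: d1 across rows
# Sign convention depends on convolution vs. correlation; negate d1 if the
# gradient direction comes out reversed for your coordinate conventions.
```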

Fig. 5. First-order derivatives: errors in estimating the orientation of a sinusoidal grating of varying spatial frequencies and orientations. The intensity image in the left column shows the orientation error (brighter pixels correspond to larger errors) across all spatial frequencies; the axes span the range [-π, π]. The plots in the second and third columns correspond to the indicated radial and angular slices through this 2-D frequency space.

A. First Order

Shown in Fig. 3 are the horizontal and vertical derivative filters of an optimal design, along with the synthesized directional derivative at an angle of 30°. Note that these filters are much closer to being rotation invariant than the sinc-based derivatives of Fig. 2.

Shown in Fig. 4 is the frequency response of several first-order derivative filters, along with the product of an imaginary ramp and the frequency response of their associated prefilters. If the filters were perfectly matched, as per our design criteria, then these responses should be identical. Shown are the responses from our optimally designed 3-, 5-, and 7-tap filters. For comparison, we also show the responses of a variety of commonly used differentiators: a standard (binomial) 2-tap, a standard (binomial) 3-tap, a 3-tap Sobel operator [18], 5- and 7-tap Gaussians (Barron et al. [16], for example, suggest a 5-tap Gaussian), and a 15-tap truncated sinc function (see Fig. 1). The optimal filters of comparable (or smaller) support are seen to outperform all of the conventional filters, usually by a substantial margin.

Figs. 5 and 6 show the errors in estimating the orientation of a 2-D sinusoidal grating of size 32 x 32, with horizontal and vertical spatial frequencies ranging from 0.0078 to 0.4062 cycles/pixel. Each axis derivative is computed by applying the first-order derivative filter along that axis and the prefilter along the other, and the orientation is computed using a least-squares estimator across the entire 32 x 32 image: we solve for the orientation that best accounts for the measured gradients, where the sum in the least-squares objective is taken over all image positions. The errors for our optimal filters are substantially smaller than those of all of the conventional differentiators.
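One standard closed-form version of such a least-squares orientation estimator (our reconstruction of an estimator of this general type, not necessarily the paper's exact objective) is sketched below, together with a grating test in the spirit of Figs. 5 and 6 (the grating frequencies are arbitrary choices):

```python
import numpy as np

def ls_orientation(fx, fy):
    """Closed-form least-squares orientation from a gradient field: the
    stationary points of sum_n (sin(t)*fx[n] - cos(t)*fy[n])**2 satisfy
    tan(2t) = 2*sum(fx*fy) / sum(fx**2 - fy**2)."""
    return 0.5 * np.arctan2(2.0 * np.sum(fx * fy), np.sum(fx**2 - fy**2))

# Grating test in the spirit of Figs. 5 and 6 (frequencies are arbitrary):
x, y = np.meshgrid(np.arange(32), np.arange(32))
wx, wy = 2 * np.pi * 0.10, 2 * np.pi * 0.25   # rad/pixel
img = np.sin(wx * x + wy * y)
# Compute fx, fy with any matched pair (e.g., the separable scheme above),
# then compare ls_orientation(fx, fy) to arctan2(wy, wx), modulo pi.
```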

Fig. 6. First-order derivatives: errors in estimating the orientation of a sinusoidal grating of varying spatial frequencies and orientations (see caption of Fig. 5). Note that the scaling of the truncated sinc filter errors is different than the others.

Fig. 7. Second-order derivatives: Fourier magnitudes, plotted from -π to π. Left side: first-order filter (solid line), compared with the product of a ramp and the prefilter (dashed line). Right side: second-order filter, compared with the product of a squared ramp and the prefilter.

Note that the standard deviations of the 5- and 7-tap Gaussian comparison filters were set to 0.96 and 1.12 pixels, respectively. These values minimize the rms error in the frequency-response matching between the prefilter and the derivative filter.
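The quoted standard deviations can be checked with a small search over sigma; the following sketch (our illustration; the paper's exact rms criterion and sampling may differ) scores a sampled Gaussian prefilter/derivative pair by the mismatch between $D(\omega)$ and $j\omega P(\omega)$:

```python
import numpy as np

def gaussian_pair_rms(sigma, L=5, K=256):
    """RMS mismatch between D(w) and j*w*P(w) for a sampled Gaussian prefilter
    and its sampled derivative (one plausible reading of the criterion)."""
    n = np.arange(L) - (L - 1) // 2
    p = np.exp(-n**2 / (2.0 * sigma**2))
    p /= p.sum()
    d = -n / sigma**2 * p                      # sampled Gaussian derivative
    w = np.linspace(-np.pi, np.pi, K, endpoint=False)
    F = np.exp(-1j * np.outer(w, n))
    return np.sqrt(np.mean(np.abs(1j * w * (F @ p) - F @ d) ** 2))

sigmas = np.linspace(0.5, 2.0, 151)
best = sigmas[np.argmin([gaussian_pair_rms(s) for s in sigmas])]
print(best)  # expected to land near one pixel (the paper reports 0.96)
```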

Fig. 8. Second-order derivatives: errors in estimating the orientation of a sinusoidal grating of varying spatial frequencies and orientations. Note that the scaling of the 3-tap filter's errors is different than the others.

B. Second Order

Shown in Fig. 7 is the frequency response of several first- and second-order derivative filters. Also shown for the first-order (second-order) derivative is the product of an imaginary ramp (negative parabola, i.e., squared imaginary ramp) and the frequency response of the prefilter. If the filters were perfectly matched, as per our design criteria, then these products should be identical to the frequency responses of the corresponding derivative filters. Shown are the responses from our optimally designed 3-, 5-, and 7-tap filters. For comparison, we also show the response from a standard 3-tap filter and a 5-tap Gaussian (as in the previous section, the width of the Gaussian is 0.96 pixels).

Shown in Fig. 8 are the errors from estimating the orientation of a 2-D sinusoidal grating using second-order derivatives. As before, orientation estimation was performed on gratings of size 32 x 32, with horizontal and vertical spatial frequencies ranging from 0.0078 to 0.4062 cycles/pixel, and the orientation is computed as a least-squares estimate across the entire 32 x 32 image. Here, each pure second derivative is computed by applying the second-order derivative filter in the horizontal (vertical) direction and the prefilter in the vertical (horizontal) direction. Specifically, we solve for the angle that minimizes the second-order analog of the first-order objective, where the sum is taken over all positions in the image. The errors for our optimal filters are substantially smaller than those of the standard and Gaussian filters, and thus they are superior to most Gaussian differentiators used in the literature.

V. DISCUSSION

We have described a framework for the design of discrete multidimensional differentiators. Unlike previous methods, we formulate the problem in terms of minimizing errors in the estimated gradient direction, for a fixed size kernel. Recall that our filters were designed so as to optimally respect the linear algebraic properties of multidimensional differentiation. This emphasis on the accuracy of the direction of the gradient vector is advantageous for many multidimensional applications; the property is critical when estimating, for example, local orientation in an image (by taking the arctangent of the two components of the gradient). We have also enforced a number of auxiliary properties, including a fixed finite extent, symmetry, unit D.C. response (in the prefilter), and separability. The result is a matched set of filters (low-pass prefilter and differentiators up to the desired order), a concept first introduced in [19]. Although we also tested nonseparable designs (see also [20]), the marginal improvement in accuracy appears to not be worth the considerably higher computational expense. Finally, we incorporated steerability as an orthogonal regularizing constraint for large filters, where the primary constraint was insufficient to give a unique solution.

A number of enhancements could be incorporated into our design method. We have ignored noise and have not incorporated a prior model on images; simple spectral models could easily be included, but a more realistic treatment would likely prove impractical. We have also not explicitly modeled the image sensing process (e.g., [21]). The basic case of finite-impulse response (FIR) filter design can be extended in several ways. A number of authors have considered infinite-impulse response (IIR) solutions (e.g., [23]); it would be interesting to consider the joint design of IIR and FIR filters for differentiation of video signals, for example, for use in temporal differentiation [22]. It is also natural to combine multiscale decompositions with differential measurements (e.g., [24]-[27]), and thus it might be worth considering the problem of designing a multiscale decomposition that provides optimized differentiation. Finally, the basic design method might also be optimized for specific applications, such as motion estimation or depth from stereo (e.g., [9], [10]).

REFERENCES

[1] B. Kumar and S. C. Dutta Roy, "Coefficients of maximally linear, FIR digital differentiators for low frequencies," Electron. Lett., vol. 24, no. 9, pp. 563–565, Apr. 1988.
[2] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[3] J. le Bihan, "Maximally linear FIR digital differentiators," Circuits Syst. Signal Processing, vol. 14, no. 5, pp. 633–637, 1995.
[4] I. W. Selesnick, "Maximally flat lowpass digital differentiators," IEEE Trans. Circuits Syst. II, vol. 49, pp. 219–223, Mar. 2002.
[5] P. E. Danielsson, "Rotation-invariant linear operators with directional response," in Proc. 5th Int. Conf. Pattern Recognition, Miami, FL, Dec. 1980.
[6] J. J. Koenderink and A. J. van Doorn, "Representation of local geometry in the visual system," Biol. Cybern., vol. 55, pp. 367–375, 1987.
[7] W. T. Freeman and E. H. Adelson, "The design and use of steerable filters," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 9, pp. 891–906, Sept. 1991.
[8] J. M. S. Prewitt, "Object enhancement and extraction," in Picture Processing and Psychopictorics. New York: Academic, 1970, pp. 75–149.
[9] R. M. Haralick, "Digital step edges from zero crossing of second directional derivatives," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, no. 1, pp. 58–68, Jan. 1984.
[10] T. Vieville and O. Faugeras, "Robust and fast computation of unbiased intensity derivatives in images," in Proc. European Conf. Computer Vision, Santa Margherita Ligure, Italy, 1992, pp. 203–211.

[11] J. de Vriendt, "Fast computation of unbiased intensity derivatives in images using separable filters," Int. J. Comput. Vis., vol. 13, no. 3, pp. 259–269, 1994.
[12] M. Azaria, I. Vitsnudel, and Y. Y. Zeevi, "The design of two-dimensional gradient estimators based on one-dimensional operators," IEEE Trans. Image Processing, vol. 5, no. 1, pp. 155–159, Jan. 1996.
[13] I. R. Khan and R. Ohba, "New design of full band differentiators based on Taylor series," Proc. Inst. Elect. Eng. Visual Image Signal Processing, vol. 146, no. 4, pp. 185–189, Aug. 1999.
[14] P. Meer and I. Weiss, "Smoothed differentiation filters for images," J. Vis. Commun. Image Represent., vol. 3, no. 1, pp. 58–72, 1992.
[15] T. Lindeberg, "Discrete derivative approximations with scale-space properties: a basis for low-level feature extraction," J. Math. Imaging Vis., vol. 3, pp. 349–376, 1993.
[16] J. Barron, D. Fleet, and S. Beauchemin, "Performance of optical flow techniques," Int. J. Comput. Vis., vol. 12, no. 1, pp. 43–77, 1994.
[17] B. ter Haar Romeny, W. Niessen, J. Wilting, and L. Florack, "Differential structure of images: accuracy of representation," in Proc. 1st Int. Conf. Image Processing, Austin, TX, Nov. 1994, pp. 21–25.
[18] R. Gonzalez and R. Woods, Digital Image Processing. New York: Addison-Wesley, 1992.
[19] E. P. Simoncelli, "Design of multi-dimensional derivative filters," in Proc. 1st IEEE Int. Conf. Image Processing, Austin, TX, Nov. 1994, vol. I, pp. 790–793.
[20] H. Farid and E. P. Simoncelli, "Optimally rotation-equivariant directional derivative kernels," in Proc. 7th Int. Conf. Computer Analysis of Images and Patterns, Kiel, Germany, Sept. 1997, pp. 207–214.
[21] M. Elad, P. Teo, and Y. Hel-Or, "Optimal filters for gradient-based motion estimation," in Proc. Int. Conf. Computer Vision, Kerkyra, Greece, Sept. 1999.
[22] B. Carlsson, A. Ahlen, and M. Sternad, "Optimal differentiators based on stochastic signal models," IEEE Trans. Signal Processing, vol. 39, no. 2, pp. 341–353, Feb. 1991.
[23] D. Fleet and K. Langley, "Recursive filters for optical flow," IEEE Trans. Pattern Anal. Machine Intell., vol. 17, no. 1, pp. 61–67, Jan. 1995.
[24] G. Granlund, "In search of a general picture processing operator," Comput. Graph. Image Processing, vol. 8, pp. 155–173, 1978.
[25] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. 7th Int. Joint Conf. Artificial Intelligence, Vancouver, BC, Canada, 1981, pp. 674–679.
[26] P. Burt, "Pyramid methods in image processing," RCA Engineer, vol. 29, no. 6, 1984.
[27] E. P. Simoncelli and W. T. Freeman, "The steerable pyramid: a flexible architecture for multi-scale derivative computation," in Proc. 2nd IEEE Int. Conf. Image Processing, Washington, DC, Oct. 1995, vol. III, pp. 444–447.

Hany Farid received the B.S. degree in computer science and applied mathematics from the University of Rochester, Rochester, NY, in 1988, and the M.S. and Ph.D. degrees in computer science from the University of Pennsylvania, Philadelphia, in 1997. He joined the faculty at Dartmouth College, Hanover, NH, in 1999, following a two-year post-doctoral position in brain and cognitive sciences at the Massachusetts Institute of Technology, Cambridge.

Eero P. Simoncelli (S'92–M'93–SM'04) received the B.S. degree in physics from Harvard University, Cambridge, MA, in 1984, and the M.S. and Ph.D. degrees, both in electrical engineering, from the Massachusetts Institute of Technology, Cambridge, in 1988 and 1993, respectively. He was an Assistant Professor in the Computer and Information Science Department, University of Pennsylvania, Philadelphia, from 1993 to 1996. He moved to New York University, New York, in September 1996, where he is currently an Associate Professor in Neural Science and Mathematics. In August 2000, he became an Associate Investigator of the Howard Hughes Medical Institute, under their new program in computational biology. His research interests span a wide range of topics in the representation and analysis of visual images, in both machine and biological systems.
