
Fast and accurate auto-focusing algorithm based on the combination of depth from focus and improved depth from defocus

Xuedian Zhang,* Zhaoqing Liu, Minshan Jiang, and Min Chang

Shanghai Key Laboratory of Contemporary Optics System, College of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, 200093 Shanghai, China

*zhangxuedian@hotmail.com

Abstract: An auto-focus method for digital imaging systems is proposed that combines depth from focus (DFF) and improved depth from defocus (DFD). The traditional DFD method is improved to become more rapid, which achieves a fast initial focus. The defocus distance is first calculated by the improved DFD method. The result is then used as a search step in the searching stage of the DFF method. A dynamic focusing scheme is designed for the control software, which is able to eliminate environmental disturbances and other noises so that a fast and accurate focus can be achieved. An experiment is designed to verify the proposed focusing method, and the results show that the method's efficiency is at least 3–5 times higher than that of the traditional DFF method.
©2014 Optical Society of America
OCIS codes: (100.2000) Digital image processing; (110.3000) Image quality assessment;
(110.5200) Photography; (110.3925) Metrics.

References and links
1. S. Yazdanfar, K. B. Kenny, K. Tasimi, A. D. Corwin, E. L. Dixon, and R. J. Filkins, “Simple and robust image-
based autofocusing for digital microscopy,” Opt. Express 16(12), 8670–8677 (2008).
2. J. W. Han, J. H. Kim, H. T. Lee, and S. J. Ko, “A novel training based auto-focus for mobile-phone cameras,”
IEEE Trans. Consum. Electron. 57(1), 232–238 (2011).
3. B. K. Park, S. S. Kim, D. S. Chung, S. D. Lee, and C. Y. Kim, “Fast and accurate auto focusing algorithm based
on two defocused images using discrete cosine transform,” Proc. SPIE 6817, 68170D (2008).
4. C. S. Liu, P. H. Hu, and Y. C. Lin, “Design and experimental validation of novel optics-based autofocusing
microscope,” Appl. Phys. B 109(2), 259–268 (2012).
5. C. S. Liu, Y. C. Lin, and P. H. Hu, “Design and characterization of precise laser-based autofocusing microscope
with reduced geometrical fluctuations,” Microsyst. Technol. 19(11), 1717–1724 (2013).
6. W. Y. Hsu, C. S. Lee, P. J. Chen, N. T. Chen, F. Z. Chen, Z. R. Yu, C. H. Kuo, and C. H. Hwang, “Development
of the fast astigmatic auto-focus microscope system,” Meas. Sci. Technol. 20(4), 045902 (2009).
7. D. K. Cohen, W. H. Gee, M. Ludeke, and J. Lewkowicz, “Automatic focus control: the astigmatic lens
approach,” Appl. Opt. 23(4), 565–570 (1984).
8. C. Mo and B. Liu, “An auto-focus algorithm based on maximum gradient and threshold,” 5th International Congress on Image and Signal Processing (CISP) (IEEE, 2012), pp. 1191–1194.
9. M. Subbarao, T. Choi, and A. Nikzad, “Focusing techniques,” Opt. Eng. 32(11), 2824–2836 (1993).
10. M. Subbarao and J. K. Tyan, “The optimal focus measure for passive autofocusing and depth-from-focus,” Proc.
SPIE 2598, 89–99 (1995).
11. M. Subbarao and J. K. Tyan, “Selecting the optimal focus measure for autofocusing and depth-from-focus,”
IEEE Trans. Pattern Anal. Mach. Intell. 20(8), 864–870 (1998).
12. S. K. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 16(8), 824–831
(1994).
13. J. Kautsky, J. Flusser, B. Zitova, and S. Simberova, “A new wavelet-based measure of image focus,” Pattern
Recognit. Lett. 23(14), 1785–1794 (2002).
14. C. H. Lee and T. P. Huang, “Comparison of Two Auto Focus Measurements DCT-STD and DWT-STD,” in Proceedings of the International MultiConference of Engineers and Computer Scientists (Academic, 2012), pp. 746–750.
15. G. Yang and B. J. Nelson, “Wavelet-based autofocusing and unsupervised segmentation of microscopic images,”
in Proceedings of IEEE Conference on Intelligence Robots and Systems (IEEE, 2003), pp. 2143–2148.
16. K. Ooi, K. Izumi, M. Nozaki, and I. Takeda, “An advanced autofocus system for video camera using quasi
condition reasoning,” IEEE Trans. Consum. Electron. 36(3), 526–530 (1990).
17. K. S. Choi, J. S. Lee, and S. J. Ko, “New autofocus technique using the frequency selective weighted median
filter for video cameras,” IEEE Trans. Consum. Electron. 45(3), 820–827 (1999).
18. J. He, R. Zhou, and Z. Hong, “Modified fast climbing search autofocus algorithm with adaptive step size
searching technique for digital camera,” IEEE Trans. Consum. Electron. 49(2), 257–262 (2003).
19. M. Subbarao, “Parallel Depth Recovery by Changing Camera Aperture,” in Proceedings of International
Conference on Computer Vision, (Academic, 1988), pp. 149–155.
20. M. Subbarao and G. Surya, “Depth from Defocus: a Spatial Domain Approach,” Int. J. Comput. Vis. 13(3), 271–
294 (1994).
21. P. Favaro and S. Soatto, “A Geometric Approach to Shape from Defocus,” IEEE Trans. Pattern Anal. Mach.
Intell. 27(3), 406–417 (2005).
22. A. Levin, R. Fergus, and F. Durand, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26(3), 70 (2007).
23. C. Zhou, S. Lin, and S. Nayar, “Coded aperture pairs for depth from defocus and defocus deblurring,” Int. J.
Comput. Vis. 93(1), 53–72 (2011).
24. L. Hong, J. Yu, and C. Hong, “Depth estimation from defocus images based on oriented heat-flows,” in
Proceedings of IEEE 2nd International Conference on Machine Vision (IEEE, 2009), pp. 212–215.
25. D. T. Huang, Z. Y. Wu, X. C. Liu, and H. S. Zhang, “A depth from defocus fast auto-focusing technology for any target,” J. Optoelectron. Laser 24(4), 799–804 (2013) (in Chinese).

1. Introduction
Auto-focus technology plays an important role in optical vision imaging systems. It has been
widely used in a variety of optical imaging systems, such as consumer cameras, industrial
inspection tools, microscopes, and scanners [1, 2]. Many kinds of auto-focus methods have been studied since 1990. Generally, these methods can be categorized into active auto-focusing methods and passive auto-focusing methods [3].
Active auto-focusing methods use some auxiliary optical devices to measure the position
of a reference point on the sample. For example, Liu et al. introduced a laser beam, a splitter,
and an extra CCD to the normal optical system and the sample position could be measured by
detecting the centroid of the reflected light spot on the sample [4, 5]. Additionally, Hsu et al.
embedded an astigmatic lens into the optical path to produce a focus error signal, which could
be converted to the sample’s defocus distance [6, 7]. However, timing and geometrical fluctuations of the light source, together with mechanical errors, reduce the positioning accuracy of such auto-focus systems. Furthermore, these methods are expensive and complicated. Alternatively, passive methods are based on a number of images taken at varying focus lens positions. Their advantage over active methods is that they are simpler and less expensive.
Therefore, passive auto-focusing methods are widely used in vision applications.
The depth from focus (DFF) method and the depth from defocus (DFD) method are two
typical passive auto-focusing methods that are currently studied. The DFF-type methods are
based on the fact that the image formed by an optical system is focused at a particular
distance whereas objects at other distances are blurred or defocused [3]. DFF includes two
stages: the first stage is to determine the focus function that describes the degree of focus at
different positions and the second stage is to search and find the best focus position according
to the focus function. Researchers have proposed various focus functions, including the Sobel
gradient method [8], band pass filter-based technique [9], energy of Laplacian [10, 11], and
sum-modified Laplacian [12]. Wavelet transforms based on the discrete cosine transform
have also been developed in recent years [13–15]. Additionally, searching algorithms are
studied, which include the mountain climbing servo [16], the fast hill-climbing search [17],
and the modified fast climbing search with adaptive step size [18]. Very high accuracy can be
achieved by DFF methods. However, since all these methods need to acquire a large number
of images at different distances and calculate their focus values, they require long scanning
times and high power consumption, which limit their application.
The DFD method is popular in depth estimation and scene reconstruction, since it can measure the position of a sample from just a few images. The DFD method directly estimates the focus location from a measurement of the level of defocus. The efficiency of the method is therefore high, which makes it suitable for real-time auto-focusing. However, the accuracy of the DFD method is relatively low, because the method relies on an approximate model of the optical imaging system, which introduces theoretical errors.
In this paper, we propose a new auto-focusing method whose low computational cost and high accuracy make it suitable for real applications. First, the traditional DFD method is improved with a rapid calculation scheme. Then, the improved DFD method is combined with the DFF method to form a new fast and accurate auto-focus method. The combined method is verified by experiments, and the results show that the proposed auto-focusing method decreases the computational cost and achieves high accuracy in real applications.
2. The improved DFD method
2.1 Conventional DFD methods
The conventional DFD methods are mainly based on the power spectrum in the frequency
domain or the point spread function (PSF) of the image in the spatial domain. Subbarao and
Surya proposed a general method in which an S-transform was applied to conduct
deconvolution in the frequency domain on two defocused images taken with different camera
settings [19, 20]. Favaro and Soatto applied a functional singular value decomposition to
compute the PSF [21]. Zhou et al. used a coded aperture [22, 23] that customized the PSF by
modifying the aperture shape with a complex statistical model for depth estimation. Hong et
al. analyzed the power spectrum from a novel aspect, namely, oriented heat-flow diffusion
[24], based on the fact that its strength and direction correspond to the amount of blur.
Generally, complex modeling or heavy computation loads are required by these conventional DFD methods, which may involve computation times comparable with, or even longer than, those of DFF methods.
2.2 The improved DFD method
2.2.1 Defocus blurring analysis
Figure 1 shows the imaging situation of three adjacent points A, B, and C on three different planes FP, IP1, and IP2. FP is the focal plane, and IP1 and IP2 are two defocused planes. According to geometrical optics, A, B, and C spread along the propagation direction of the light path, so each becomes a separate blurred spot with a certain shape and size. When IP1 and IP2 are far from FP, the three blurred spots overlap within a certain area on planes IP1 and IP2; this produces a small region containing information from multiple imaging points, which is why the images on IP1 and IP2 are fuzzy.

Fig. 1. Spreading process of the image points.

Suppose that the spread angles of A, B, and C on FP are the same and that there are no energy losses in the spreading process, and set the radii of the blurred spots on IP1 and IP2 as R1 and R2, respectively. Then all the points within the area of radius R1 (R2) around the coordinate (x, y) on FP spread towards the same coordinate (x, y) and overlap at (x, y) on IP1 (IP2). The pixel value at any point on an imaging plane corresponds to the light intensity at that point, and the light intensity is generally assumed to be evenly distributed within the blurred spots. Setting the area of the pixel (x, y) as S_0, a fraction S_0/(\pi R_1^2) of the light intensity at point A on FP spreads to the position of pixel (x, y) on IP1. The contributions of points B and C, and of all the other points within the area of radius R1 around the coordinate (x, y) on FP, can be deduced in the same way. Thus, the pixel value at (x, y) on IP1 (IP2) is S_0/(\pi R_1^2) (S_0/(\pi R_2^2)) times the sum of all the pixel values in the area of radius R1 (R2) around (x, y) on FP [25]. This establishes the qualitative relationship between the radius of the blurred spots and the corresponding pixel values, which is the basis of the calculation method presented in the next section.
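This spreading model can be illustrated numerically. The following Python sketch is our illustration, not part of the original paper; the test image and blur radii are arbitrary assumptions. It blurs a focal-plane image with energy-conserving uniform-disk point spread functions of two radii, mimicking the defocused images on IP1 and IP2.

```python
# Minimal numerical sketch of the uniform-disk blur model of Sec. 2.2.1.
# All parameters (radii, test image) are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def disk_psf(radius_px):
    """Energy-conserving uniform disk: each focal-plane point spreads its
    intensity evenly over a circle of the given radius (no energy loss)."""
    r = int(np.ceil(radius_px))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = ((x * x + y * y) <= radius_px ** 2).astype(float)
    return k / k.sum()

rng = np.random.default_rng(0)
focal_plane = rng.random((256, 256))                        # stand-in for the focused image on FP
ip1 = fftconvolve(focal_plane, disk_psf(3.0), mode="same")  # blur radius R1
ip2 = fftconvolve(focal_plane, disk_psf(6.0), mode="same")  # blur radius R2 > R1
# Pixel (x, y) on IP1/IP2 now mixes all FP points within radius R1/R2, as in the text.
```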
2.2.2 The improved DFD calculation method
The two defocused images can be obtained in two ways. One is by changing the image
distance; in this way, the defocus distance (the distance between the current imaging plane
and the focal plane) in the image space can be calculated using the two defocused images.
The other is by changing the object distance; in this case, the defocus distance (distance
between the current object plane and the best imaging object plane) in the object space can
also be calculated. The improved DFD method is analyzed separately for these two cases in the following; the choice between them is determined by the hardware structure of the auto-focus system.
a) Improved DFD method by changing image distance
Figure 2 shows a scheme of the improved DFD method by changing image distance. P is an
object point on the object plane FP, which is blurred within a radius of R1 (R2) on the imaging
plane IP1 (IP2).

Fig. 2. The optical imaging model.

The radius of the blurred spots can be calculated with the similar-triangle principle:

R_1 = \frac{D}{2}\left(\frac{1}{f} - \frac{1}{u}\right) d, \qquad (1)

where D is the lens diameter, and u, f, and v denote the object distance, the focal length, and the image distance of the optical imaging system, respectively. The parameter d is the distance between the imaging plane and the focal plane. In the automatic focusing imaging system, the imaging plane corresponds to IP1 or IP2 and the corresponding defocus distance is d or d + Δd (Fig. 2). Therefore, the aim is the calculation of d.
Suppose that u, f, and D are constants; the relationship between R1, R2, and d can be expressed as:

R_1 = k d, \qquad (2)

R_2 = k (d + \Delta d), \qquad (3)

where k = \frac{D}{2}\left(\frac{1}{f} - \frac{1}{u}\right) and Δd is the distance between IP1 and IP2. Figure 3 shows the blurred spots on IP1 and IP2.

Fig. 3. Blurred image of P on different planes.

In Fig. 3(b), set the pixel value at (x, y) on IP1 as V1 and the area of the blurred spot as S1; in Fig. 3(c), the corresponding parameters on IP2 are set as V2 and S2; the area of the pixel (x, y) is assumed to be S0. According to the previous description, the pixel (x, y) on IP1 (IP2) includes all the information on the points contained within the area of radius R1 (R2) around the coordinates (x, y) on the focal plane FP; furthermore, V1 is S0/S1 times the sum of the pixel values in the area S1 on FP, and V2 is S0/S2 times the sum of the pixel values in the area S2 on FP. Since S2 includes S1,

V_1 S_1 < V_2 S_2. \qquad (4)

Substituting S_1 = \pi R_1^2 and S_2 = \pi R_2^2 into Eq. (4) gives

\frac{R_1}{R_2} < \sqrt{\frac{V_2}{V_1}}. \qquad (5)

When V1 > V2, we obtain from Eqs. (2)–(5)

d < \frac{\sqrt{V_2/V_1}}{1 - \sqrt{V_2/V_1}}\,\Delta d. \qquad (6)

Set

d_{\max} = \frac{\sqrt{V_2/V_1}}{1 - \sqrt{V_2/V_1}}\,\Delta d. \qquad (7)
Although a specific calculation formula for d has not been given, its maximum dmax can be estimated from Eq. (7). Moreover, the calculation is simple and quick, which opens the possibility of a fast auto-focus algorithm.
In the theoretical deduction, the (x, y) location does not change across FP, IP1, and IP2. In practice, however, it shifts according to the slope of the chief ray, as shown in Fig. 2, and the error grows as the slope increases. In this paper, V1 and V2 are taken in the central area of the images, where the slope of the chief ray is small, so the proposed method remains efficient with only a small error.
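As a concrete sketch of this estimate, the following Python fragment implements Eqs. (5)–(7) as reconstructed above. The random sampling of central-zone pixel pairs and their number are our assumptions; the paper only specifies that V1 and V2 are taken in the central area of the images and that multiple pairs are averaged (see Section 4.2.1).

```python
# Sketch of the rapid defocus bound of Eq. (7); not the authors' exact code.
import numpy as np

def dmax_image_space(img1, img2, delta_d, n_samples=100, seed=0):
    """Estimate the upper bound d_max of the defocus distance from two images
    whose imaging planes are separated by delta_d (Eqs. (5)-(7)). img1 should
    be the less defocused image of the pair (V1 > V2 at the sampled pixels)."""
    h, w = img1.shape
    rng = np.random.default_rng(seed)
    ys = rng.integers(h // 3, 2 * h // 3, n_samples)   # central zone only,
    xs = rng.integers(w // 3, 2 * w // 3, n_samples)   # keeping chief-ray slope small
    v1 = img1[ys, xs].astype(float)
    v2 = img2[ys, xs].astype(float)
    keep = v1 > v2                        # Eq. (6) holds when V1 > V2
    q = np.sqrt(v2[keep] / v1[keep])      # bound on R1/R2 from Eq. (5)
    return float(np.mean(q / (1.0 - q)) * delta_d)
```

Averaging the per-pixel bounds follows the error-reduction strategy the authors describe in Section 4.2.1.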
b) Improved DFD method by changing object distance
For the DFD method applied by changing the object distance u, the calculation of dmax in Eq. (7) must be converted into a calculation of the defocus distance of the object plane in the object space. The conversion scheme and process are shown in Fig. 4(a) and Fig. 4(b), respectively.

Fig. 4. (a) The relevant parameters in the modeling change. ∆u denotes the variation of the object distance u, corresponding to two images taken at two positions in the focusing process; ∆v is the variation of the image distance v caused by ∆u; d_max is the defocus distance in image space; and u_ref denotes the defocus distance in the object space. (b) The process of the modeling change.

When the object distance changes by a small amount ∆u, the corresponding variation ∆v of the image distance can be calculated from the Gaussian imaging formula,

\frac{1}{f} = \frac{1}{u + \Delta u} + \frac{1}{v + \Delta v}. \qquad (8)
In this case, ∆v corresponds to ∆d in Eq. (3). Then

d_{\max} = \frac{\sqrt{V_2/V_1}}{1 - \sqrt{V_2/V_1}}\,\Delta v. \qquad (9)

According to geometrical optics, ∆u and ∆v are related by

\frac{\Delta v}{\Delta u} = \beta_1 \beta_2 = \frac{f}{u_1 - f}\cdot\frac{f}{u_2 - f}. \qquad (10)

Similarly,

\frac{d_{\max}}{u_{ref}} = \beta_1 \beta_0 = \frac{f}{u_1 - f}\cdot\frac{f}{u_0 - f}, \qquad (11)

where u1 and u2 are the two object distances corresponding to the two different positions in the object space, and u0 is the best imaging distance in the object space. β0, β1, and β2 denote the paraxial magnifications at the object distances u0, u1, and u2, respectively.
From Eqs. (9)–(11), u_ref can be calculated as

u_{ref}(K) = K C \Delta u, \qquad (12)

where

C = \frac{\sqrt{V_2/V_1}}{1 - \sqrt{V_2/V_1}}, \qquad K = \frac{u_0 - f}{u_2 - f}. \qquad (13)
In Eq. (13), the factor C is calculated in the same way as in the previous case. In imaging systems that need to be focused, u2 is normally close to u0, so K is approximately equal to 1. Even when u2 is relatively far from u0, K can still be set to 1, and u_ref(K = 1) remains a useful estimate, because only 1/n of it is used in the focusing scheme proposed in Section 3. So,

u_{ref} = \frac{\sqrt{V_2/V_1}}{1 - \sqrt{V_2/V_1}}\,\Delta u. \qquad (14)

The real defocus distance satisfies u_{real} < u_{ref}; set

u_{real(\max)} = u_{ref} = \frac{\sqrt{V_2/V_1}}{1 - \sqrt{V_2/V_1}}\,\Delta u. \qquad (15)
Similarly, the defocus distance in the object space cannot be calculated precisely, but its maximum can be estimated from Eq. (15). Again, V1 and V2 are taken in the central area of the images to reduce the error.
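The object-space version has the same form as the image-space one. The minimal sketch below, reusing the hypothetical `dmax_image_space` helper from the earlier sketch, computes ∆v from the Gaussian formula of Eq. (8) and the estimate u_ref of Eq. (14) with K set to 1.

```python
def delta_v_from_delta_u(f, u, delta_u):
    """Image-distance change caused by an object-distance change, from the
    Gaussian imaging formula of Eq. (8), using v = f*u/(u - f)."""
    v_before = f * u / (u - f)
    v_after = f * (u + delta_u) / (u + delta_u - f)
    return v_after - v_before

def uref_object_space(img1, img2, delta_u, **kwargs):
    """Upper-bound estimate of the object-space defocus distance, Eqs. (14)-(15).
    With K = 1 (Sec. 2.2.2 b), the C factor is identical to the image-space
    case, scaled by the object-distance change delta_u instead of delta_d."""
    return dmax_image_space(img1, img2, delta_u, **kwargs)
```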
3. The combination of DFF and the improved DFD
A focusing range can be acquired by the improved DFD method in a rapid calculation.
Furthermore, the goal of accurate automatic focusing can be achieved by the combination of
the improved DFD with the DFF method. The combination method can be divided into two
stages: the rough focusing stage and the fine focusing stage, as shown in Fig. 5. In the rough
focusing stage, at a certain position, two images of an object are taken with a certain interval
distance to estimate the current defocus distance by the improved DFD method. The next position is then taken with a step of 1/n times the estimated defocus distance (since the estimated defocus distance does not equal the real defocus distance, 1/n of it is used for the sake of the final accuracy, with n usually set to 5–10). At the new position, two images are sampled and the current defocus distance is estimated again. The process is repeated until the estimated defocus distance is smaller than a default threshold, which depends on the optical imaging system. Next, the fine focusing stage is conducted: the DFF method is used to search for the peak position of the focus function with a fixed step length that is less than the depth of focus (DOF). The peak position of the focus function is the focusing position.
Conventional automatic focusing methods search with the same step throughout the whole process. Additionally, the step value, usually a fraction of the DOF, is very small to ensure accuracy. Thus, the searching efficiency is low, and a local peak of the focus function may be acquired owing to the small step length, which further lowers the efficiency and may even cause focusing failure. In contrast, the proposed searching strategy divides the whole searching process into two stages: large steps are used in the rough searching stage and small steps in the fine searching stage, which removes the influence of local peaks while achieving high searching efficiency. A sketch of the two-stage control loop is given below.
Fig. 5. The combined focusing method.
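The control flow of Fig. 5 can be summarized in code. The sketch below is our reading of the scheme, not the authors' implementation: `capture_image()` and `move_stage()` are placeholders for the hardware interface, the sign convention of the stage motion is assumed, `uref_object_space` is the hypothetical helper from Section 2.2.2, and the grey-level variance of Section 4.2.3 serves as the focus function in the fine stage.

```python
import numpy as np

def grey_level_variance(img):
    """Focus function for the fine (DFF) stage: variance of the grey levels,
    which peaks at the best focus position [11]."""
    return float(np.var(img.astype(float)))

def auto_focus(capture_image, move_stage, delta_u, p_th, n=5,
               fine_step=25.0, n_fine=30):
    """Two-stage focusing of Sec. 3 (rough DFD stage, then fine DFF stage).
    delta_u: DFD sampling interval; p_th: rough-stage threshold;
    fine_step: fixed fine-search step, chosen smaller than the DOF."""
    while True:                                  # rough focusing stage
        img1 = capture_image()
        move_stage(delta_u)                      # take the second defocused image
        img2 = capture_image()
        u_est = uref_object_space(img2, img1, delta_u)
        if u_est < p_th:
            break                                # close enough: switch to DFF
        move_stage(-u_est / n)                   # conservative 1/n step toward focus
    best_fv, best_pos, pos = -1.0, 0.0, 0.0      # fine focusing stage
    for _ in range(n_fine):
        fv = grey_level_variance(capture_image())
        if fv > best_fv:
            best_fv, best_pos = fv, pos
        move_stage(-fine_step)
        pos -= fine_step
    move_stage(best_pos - pos)                   # return to the focus-function peak
```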

4. Experiments and results
4.1 Experimental implementation
An experiment was conducted to verify the auto-focusing algorithm with a microscopic
system controlled by a PC. The structure diagram is shown in Fig. 6. It is composed of an
optical microscope (Olympus IX71, 10X zoom, working distance 18.5 mm, DOF 50 μm with
maximum magnification), a CCD camera, an image acquisition card, an invisible LED light
source, an electric translation stage/motion controller (displacement accuracy of 0.36 μm
corresponding to one pulse), and a computer (2.50 GHz × 2 CPU, 2.0 GB RAM).

Fig. 6. The schematic of the experimental setup.

In this experiment, auto-focusing was achieved by changing the object distance. The
entire auto-focus process was as follows: the sample on the translation stage was imaged by
the microscope and the images were analyzed by the computer, which then gave the
corresponding command to the motion controller; then, the motion controller adjusted the
translation stage’s vertical motion to change the object distance until it was at the best
imaging position.
Glass slides were used as the focusing targets. In order to test the accuracy of the proposed method for estimating the defocus distance, the translation stage was driven to 18 known positions (the corresponding real defocus distances ureal were 720 × 1, 720 × 2, …, 720 × 18 μm, at 0.36 μm/pulse), and at every position, two images were sampled with each of three interval distances ∆u (36, 72, and 108 μm) to estimate the defocus distance.
4.2 Experimental data analysis
We tested 10 glass slides and acquired 10 groups of data. The groups behaved similarly, so one representative group is presented below.
4.2.1 Convergence analysis
Figure 7 shows the estimated defocus distance at the 18 positions, calculated by the improved
DFD method with a sampling interval distance ∆u of 36, 72, and 108 μm.

Fig. 7. Curve of the estimated defocus distance. The horizontal axis corresponds to the 18
sampling positions from small to large distances from the focal position, and the value is the
real corresponding defocused distance. The vertical axis denotes the corresponding estimated
defocus distance.

It can be seen that the estimated defocus distances increase with the sampling position, and the estimated values are approximately linearly proportional to the real defocus distances, which verifies that the proposed method is qualitatively correct. Furthermore, the estimated defocus distances increase with ∆u, also in an approximately linear relationship.
It should be noted that for the ∆u = 36 μm case, the estimated defocus distance is less than the real defocus distance, which seems to contradict the theoretical deduction above. The probable reasons are as follows. In order to reduce random error, multiple pairs of V1 and V2 in the central zone of the images are taken to calculate defocus distances, and the average of these defocus distances is used; this differs from the theoretical deduction and may cause the error. Furthermore, the optical imaging system is assumed to be incoherent, under which assumption light intensities can be added directly. In reality, the system is partially coherent, and this may also lead to some error.
In order to further verify the proposed method, experiments were conducted on another three objects, all of which are more complex focusing targets containing complex depth variations: a coin, a printed circuit board (PCB) with soldered chip pins, and a piece of iron with an irregular fracture surface, referred to as Obj 1, Obj 2, and Obj 3, respectively. The results are listed in Table 1.
Table 1. Estimated defocus distance for the three objects.

u_real                        u_ref (the estimated defocus distance) (μm)
(μm)          ∆u = 36 μm                ∆u = 72 μm                ∆u = 108 μm
          Obj 1  Obj 2  Obj 3     Obj 1  Obj 2  Obj 3     Obj 1   Obj 2   Obj 3
  720       928   1197    835      1442   1476   1283      1810    1830    2332
 1440      1494   2071   1673      2334   2774   2332      3725    4358    3721
 2160      3042   3198   2859      3798   4707   4002      5657    5353    6671
 2880      3349   2915   3381      5609   5896   5669      6678    8487    8186
 3600      3250   3209   4055      7166   6479   6798     10248    9701    8713
 4320      3892   3276   4680      7749   7313   7733     10367    8722    9327
 5040      3180   3673   3748      7870   8128   8294     10312    9079    9877
 5760      4411   5912   4748      7973   7957   8937     13417   12145   13641
 6480      4421   5043   3901      7765   9708   7858     14745   13839   13577
 7200      4429   4606   4273      9955  10319   8750     13805   14359   13320
 7920      5471   4816   5501     10637  10327  12076     14741   15745   13867
 8640      4885   4074   4836      9826   9320   9719     14686   16710   15287
 9360      5386   4317   5549     10461  10282   9755     16493   16211   16476
10080      4997   5417   5152     11795  10170  12085     17969   15493   16772
10800      5093   6122   5217     12324  12204  11487     18407   18211   17036
11520      6653   6335   6480     12193  11198  10688     20229   19153   19653
12240      7206   7277   7046     11867  12937  13202     19808   21594   22035
12960      6467   7274   5667     14125  13160  12543     21390  219262   20897
It can be seen that the results for each object are similar to those plotted above, which shows that the proposed method has good applicability to different focusing targets.
4.2.2 Error analysis
In order to reduce the computation load and increase the efficiency, the improved DFD
method proposed in this paper considers some approximations regarding the optical imaging
system model, which introduce some errors. For the data in Fig. 7, the relative errors between
the estimated and the true values of the defocus distance are shown in Fig. 8.

Fig. 8. Curve of the relative error.

As shown in Fig. 8, the relative errors in the three conditions (∆u = 36, 72, 108 μm) vary widely and change with the sampling position. It is evident that the relative error increases with ∆u, so a reasonable ∆u can be chosen to control the magnitude of the relative error. Additionally, according to ∆u, the estimated defocus distance can be expanded or contracted by a certain proportion to compensate for the inadequacy of the method; a calibration sketch is given below. From Fig. 8, the relative error of the improved DFD appears significant (the maximum is 175.4%); however, the estimate is used only in the rough focusing stage, after which fine focusing is implemented to ensure accuracy. The benefit of applying rough focusing is that a substantial gain in efficiency is achieved.
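A minimal sketch of such a proportional compensation, assuming calibration pairs like those of Fig. 7 are available (the array names are hypothetical; the paper does not give a specific fitting procedure):

```python
import numpy as np

def fit_scale(u_est, u_real):
    """Least-squares proportional factor s minimizing ||s*u_est - u_real||^2.
    Later DFD estimates obtained with the same delta_u can be multiplied by s
    to compensate the systematic over/under-estimation seen in Fig. 8."""
    u_est = np.asarray(u_est, dtype=float)
    u_real = np.asarray(u_real, dtype=float)
    return float(u_est @ u_real / (u_est @ u_est))

# Example: s = fit_scale(estimates_at_known_positions, known_positions),
# where the known positions are the 18 calibrated stage offsets of Sec. 4.1.
```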
4.2.3 Efficiency analysis of the combination of DFF and the improved DFD
In the searching stage of auto-focusing, the searching step is determined by the defocus distance calculated by the improved DFD method. Therefore, far fewer searching steps are needed than in the DFF method, which is why the proposed combination method has a higher efficiency. Next, some specific cases are analyzed.
In the range from ureal = 12960 to ureal = 720 μm, when ∆u = 36 μm, n = 5, and Pth = 720
μm (about 10 DOF, where Pth is the default threshold of the defocus distance stated before,
often set as several times the value of DOF), 10 images were taken, 5 calculations were
required, and the translation stage needed to be driven 5 times (the 5 step values were 4287,
3162, 2510, 1430, and 727 μm). Similarly, when n = 10 and Pth = 720 μm, 20 images were
taken and 10 calculations were required, corresponding to 10 moves of the translation stage
(the step values were 2143, 1780, 1661, 1349, 1313, 1036, 959, 715, 573, and 364 μm).
For conventional DFF methods, under the same accuracy conditions, at least 17 images need to be taken and 17 calculations conducted (in this case, the searching step is set to 720 μm, and the translation stage is driven 17 times). Furthermore, the single-step computation load (evaluating the focus function, the grey-level variance [11]) is higher than that of the improved DFD (computing the defocus distance). Table 2 shows the comparison between the different methods.
Table 2. Comparison between the proposed combination method and the DFF method.

Focusing process           Conventional DFF method    Proposed combination method
                                                         n = 5        n = 10
Images taken                         17                   10            20
Computation times                    17                    5            10
Stage driving times                  17                    5            10
Total focusing time (s)           4.098                0.628         1.456

It can be seen that, at a rough estimate, the efficiency of the proposed method is at least 3–5 times that of the traditional DFF method. If more suitable values of ∆u, n, and Pth are chosen, the efficiency improves further. In practice, the searching step in conventional DFF methods is very small (much less than the 720 μm used in this experiment), so the efficiency of the proposed combination method is actually much more than 3–5 times that of the conventional DFF method.
5. Conclusions
In this paper, a combined DFF and improved-DFD auto-focusing algorithm is proposed and experimentally demonstrated to be efficient and significantly more effective than traditional methods. The proposed DFD auto-focus method can be applied to auto-focus systems that change either the image distance or the object distance, which provides much more flexibility in designing the structure of the auto-focusing system. However, in real applications, accurate estimation of the defocus distance is not easy, and an inaccurate estimation directly affects the searching efficiency or may even lead to focusing failure. Thus, the combined method remains worthy of continued study.
Acknowledgments
The authors acknowledge financial support from the National Science and Technology Major Project of the Ministry of Science and Technology of China (Grant No. 2014ZX07104), the National Key Foundation for Exploring Scientific Instruments of China (2013YQ03065104), and the National Science and Technology Support Program of China (2012BAI23B00).
