FACULTY OF ENGINEERING AND ARCHITECTURE
DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING
EECE695C – Adaptive Filtering and Neural Networks
Fingerprint Identification – Project 2
I. Introduction
Fingerprints are imprints formed by the friction ridges of the skin on the fingers and
thumbs. They have long been used for identification because of their immutability and individuality.
Immutability refers to the permanent and unchanging character of the pattern on each
finger. Individuality refers to the uniqueness of ridge details across individuals; the
probability that two fingerprints are alike is about 1 in 1.9x10^15.
However, manual fingerprint verification is so tedious, time-consuming and expensive that
it is incapable of meeting today’s increasing performance requirements. Automatic
fingerprint identification systems are widely adopted in many applications such as building or
area security and ATMs [1-2].
Two approaches will be described in this project for fingerprint recognition:
• Approach 1: Based on minutiae located in a fingerprint
• Approach 2: Based on frequency content and ridge orientation of a fingerprint
II. First Approach
Most automatic systems for fingerprint comparison are based on minutiae matching.
Minutiae are local discontinuities in the fingerprint pattern. A total of 150 different
minutiae types have been identified. In practice, only the ridge-ending and ridge-bifurcation
minutiae types are used in fingerprint recognition. Examples of minutiae are shown in
figure 1.
(a) (b)
Figure 1. (a) Different minutiae types, (b) Ridge ending & Bifurcation
Many known algorithms have been developed for minutiae extraction based on orientation
and gradients of the orientation fields of the ridges [3]. In this project we will adopt the
method used by Leung where minutiae are extracted using feedforward artificial neural
networks [1].
The building blocks of a fingerprint recognition system are:
Figure 2. Fingerprint recognition system
a) Image Acquisition
A number of methods are used to acquire fingerprints. Among them, the inked
impression method remains the most popular. Inkless fingerprint scanners are also
available, eliminating the intermediate digitization process.
In our project we will use the database freely available from the University of Bologna
(http://bias.csr.unibo.it/fvc2000/) as well as build an AUB database; each one must
gather 36 inked fingerprint images from 3 persons (12 images per finger).
Fingerprint quality is very important since it directly affects the minutiae extraction
algorithm. Two types of degradation usually affect fingerprint images: 1) the ridge lines are
not strictly continuous, since they sometimes include small breaks (gaps); 2) parallel ridge
lines are not always well separated due to the presence of cluttering noise. The resolution
of the scanned fingerprints must be 500 dpi, and the size 300x300 pixels.
b) Edge Detection
An edge is the boundary between two regions with relatively distinct gray-level
properties. The idea underlying most edge-detection techniques is the computation of a
local derivative operator such as the ‘Roberts’, ‘Prewitt’ or ‘Sobel’ operators.
In practice, the set of pixels obtained from the edge detection algorithm seldom
characterizes a boundary completely because of noise, breaks in the boundary and other
effects that introduce spurious intensity discontinuities. Thus, edge detection algorithms
typically are followed by linking and other boundary detection procedures designed to
assemble edge pixels into meaningful boundaries.
For a detailed explanation refer to “Digital Image Processing” by Gonzalez, chapters 3 and 4.
It is also useful to check the Image Toolbox Demos available in MATLAB.
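As a concrete illustration, the Sobel gradient computation can be sketched in a few lines of NumPy; the 3x3 kernels are the standard Sobel masks, while the image and the threshold value are illustrative choices, not part of the project specification:

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Gradient magnitude via the Sobel operator, thresholded to a binary edge map."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient mask
    ky = kx.T                                                          # vertical gradient mask
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Correlate the interior of the image with each 3x3 kernel.
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)                 # gradient magnitude
    return (mag > thresh).astype(np.uint8)

# A vertical step edge: the detector should fire only along the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

As the surrounding text notes, such a raw edge map is seldom a complete boundary; linking and cleanup steps normally follow.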
c) Thinning
An important approach to representing the structural shape of a plane region is to
reduce it to a graph. This reduction may be accomplished by obtaining the skeleton of the
region via a thinning (also called skeletonizing) algorithm.
The thinning algorithm, while deleting unwanted edge points, should not:
• Remove end points.
• Break connectedness.
• Cause excessive erosion of the region.
For a detailed explanation refer to “Digital Image Processing” by Gonzalez, chapter 9. It is
also useful to check the following link:
http://www.fmrib.ox.ac.uk/~steve/susan/thinning/node2.html
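One classical method satisfying these constraints is Zhang-Suen thinning; the sketch below is an illustrative implementation (not one mandated by the project), with the test image chosen arbitrarily:

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning of a binary image (1 = ridge, 0 = background)."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for r in range(1, img.shape[0] - 1):
                for c in range(1, img.shape[1] - 1):
                    if img[r, c] == 0:
                        continue
                    # 8-neighbours in clockwise order starting from the north pixel.
                    p = [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                         img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
                    b = sum(p)                                                   # neighbour count
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1 for i in range(8))   # 0->1 transitions
                    if not (2 <= b <= 6 and a == 1):
                        continue  # keeps end points and connectedness
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        to_delete.append((r, c))
                    elif step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        to_delete.append((r, c))
            for r, c in to_delete:
                img[r, c] = 0
                changed = True
    return img

# A 4-pixel-thick bar should reduce to a thin line lying inside the original region.
img0 = np.zeros((12, 12), dtype=np.uint8)
img0[4:8, 2:10] = 1
skel = zhang_suen_thin(img0)
```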
d) Feature Extraction
Extraction of appropriate features is one of the most important tasks for a
recognition system. The feature extraction method used in [1] will be explained below.
A multilayer perceptron (MLP) of three layers is trained to detect the minutiae in the
thinned fingerprint image of size 300x300. The first layer of the network has nine neurons
associated with the components of the input vector. The hidden layer has five neurons and
the output layer has one neuron. The network is trained to output a “1” when the input
window is centered on a minutia and a “0” when it is not. Figure 3 shows the initial
training patterns which are composed of 16 samples of bifurcations in eight different
orientations and 36 samples of non-bifurcations. The network will be trained using:
• The backpropagation algorithm with momentum and learning rate of 0.3.
• The AlAlaoui backpropagation algorithm.
State the number of epochs needed for convergence as well as the training time for the two
methods. Once the network is trained, the next step is to input the prototype fingerprint
images and extract their minutiae. The fingerprint image is scanned using a 3x3 window.
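A minimal sketch of the 9-5-1 network trained with momentum backpropagation might look like the following. The two toy training windows and the momentum value 0.9 are assumptions for illustration (the real training set is the patterns of figure 3, and the AlAlaoui variant is not shown); only the learning rate 0.3 comes from the handout:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 9-5-1 network: nine inputs (a flattened 3x3 window), five hidden, one output.
W1 = rng.normal(0, 0.5, (5, 9)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, (1, 5)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)   # momentum terms
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

lr, mom = 0.3, 0.9  # learning rate from the handout; momentum value is an assumption

# Toy stand-ins for the training windows: a cross-shaped "bifurcation" (label 1)
# and a straight ridge segment (label 0).
X = np.array([[0, 1, 0, 1, 1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1, 0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0])

losses = []
for epoch in range(2000):
    h = sigmoid(X @ W1.T + b1)           # hidden activations, shape (N, 5)
    out = sigmoid(h @ W2.T + b2)[:, 0]   # network output, shape (N,)
    err = out - y
    losses.append(float((err ** 2).mean()))
    # Backpropagate the squared-error gradient through the sigmoids.
    d_out = (err * out * (1 - out))[:, None]   # (N, 1)
    d_h = (d_out @ W2) * h * (1 - h)           # (N, 5)
    gW2 = d_out.T @ h / len(X); gb2 = d_out.mean(0)
    gW1 = d_h.T @ X / len(X);   gb1 = d_h.mean(0)
    # Momentum update: velocity accumulates past gradients.
    vW2 = mom * vW2 - lr * gW2; W2 += vW2; vb2 = mom * vb2 - lr * gb2; b2 += vb2
    vW1 = mom * vW1 - lr * gW1; W1 += vW1; vb1 = mom * vb1 - lr * gb1; b1 += vb1
```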
Figure 3. Training set
(a) (b) (c) (d)
Figure 4. Core points on different fingerprint patterns. (a) tented arch, (b) right loop, (c)
left loop, (d) whorl
e) Classifier
After scanning the entire fingerprint image, the resulting output is a binary image
revealing the locations of the minutiae. In order to prevent falsely reported outputs and to select
“significant” minutiae, two more rules are added to enhance the robustness of the
algorithm:
1) At each potential minutia detected, we re-examine the point by increasing the
window size to 5x5 and rescanning the output image.
2) If two or more minutiae are too close together (a few pixels apart), we ignore all of
them.
To ensure translation, rotation and scale invariance, the following operations will be
performed:
• The Euclidean distance d(i) from each detected minutia to the center is
calculated. Referencing the distance data to the center point guarantees
positional invariance.
• The data will be sorted in ascending order from d(0) to d(N-1), where N is the number
of detected minutiae, assuring rotational invariance.
• The data is then normalized by the shortest distance d(0), i.e., d_norm(i) =
d(0)/d(i); this assures the scale-invariance property.
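The three invariance steps above can be sketched as follows (the minutiae coordinates and the center are made-up example values):

```python
import numpy as np

def minutiae_features(points, center):
    """Translation/rotation/scale-invariant feature vector from minutiae coordinates.

    `points` is an (N, 2) array of detected minutiae; `center` is the core point
    (assumed to be the image center, as in the text).
    """
    d = np.linalg.norm(np.asarray(points, float) - np.asarray(center, float), axis=1)
    d = np.sort(d)       # ascending order: d(0) .. d(N-1)  -> rotational invariance
    return d[0] / d      # d_norm(i) = d(0)/d(i), so d_norm(0) = 1 -> scale invariance

# Three hypothetical minutiae around a 300x300 image center.
feats = minutiae_features([(160, 150), (120, 200), (150, 155)], center=(150, 150))
```

Note that with this normalization the feature values are at most 1 and decrease with distance from the core.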
In the algorithm described above, the center of the fingerprint image was used to calculate
the Euclidean distance between the center and the feature point. Usually, the center or
reference point of the fingerprint image is what is called the “core” point.
A core point, located at the approximate center, is defined as the topmost point on the
innermost upwardly curving ridgeline.
The human fingerprint is comprised of various types of ridge patterns, traditionally
classified according to the decades-old Henry system: left loop, right loop, arch, whorl, and
tented arch. Loops make up nearly 2/3 of all fingerprints, whorls nearly 1/3, and
perhaps 5-10% are arches. Figure 4 shows some fingerprint patterns with the core point
marked. Many singularity-point detection algorithms have been investigated to locate core
points, among them the famous “Poincaré” index method [4-5] and the one described in
[6]. For simplicity we will assume that the core point is located at the center of the
fingerprint image.
After extracting the location of the minutiae for the prototype fingerprint images, the
calculated distances will be stored in the database along with the ID or name of the person
to whom each fingerprint belongs.
The last phase is the verification phase, in which the test fingerprint image:
1) is input to the system;
2) has its minutiae extracted;
3) undergoes minutiae matching: the distances of the extracted minutiae are compared
to those stored in the database;
4) is used to identify the person.
State the results obtained (i.e., the recognition rate).
III. Second Approach
Most methods for fingerprint identification use minutiae as the fingerprint features.
For a small-scale fingerprint recognition system, it would not be efficient to undergo all the
preprocessing steps (edge detection, smoothing, thinning, etc.); instead, Gabor filters will be
used to extract features directly from the gray-level fingerprint, as shown in figure 5. No
preprocessing stage is needed before extracting the features [7].
Figure 5. Building blocks for the 2nd approach
a) Image Acquisition
The procedure is the same as explained in the 1st approach.
b) Feature Extractor
Gabor-filter-based features have been successfully and widely applied to face
recognition, pattern recognition and fingerprint enhancement. The family of 2D Gabor
filters was originally presented by Daugman (1980) as a framework for understanding the
orientation and spatial-frequency selectivity properties of the filter; Daugman
elaborated this work mathematically in [8].
In a local neighborhood, the gray levels along the parallel ridges and valleys exhibit an
ideally sinusoidal-shaped plane wave corrupted by some noise, as shown in figure 6 [3].
Figure 6. Sinusoidal plane wave
The general formula of the Gabor filter is defined by:

h(x, y) = \exp\left[-\frac{1}{2}\left(\frac{x_{\theta_k}^2}{\sigma_x^2} + \frac{y_{\theta_k}^2}{\sigma_y^2}\right)\right] \exp(i\,2\pi f\,x_{\theta_k})    (1)

where
• x_{\theta_k} = x\cos\theta_k + y\sin\theta_k
• y_{\theta_k} = -x\sin\theta_k + y\cos\theta_k
• f is the frequency of the sinusoidal plane wave
• θ_k is the orientation of the Gabor filter
• σ_x and σ_y are the standard deviations of the Gaussian envelope along the x and y
axes
The next step is to specify the values of the filter’s parameters. The frequency f is calculated
as the inverse of the distance between two successive ridges. The number of orientations is
specified by m, where θ_k = π(k - 1)/m, k = 1, 2, …, m. The standard deviations σ_x and
σ_y are determined empirically; in [7] σ_x = σ_y = 2 was used, but it is advisable to try other
values as well.
Equation (1) can be written in complex form, giving:

h(x, y) = h_{even}(x, y) + i\,h_{odd}(x, y)

h_{even}(x, y) = \exp\left[-\frac{1}{2}\left(\frac{x_{\theta_k}^2}{\sigma_x^2} + \frac{y_{\theta_k}^2}{\sigma_y^2}\right)\right] \cos(2\pi f\,x_{\theta_k})

h_{odd}(x, y) = \exp\left[-\frac{1}{2}\left(\frac{x_{\theta_k}^2}{\sigma_x^2} + \frac{y_{\theta_k}^2}{\sigma_y^2}\right)\right] \sin(2\pi f\,x_{\theta_k})    (2)
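A sketch of the kernel of equations (1)-(2) in NumPy, using σ_x = σ_y = 2 as in [7]; the kernel size and the test frequency are illustrative choices:

```python
import numpy as np

def gabor_kernel(size, f, theta_k, sigma_x=2.0, sigma_y=2.0):
    """Complex Gabor kernel h = h_even + i*h_odd of equations (1)-(2)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta_k) + y * np.sin(theta_k)    # rotated coordinate x_theta_k
    y_t = -x * np.sin(theta_k) + y * np.cos(theta_k)   # rotated coordinate y_theta_k
    env = np.exp(-0.5 * (x_t**2 / sigma_x**2 + y_t**2 / sigma_y**2))  # Gaussian envelope
    return env * np.cos(2 * np.pi * f * x_t) + 1j * env * np.sin(2 * np.pi * f * x_t)

# Zero-orientation kernel, as in figure 7.
h = gabor_kernel(size=15, f=0.1, theta_k=0.0)
```

Note that h_even is symmetric and h_odd antisymmetric about the origin, which is why the magnitude response is often preferred as a feature.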
Figure 7 shows the filter response in spatial and frequency domain for a zero orientation.
Figure 7. Gabor filter response
Table 1, extracted from [8], describes the filter properties in the space and spectral domains.
Table 1. Filter properties in the 2D space domain and the 2D frequency domain
The fingerprint image will be scanned by an 8x8 window; for each block, the magnitude
of the Gabor filter response is extracted for different values of m (m = 4 and m = 8). The
extracted features (a new, reduced-size image) will be used as the input to the classifier.
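A simplified sketch of this block-wise feature extraction: non-overlapping 8x8 blocks and one correlation per block stand in for the scanning, and the test image and frequency are illustrative assumptions:

```python
import numpy as np

def gabor_bank(block, f, m, sigma=2.0):
    """m complex Gabor kernels with theta_k = pi*(k-1)/m on a block-sized grid."""
    half = block // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)  # 8x8 grid for block=8
    bank = []
    for k in range(1, m + 1):
        t = np.pi * (k - 1) / m
        xt = x * np.cos(t) + y * np.sin(t)
        yt = -x * np.sin(t) + y * np.cos(t)
        env = np.exp(-0.5 * (xt**2 + yt**2) / sigma**2)
        bank.append(env * np.exp(1j * 2 * np.pi * f * xt))
    return bank

def features(img, bank, block=8):
    """Magnitude of each filter's inner product with each non-overlapping block."""
    out = []
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            patch = img[r:r + block, c:c + block]
            out.extend(abs((patch * np.conj(k)).sum()) for k in bank)
    return np.array(out)

# Vertical ridges of frequency 0.25: the theta = 0 filter should respond strongly,
# the theta = pi/2 filter weakly.
bank = gabor_bank(block=8, f=0.25, m=4)
img = np.tile(np.cos(2 * np.pi * 0.25 * np.arange(16)), (16, 1))
feats = features(img, bank)
```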
c) Classifier
The classifier is based on the k-nearest neighbors (KNN) algorithm. “Training”
of the KNN consists simply of collecting k images per individual as the training set. The
remaining images constitute the testing set.
The classifier finds the k points in the training set that are the closest to x (relative to the
Euclidean distance) and assigns x the label shared by the majority of these k nearest
neighbors. Note that k is a parameter of the classifier; it is typically set to an odd value in
order to prevent ties.
Figure 8 shows how the KNN algorithm works for a two class problem. The KNN query
starts at the test point x and grows a spherical region until it encloses k training samples,
and it labels the test point by a majority vote of these samples. In this k = 5 case, the test
point x would be labeled in the category of the red points [9].
Figure 8. The KNN algorithm
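A minimal KNN classifier along these lines; the toy training points and names stand in for the Gabor feature vectors and enrolled identities:

```python
import numpy as np
from collections import Counter

def knn_classify(x, train_X, train_y, k=5):
    """Label x by majority vote among its k nearest training points (Euclidean)."""
    dist = np.linalg.norm(train_X - x, axis=1)   # distance to every training sample
    nearest = np.argsort(dist)[:k]               # indices of the k closest samples
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Two toy clusters standing in for two enrolled fingers.
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
train_y = np.array(["alice", "alice", "alice", "bob", "bob", "bob"])
label = knn_classify(np.array([0.15, 0.1]), train_X, train_y, k=3)
```

An odd k, as the text recommends, avoids ties in the two-class vote.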
The last phase is the verification phase, in which the test fingerprint image:
1) is input to the system;
2) has its magnitude features extracted;
3) is classified with the KNN algorithm;
4) is used to identify the person.
State the recognition rate obtained.
d) Suggested enhancements
In order to enhance the performance of the 2nd approach, below is a list of proposed ideas:
• Instead of using only the magnitude of the Gabor filter features, try to also use the
phase of the filter [10].
• Try to use the Mahalanobis distance, given by D = (x - m)^T C^{-1} (x - m), where m is
the mean and C is the covariance matrix. Appendix A provides an example of the
Mahalanobis distance.
• Try other classifiers, such as backpropagation and ALBP. Indicate the number of
layers used as well as the number of neurons.
• The Gabor filter assumes a sinusoidal plane wave, which is not always the case, as
depicted in figure 9. Try to use the modified Gabor filter described in [11].
Figure 9. A fingerprint with corresponding ridges and valleys.
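The Mahalanobis distance suggested above can be computed as follows; the toy feature vectors are illustrative, and D is the squared form given in the text:

```python
import numpy as np

def mahalanobis(x, m, C):
    """Mahalanobis distance D = (x - m)^T C^{-1} (x - m), as given in the text."""
    d = np.asarray(x, float) - np.asarray(m, float)
    return float(d @ np.linalg.inv(C) @ d)

# Toy feature vectors; mean and covariance are estimated from these "training" rows.
X = np.array([[1.0, 2.0], [2.0, 4.1], [3.0, 6.2], [4.0, 7.9]])
m = X.mean(axis=0)
C = np.cov(X.T)
```

Unlike the Euclidean distance, this measure whitens the features by the covariance, so strongly correlated feature dimensions are not double-counted.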
References
[1] W.F. Leung, S.H. Leung, W.H. Lau and A. Luk, “Fingerprint Recognition Using
Neural Network”, Proc. of the IEEE Workshop on Neural Networks for Signal Processing, pp.
226-235, 1991.
[2] A. Jain, L. Hong and R. Bolle, “On-line Fingerprint Verification”, IEEE Trans.
Pattern Anal. Machine Intell., 1997, 19 (4), pp. 302-314.
[3] L. Hong, Y. Wan and A.K. Jain, “Fingerprint image enhancement: Algorithm and
performance evaluation”, IEEE Trans. Pattern Anal. Machine Intell., 1998, 20 (8), pp.
777-789.
[4] Q. Zhang and K. Huang, “Fingerprint classification based on extraction and analysis of
singularities and pseudo-ridges”, 2002.
[5] http://www.owlnet.rice.edu/~elec301/Projects00/roshankg/elec301.htm
[6] A. Luk and S.H. Leung, “A Two Level Classifier For Fingerprint Recognition”, in Proc.
IEEE 1991 International Symposium on CAS, Singapore, 1991, pp. 2625-2628.
[7] C.J. Lee and S.D. Wang, “Fingerprint feature extraction using Gabor filters”, IEE
Electronics Letters, vol. 35, 1999, pp. 288-290.
[8] J.G. Daugman, “Uncertainty relation for resolution in space, spatial frequency, and
orientation optimized by two-dimensional visual cortical filters”, J. Optical Soc. Amer. A,
2 (7), 1985, pp. 1160-1169.
[9] R. Duda, P. Hart and D. Stork, “Pattern Classification”, Wiley, 2nd edition, 2001.
[10] M.T. Leung, W.E. Engeler and P. Frank, “Fingerprint Image Processing Using Neural
Network”, Proc. 10th Conf. on Computer and Communication Systems, pp.
582-586, Hong Kong, 1990.
[11] J. Yang, L. Liu et al., “A Modified Gabor Filter Design Method for Fingerprint
Image Enhancement”, to be published in Pattern Recognition Letters.
Appendix A