Video-Based Face Spoofing Detection through Visual Rhythm Analysis

Allan S. Pinto¹, Hélio Pedrini¹, William Schwartz², Anderson Rocha¹

¹ Institute of Computing, University of Campinas
² Department of Computer Science, Universidade Federal de Minas Gerais

XXV SIBGRAPI - Conference on Graphics, Patterns and Images

Summary

1 Introduction and Motivation
2 Contributions
3 Related Work
4 Proposed Method
5 Experiments
6 Results
7 Conclusion and Future Work
8 Acknowledgment

Introduction and Motivation

What is biometrics?
- Technology to recognize humans in an automatic and unique manner
- Fingerprint, hand geometry and veins, face, iris, voice, etc.

Recent advances in pattern recognition have been applied to face recognition
- Access control, surveillance, criminal identification, etc.

Introduction and Motivation

However, several attack techniques have been developed to deceive biometric systems

Attacks can occur:
- By manipulating the scores of the recognition system
- When a person tries to masquerade as someone else, falsifying the biometric data captured by the acquisition sensor (spoofing attack)

Introduction and Motivation

In practice, which is easier: manipulating the scores or presenting fake biometric data to the acquisition sensor?
- Showing a photograph of a valid user
- Showing a video of a valid user
- Showing a 3D facial model of a valid user

The face is the most exposed biometric trait
- Easily downloaded from Facebook (photos), YouTube (videos), or personal websites (photos)

Contributions

First method proposed for video-based spoofing attack detection

Creation of a dataset (available upon acceptance at http://www.ic.unicamp.br/~rocha/pub/communications.html) composed of 700 videos
- 100 videos of valid accesses
- 600 videos of fake access attempts
- All videos with 640 × 480 pixel resolution at 25 fps

Contributions

Creation of a robust and simple method that can be easily embedded in a biometric system in operation
- It can run in parallel with the recognition system, requiring less time to validate an access

Related Work

There are many works addressing photo-based spoofing attack detection
- These methods seek differences between real and fake biometric data
- They rely on image attributes such as texture, color, light reflection, and optical flow analysis, among others
- A widely explored topic

Competition on counter measures to 2-D facial spoofing attacks
- In this competition, we were the second-best research group in the world, with only one misclassification¹

¹ W. R. Schwartz, A. Rocha, and H. Pedrini, "Face Spoofing Detection through Partial Least Squares and Low-Level Descriptors," in Intl. Joint Conference on Biometrics, Oct. 2011, pp. 1–8.

Related Work

Current anti-spoofing methods can be categorized into four non-disjoint groups:
- Data-driven characterization
- User behavior modeling
- Need for user interaction
- Presence of additional devices

Non-intrusive methods without extra devices and human involvement may be preferable
- They can be easily integrated into an existing biometric system, where usually only a generic webcam is deployed

Proposed Method

Motivation
- Artifacts are added to the biometric samples when the videos are played back on display devices
  - Distortion, flickering, moiré patterns, among others
- Noise signatures are added during the recapture process

Our hypothesis is that these noise signatures and artifacts are sufficient to detect face liveness

Overview

(Overview diagram of the proposed method)

Step One

First, we compute the noise residual video (V_noise) for every video in the training set

Filtering Process

$V_{\text{noise}}^{(t)} = V^{(t)} - f(V_{\text{copy}}^{(t)}), \quad \forall\, t \in T = \{1, 2, \ldots, \tau\} \qquad (1)$

where $V^{(t)} \in \mathbb{N}^2$ is the t-th frame of V, f is a filtering operation, and τ is the number of frames.

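To make Step One concrete, a minimal sketch of the noise-residual computation follows. It assumes grayscale processing with OpenCV and uses a median filter as the filtering operation f; the function name and parameters are illustrative, not the authors' code.

```python
import cv2
import numpy as np

def noise_residual_video(frames, kernel_size=3):
    """Step One sketch: subtract a low-pass filtered copy of each frame
    from the frame itself to obtain the noise residual video V_noise."""
    residuals = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        filtered = cv2.medianBlur(gray, kernel_size)  # f(V_copy), median filter here
        residuals.append(gray.astype(np.float32) - filtered.astype(np.float32))
    return residuals
```
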
Step Two

Next, we compute the Fourier spectrum on a logarithmic scale, with origin at the center of the frame, for every frame of the noise residual video (V_noise)

2D Discrete Fourier Transform

$F(\upsilon, \nu) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} V_{\text{noise}}(x, y)\, e^{-j 2\pi [(\upsilon x / M) + (\nu y / N)]} \qquad (2)$

Fourier Spectrum

$|F(\upsilon, \nu)| = \sqrt{R(\upsilon, \nu)^2 + I(\upsilon, \nu)^2}$
$S(\upsilon, \nu) = \log(1 + |F(\upsilon, \nu)|) \qquad (3)$

where R and I are the real and imaginary parts of F.

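A short sketch of Step Two, assuming NumPy; it shifts the zero-frequency component to the center of the frame and applies the log scaling of Equation (3):

```python
import numpy as np

def log_fourier_spectrum(residual_frame):
    """Step Two sketch: centered log-magnitude Fourier spectrum of a
    noise residual frame, S = log(1 + |F|)."""
    F = np.fft.fftshift(np.fft.fft2(residual_frame))
    return np.log1p(np.abs(F))
```
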
Step Two

Example of Fourier spectrum video frames
(a) Valid video  (b) Attack video considering a Gaussian filter  (c) Attack video considering a Median filter

Step Three

We compute visual rhythms from each Fourier spectrum video
- Visual rhythm is a technique that captures temporal information and summarizes the video content in a single image
- Considering a video V in the 2D + t domain, with t frames of dimension M × N pixels, the visual rhythm is a simplification of the video V
- Lines or columns of each frame are sampled and concatenated to form a new image, called the visual rhythm

Step Three

Example of a visual rhythm
(visual rhythm image)

Step Three

Visual Rhythm
- Two types of visual rhythm are generated for each video:
  - Vertical visual rhythm, formed by the central vertical lines
  - Horizontal visual rhythm, formed by the central horizontal lines

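The sketch below illustrates one possible way to build the two visual rhythms from the spectrum frames, assuming NumPy; the number of central lines (30) follows the experimental setup described later, and all names are illustrative.

```python
import numpy as np

def visual_rhythm(spectrum_frames, axis="vertical", n_lines=30):
    """Step Three sketch: sample the central columns (vertical rhythm)
    or central rows (horizontal rhythm) of each spectrum frame and
    concatenate them into a single image."""
    strips = []
    for frame in spectrum_frames:
        h, w = frame.shape
        if axis == "vertical":                      # central columns
            c = w // 2
            strips.append(frame[:, c - n_lines // 2 : c + n_lines // 2])
        else:                                       # central rows, transposed
            r = h // 2
            strips.append(frame[r - n_lines // 2 : r + n_lines // 2, :].T)
    return np.hstack(strips)
```
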
Step Three

Example of horizontal visual rhythms (rotated by 90 degrees)
(d) Valid video  (e) Attack attempt video

Step Three

Example of vertical visual rhythms
(f) Valid video  (g) Attack attempt video

Step Four

Visual Rhythm as a Texture Map
- Gray-level co-occurrence matrices (GLCM) are used to extract textural information from the visual rhythm
- A GLCM is a structure that describes the frequency of occurrence of gray-level pairs of pixels at a distance d = 1 in a given orientation θ ∈ {0°, 45°, 90°, 135°}
- We extract 12 measures summarizing textural information from the four matrices
  - Angular second moment, contrast, correlation, sum of squares, inverse difference moment, ...

Step Four

(Illustration of the GLCM computation)

Step Four

- Angular second moment: $\sum_{i=0}^{G-1} \sum_{j=0}^{G-1} p(i,j)^2$
- Correlation: $\dfrac{\sum_{i=0}^{G-1} \sum_{j=0}^{G-1} i\,j\,p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y}$
- Contrast: $\sum_{i=0}^{G-1} \sum_{j=0}^{G-1} (i - j)^2\, p(i,j)$
- ...

where p is the normalized $h_{d,\theta}$ matrix.

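A possible sketch of the GLCM description step, assuming scikit-image (graycomatrix/graycoprops, versions ≥ 0.19). Note that scikit-image exposes only six of the classical Haralick measures, whereas the slides mention twelve; the helper name and normalization are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(visual_rhythm):
    """Step Four sketch: describe a visual rhythm with GLCM statistics
    at distance d = 1 and orientations 0, 45, 90 and 135 degrees."""
    lo, hi = visual_rhythm.min(), visual_rhythm.max()
    img = np.uint8(255 * (visual_rhythm - lo) / (hi - lo + 1e-8))   # quantize to 8 bits
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, normed=True)
    props = ["ASM", "contrast", "correlation", "homogeneity", "energy", "dissimilarity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```
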
Step Five

Machine Learning
- We use two machine learning techniques to classify the patterns extracted from the visual rhythms with the GLCM texture descriptor:
  - Partial Least Squares (PLS)
  - Support Vector Machine (SVM)

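Putting the steps together, a minimal end-to-end sketch that reuses the illustrative helpers from the previous steps (all names are assumptions, not the authors' implementation):

```python
import numpy as np

def describe_video(frames, kernel_size=3, n_lines=30):
    """Steps 1-4 sketch: noise residual -> log Fourier spectrum ->
    horizontal and vertical visual rhythms -> concatenated GLCM descriptor."""
    residuals = noise_residual_video(frames, kernel_size)                  # Step One
    spectra = [log_fourier_spectrum(r) for r in residuals[:50]]            # Step Two (first 2 s at 25 fps)
    rhythm_v = visual_rhythm(spectra, axis="vertical", n_lines=n_lines)    # Step Three
    rhythm_h = visual_rhythm(spectra, axis="horizontal", n_lines=n_lines)
    return np.hstack([glcm_features(rhythm_v), glcm_features(rhythm_h)])   # Step Four

# Step Five: feed the descriptors of all training videos into PLS or SVM.
```
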
Dataset Creation

Extension of the Print-Attack Dataset
- 200 videos of valid accesses
- 200 videos of spoofing attacks using printed photographs
- All videos with 320 × 240 pixel resolution

Creation of the attack attempt videos
- All videos representing a valid access were upsampled to 640 × 480 pixel resolution
- They were displayed on 6 monitors and recaptured with a Sony CyberShot digital camera

Dataset Partitioning

(Diagram of the dataset partitioning into the Real 1/Fake 1 and Real 2/Fake 2 groups)

What is the Influence of the Monitors?

To verify the influence of the monitors on our method, we performed the experiments as follows:
- Train with the Real 1 and Fake 1 groups and test with the Real 2 and Fake 2 groups
- Train with the Real 2 and Fake 2 groups and test with the Real 1 and Fake 1 groups
- Finally, we compute the average and standard deviation of the results

Analysis of the Filtering Process and Visual Rhythm

Filtering process analysis
- We use either a Gaussian or a Median filter (a linear and a non-linear filter, respectively) in the filtering process
  - Median filter of size 3 × 3
  - Gaussian filter with σ = 2 and size 3 × 3

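For reference, the two filter configurations above can be instantiated as follows (a sketch assuming OpenCV; the parameter values are the ones stated on the slide):

```python
import cv2

def filtered_copy(frame, kind="median"):
    """f(V_copy) from Step One: a low-pass filtered copy of the frame."""
    if kind == "median":
        return cv2.medianBlur(frame, 3)                 # 3 x 3 median filter
    return cv2.GaussianBlur(frame, (3, 3), sigmaX=2)    # 3 x 3 Gaussian, sigma = 2
```
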
Analysis of the Filtering Process and Visual Rhythm

Visual rhythm analysis
- The visual rhythms were computed using the first 2 seconds of video (50 frames)
  - Vertical visual rhythm: 30 columns of pixels per frame
  - Horizontal visual rhythm: 30 rows of pixels per frame
- We performed experiments using the horizontal and vertical visual rhythms separately and combined

Table 1: Number of features (dimensions) using either the direct pixel intensities as features or the GLCM-based texture information features (e.g., horizontal rhythm: 50 frames × 30 rows × 640 pixels = 960,000 values).

Descriptor        Horizontal   Vertical   Horizontal + Vertical
Pixel Intensity   960,000      720,000    1,680,000
GLCM              48           48         96

Classification Techniques

Partial Least Squares (PLS)
- We performed experiments considering different numbers of factors (the only parameter of this method)

Support Vector Machine (SVM)
- $K(x_i, x_j) = x_i^T x_j$ (linear kernel)
- $K(x_i, x_j) = e^{-\gamma \|x_i - x_j\|^2}, \ \gamma > 0$ (RBF kernel)
- Grid search for tuning the parameters C and γ

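The SVM tuning could be reproduced along these lines with scikit-learn; the grid values are placeholders, since the slides do not list the actual search ranges:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# X: GLCM descriptors of the training videos, y: labels (1 = valid, 0 = attack)
param_grid = [
    {"kernel": ["linear"], "C": [0.01, 0.1, 1, 10, 100]},
    {"kernel": ["rbf"], "C": [0.01, 0.1, 1, 10, 100],
     "gamma": [1e-4, 1e-3, 1e-2, 1e-1]},
]
search = GridSearchCV(SVC(), param_grid, scoring="roc_auc", cv=5)
# search.fit(X, y)
# best_svm = search.best_estimator_
```
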
Results

Table 2: Results in terms of area under the receiver operating characteristic curve (AUC, mean ± standard deviation), considering the SVM classification technique and the Gaussian filter. SVM was not able to compute a classification hyperplane when using direct pixel intensities as features.

Type of Visual Rhythm    SVM Linear (Intensity | GLCM)    SVM RBF (Intensity | GLCM)
Vertical                 – | 98.4% ± 1.60%                – | 99.9% ± 0.10%
Horizontal               – | 99.6% ± 0.50%                – | 99.7% ± 0.10%
Horizontal + Vertical    – | 100.0% ± 0.0%                – | 100.0% ± 0.0%

Results

Table 3: Results in terms of AUC (mean ± standard deviation), considering the SVM classification technique and the Median filter. SVM was not able to compute a classification hyperplane when using direct pixel intensities as features.

Type of Visual Rhythm    SVM Linear (Intensity | GLCM)    SVM RBF (Intensity | GLCM)
Vertical                 – | 99.7% ± 0.20%                – | 99.6% ± 0.10%
Horizontal               – | 99.9% ± 0.10%                – | 100.0% ± 0.0%
Horizontal + Vertical    – | 100.0% ± 0.0%                – | 100.0% ± 0.0%

Results

Table 4: Results in terms of AUC (mean ± standard deviation), considering the PLS classification technique and the Gaussian filter.

Type of Visual Rhythm    PLS (Intensity | GLCM)
Vertical                 99.9% ± 0.20% | 98.2% ± 0.40%
Horizontal               100.0% ± 0.0% | 98.9% ± 1.50%
Horizontal + Vertical    100.0% ± 0.0% | 99.9% ± 0.10%

Results

Table 5: Results in terms of AUC (mean ± standard deviation), considering the PLS classification technique and the Median filter.

Type of Visual Rhythm    PLS (Intensity | GLCM)
Vertical                 100.0% ± 0.0% | 99.5% ± 0.70%
Horizontal               100.0% ± 0.0% | 99.9% ± 0.10%
Horizontal + Vertical    100.0% ± 0.0% | 100.0% ± 0.0%

Summary

The visual rhythm computed on the logarithmic-scale Fourier spectrum
- Is an effective alternative to summarize videos and an important forensic signature for detecting video-based spoofing

The filtering process does not influence our method
- The results obtained with the Median and Gaussian filters are statistically comparable

Summary

The monitors do not influence our method
- Although the standard deviations shown in Table 2 reach 1.60% and 0.50% for the vertical and horizontal visual rhythms, respectively
- The combination of these features resulted in a perfect classification (100.0% ± 0.0%)

Summary

The monitors do not influence our method
- Although the standard deviations shown in Table 4 reach 1.50% (horizontal) and 0.40% (vertical)
- The combination of these features resulted in a nearly perfect classification (99.9% ± 0.10%)

Conclusion and Future Work

The Fourier spectrum of the video noise signatures and the use of visual rhythms
- Are able to capture discriminative information to distinguish between valid and fake users under video-based spoofing

Feature extraction with GLCM provides a compact representation while preserving the discriminability of the method
- Many classification techniques have memory allocation problems when dealing with high-dimensional feature spaces

Conclusion and Future Work

Finally, directions for future work include:
- The exploration of new video summarization approaches, as well as the use of more monitors and real videos
- Additional tests considering tablets and smartphones
- The investigation of illumination influences on the proposed method
- New experiments on a new dataset (videos in Full HD quality)

Acknowledgment
