Yun-Qing Shi, Hyoung Joong Kim,
Fernando Pérez-González, Ching-Nung Yang (Eds.)

Digital Forensics and Watermarking
13th International Workshop, IWDW 2014
Taipei, Taiwan, October 1–4, 2014
Revised Selected Papers
Lecture Notes in Computer Science 9023
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zürich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7410
Editors

Yun-Qing Shi
NJIT, Newark, NJ, USA

Hyoung Joong Kim
Korea University, Seoul, Republic of Korea (South Korea)

Fernando Pérez-González
Universidad de La Laguna, La Laguna, Spain

Ching-Nung Yang
Department of Computer Science and Information Engineering,
National Dong Hwa University, Hualien, Taiwan
Honorary Chairs
Heekuck Oh, KIISC, Korea
Ruay-Shiung Chang, National Taipei University of Business, Taiwan

General Chairs
Ching-Nung Yang, National Dong Hwa University, Taiwan
Chin-Chen Chang, Feng Chia University, Taiwan

Europe
Anthony TS Ho, Surrey University, UK
Mauro Barni, Siena University, Italy

America
C.-C. Jay Kuo, University of Southern California, USA
Nasir Memon, New York University Polytechnic School of Engineering, USA

Organizing Chairs
Jou-Ming Chang, National Taipei University of Business, Taiwan
An-Hang Chen, National Taipei University of Business, Taiwan
Sheng-Lung Peng, National Dong Hwa University, Taiwan

Organizing Committee
Guanling Lee, National Dong Hwa University, Taiwan
Shou-Chih Lo, National Dong Hwa University, Taiwan
Hung-Lung Wang, National Taipei University of Business, Taiwan
Jinn-Shyong Yang, National Taipei University of Business, Taiwan
Contents
Forensics
Watermarking
Stereo Image Coding with Histogram-Pair Based Reversible Data Hiding . . . . 201
Xuefeng Tong, Guangce Shen, Guorong Xuan, Shumeng Li, Zhiqiang Yang, Jian Li, and Yun-Qing Shi
Reversible Shared Data Hiding Based on Modified Signed Digit EMD . . . . . 266
Wen-Chung Kuo, Chun-Cheng Wang, Hong-Ching Hou, and Chen-Tsun Chuang
Visual Cryptography
Poster Session
New Developments in Image Tampering Detection

1 Introduction
In the digital age, manipulating digital images has become much easier than ever before. Fake images are now often found in the news media, so one can no longer simply assume that an image shows what actually took place. Image tampering detection has therefore become an urgent and important issue in image forensics research: one needs to tell, without a priori knowledge, whether a given image has been tampered with and, if so, where. Over the past decade and more, researchers have devoted considerable effort to image tampering detection, and quite a number of methods have been proposed. In [1, 2], two early algorithms for detecting tampering and duplicated regions in images were proposed. Later, image lighting inconsistency [3], camera parameters [4], bicoherence statistics [5], the Hilbert-Huang transform [6], and statistics of 2-D phase
congruency [7] were used to detect tampered images. Depending on the kind of features utilized, there are two main approaches to image tampering detection: the specific approach (designed to detect tampering caused by one particular method) and the general approach (designed to detect tampering caused by various methods).
The Columbia Image Splicing Detection Evaluation Dataset [8] was established in 2004. As the first publicly available dataset, it has played an important role in advancing research on image tampering detection. The blind splicing detection methods of [6, 7, 9] achieved detection rates of 72%, 80%, and 82% on it, respectively. More advanced statistical models pushed the detection rate to slightly above 91% [10, 11], and a rate of 93% was reported in [12] with additional features derived from the wavelet transform domain.
However, the numbers of spliced and of original images in the Columbia dataset (each slightly below 1,000) are not large enough to let researchers apply high-dimensional feature sets; that is, the more advanced statistical modeling cannot be applied to image splicing detection. Furthermore, the source images from which the dataset was built are JPEG images, so some JPEG artifacts remain, and these have in fact unreliably boosted the detection rate. This partially explains why tampering detectors that perform well on this dataset cannot achieve high detection rates on real test images: a system trained on the Columbia dataset that achieved a 91% detection rate reached only 66% on a group of 60 tampered images collected from the open domain [13].
To actively advance research on image tampering detection, the Technical Committee (TC) on Information Forensics and Security (IFS) of the IEEE Signal Processing Society organized a worldwide competition on image tampering detection in the summer and fall of 2013 [14]. The competition consisted of two phases. In the first phase, 1,500 images were provided for training and 5,713 for testing; participants could use the training images to train and validate their classifiers and were then expected to decide whether each image in the test set had been tampered with. So that the dataset can continue to be used on the website after the 2013 competition, the organizers have not released the ground truth for the test images; they only provide performance scores computed from the test results submitted by participating teams. In the second phase, each team worked on a set of tampered images and was asked to localize, with pixel accuracy, where the tampering took place in each image. At the end of the competition, the organizers awarded the top two teams as Winner and Runner-up, respectively.
The authors of this paper won second place (Runner-up) in both phases of the competition [14]. In this paper, we first report what we used in the first phase: a blind and effective image tampering detection method based on advanced statistical image modeling. The statistical model is derived from advanced steganalysis research, with modifications that drastically reduce the feature dimensionality. For the second phase, we used two different technologies popularly used in copy-move
detection, which complement each other. We expect that, by presenting the newest technologies used in image tampering detection together with a summary of past research in this direction, this paper will help to further advance research on image tampering detection.
The rest of the paper is organized as follows. In Sect. 2, we describe how we detect whether a given image has been tampered with. In Sect. 3, given an image identified as tampered, we present our approaches to localizing the tampered portions with pixel-level accuracy. Discussion and a summary are given in Sect. 4.
In the first phase of the IEEE competition mentioned above, our group developed three schemes in succession. The one that achieved the best performance among the three utilizes a subset of a high-dimensional feature set originally designed and used for modern steganalysis.
2.2 Combination of LBP and LDP with Two Other Types of Features
Given the initial success of using the LBP, it is natural to try to enhance performance by building more advanced, and hence more complicated, statistical models. In [10, 11], the
moments of 1-D and 2-D characteristic functions and the Markov transition probability matrices extracted from multi-size block DCT coefficients of images were combined; this statistical model achieved success in experiments on the Columbia dataset [8]. Inspired by the success of [10, 11], we designed an even more complex statistical model consisting mainly of the following four parts:
• LBP [15] calculated from the original image (256D).
• LDP [19] calculated from the original image (512D).
• Moments of characteristic functions calculated from the original image and the multi-block DCT 2-D arrays [10] (168D).
• Markov transition probability matrices calculated from the multi-block DCT 2-D arrays [10, 20] (972D).
Local Derivative Patterns (LDP), proposed in [19], are another set of statistical features; they capture directional changes of the derivatives of neighboring pixels with respect to the central pixel. In [19], LDP was reported to outperform LBP in face recognition, so we adopted LDP as the second set of features in this statistical model. Because the full feature size is large, only the horizontal and vertical directions were considered here, resulting in 2 × 256 = 512D features.
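To make the 256-D LBP part concrete, the following is a minimal sketch of extracting an LBP histogram from a grayscale image in plain NumPy. It is an illustration only: the function name and the neighbour ordering are our own assumptions, not the authors' implementation or the exact configuration of [15].

```python
import numpy as np

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """8-neighbour LBP of a grayscale image, returned as a 256-D histogram."""
    img = img.astype(np.float64)
    center = img[1:-1, 1:-1]
    # Offsets of the 8 neighbours, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        # Set this bit wherever the neighbour is at least the centre value.
        code |= ((neighbour >= center).astype(np.uint8) << bit)
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()   # normalised 256-D feature vector
```

The 512-D LDP part can be built analogously by applying the same kind of encoding to the horizontal and vertical first-order derivative maps instead of the raw pixel values.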
Moments of characteristic functions and Markov transition probability matrices have been successfully used in [10, 11, 20]. Here, the Markov features were calculated from all of the 2 × 2, 4 × 4, and 8 × 8 block DCT coefficient arrays.
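The following is a hedged sketch of one such Markov feature, in the spirit of [10, 20]: a horizontal transition probability matrix computed from thresholded differences of block-DCT coefficient magnitudes. The block size, the threshold T, and all names are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(img: np.ndarray, block: int = 8) -> np.ndarray:
    """2-D DCT applied independently to each non-overlapping block."""
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
    out = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block].astype(np.float64)
            out[y:y + block, x:x + block] = dct(
                dct(patch.T, norm='ortho').T, norm='ortho')
    return out

def markov_features(img: np.ndarray, block: int = 8, T: int = 4) -> np.ndarray:
    """Horizontal transition matrix of thresholded DCT difference values."""
    coeffs = np.round(np.abs(blockwise_dct(img, block)))
    # Horizontal differences, clipped to [-T, T] as in Markov-based models.
    diff = np.clip(coeffs[:, :-1] - coeffs[:, 1:], -T, T).astype(int)
    trans = np.zeros((2 * T + 1, 2 * T + 1))
    for i in range(-T, T + 1):
        rows = diff[:, :-1] == i
        denom = rows.sum()
        for j in range(-T, T + 1):
            trans[i + T, j + T] = np.logical_and(
                rows, diff[:, 1:] == j).sum() / max(denom, 1)
    return trans.ravel()   # (2T+1)^2 features per direction
```

Calling `markov_features` for each block size in (2, 4, 8) and for both horizontal and vertical differences, then concatenating the results, yields a feature vector of the same flavour as the 972-D part listed above.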
The block diagram of this statistical model is shown in Fig. 1. The total feature dimensionality is 1,908D.
We used the same training process as in our first attempt described above (applying LBP alone). The result is shown in Table 3. The training accuracy rose from 91.2% to 94.9%, which made us confident that accuracy would also improve on the testing set. Surprisingly, however, the feedback result was only around 81%. Most likely, this anomaly was caused by a bug in the test-score calculation, which was reported by some participants and later fixed by the organizers.
In all of the following experiments, in the training phase we set the training/testing ratio to 0.8/0.2, the subspace dimensionality to 300, and the number of splits to 13; all other settings remained at their defaults. Note that there could be better parameter choices, but we could not try them all. Apart from the feature-subspace ensemble, we also applied a data ensemble. As there were 100 more samples in the positive class and the classifier requires equal training sizes per class, a random selection of 1,050 out of 1,150 samples was carried out before training. The data ensemble makes sense because the competition used balanced accuracy for evaluation, so there was no reason to bias the training size of either class. The number of data ensembles was also 13, so 13 × 13 = 169 classifiers were built during training. Training errors were computed and averaged by classifying the remaining 20% of the data available for the training stage. The 169 trained classifiers were then used to classify the test set of 5,713 images, and the final decision for each test image was made by majority vote over the 169 individual decisions.
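A schematic sketch of this 13 × 13 ensemble follows: 13 random balanced subsamples of the training data crossed with 13 random 300-D feature subspaces, with final decisions by majority vote. The base learner chosen here (scikit-learn's linear discriminant) is an assumption for illustration; the actual system used the ensemble classifier of [22].

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_ensemble(X, y, n_data=13, n_subspace=13, dim=300, seed=0):
    """13 balanced data subsamples x 13 random feature subspaces."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    n_per_class = min(len(pos), len(neg))        # equalise the class sizes
    models = []
    for _ in range(n_data):
        idx = np.concatenate([rng.choice(pos, n_per_class, replace=False),
                              rng.choice(neg, n_per_class, replace=False)])
        for _ in range(n_subspace):
            feats = rng.choice(X.shape[1], dim, replace=False)
            clf = LinearDiscriminantAnalysis().fit(X[idx][:, feats], y[idx])
            models.append((feats, clf))
    return models                                 # 13 x 13 = 169 classifiers

def predict_ensemble(models, X):
    """Majority vote over all trained classifiers."""
    votes = np.stack([clf.predict(X[:, feats]) for feats, clf in models])
    return (votes.mean(axis=0) > 0.5).astype(int)
```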
Table 4 shows the average error rate (0.5 × FP + 0.5 × FN) using the full feature set of [21]. The training accuracy reaches 96.35%, and the online feedback testing result was around 91.7%.
So far we have shown that the full set of features (12,753-D) generated from the rich model works quite well. Despite this success, we suspected that not all of the features in this rich model contribute positively. Furthermore, image tampering and image steganography behave differently and hence make different changes to the original image. Last but not least, lowering the feature dimensionality is attractive because there are not enough training images to support a high feature dimensionality. Hence, we took one step further and selected a subset of the whole feature set of [21], aiming at improving accuracy.
Our selection proceeds as follows:
• Divide the whole rich model (12,753-D) into groups.
• Find the classification error and the standard deviation (STD) on the training data for each group.
• Select and combine groups by simultaneously considering the classification errors and the STDs.
When forming groups, we basically followed the residual types, i.e., first order (S1), second order (S2), third order (S3), edge 3 × 3 (S3 × 3), edge 5 × 5 (S5 × 5), and 3 × 3/5 × 5 spam (S35_spam). Features calculated from 'spam' and 'minmax' residuals were grouped separately; for details of the residual types, please refer to [21]. We included the STD in our feature selection because all of the features are generated from co-occurrence arrays, some of which have quite uneven distributions, and some even contain many zeroes, implying sparsely populated histogram bins.
Since the mean of each co-occurrence array is the same (or close, considering that the minmax co-occurrence matrices have 325 bins and the spam co-occurrence matrices have 338 bins), it was natural to use the standard deviation to measure the distribution of the co-occurrence values. Here we assumed that a lower STD implies a better distribution and a higher STD a worse one.
Table 5 shows the error rates and STDs for each group of residual images. Considering both factors, we chose three groups, S3_spam, S3 × 3_spam, and S3 × 3_minmax, which reduces the total feature dimensionality to 1,976D. As the feature size of S3 × 3_minmax (1,300D) was still high, we performed backward selection on its four sub-groups (four co-occurrence matrices), i.e., each time we removed one 325D matrix. Only one 325D sub-group (S3 × 3_minmax24) was removed; we could probably remove more, which we leave to future research. The details of this backward selection are shown in Table 6. The total dimensionality has now been reduced to 1,651D, and this 1,651D feature set served as our final feature set for the competition. To make it explicit, the final set is S3_spam (338D) + S3 × 3_spam (338D) + S3 × 3_minmax22h (325D) + S3 × 3_minmax22v (325D) + S3 × 3_minmax41 (325D) = 1,651D. The online feedback testing accuracy was around 93.8%–94.0%, about a 2% increase over the whole rich-model feature set. Although this may not be the optimal subset, it is the best we could do in the limited time available.
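For concreteness, here is an illustrative sketch of such a backward selection over feature groups. `evaluate` stands for whatever error estimate is used (e.g., the averaged training error of the ensemble above); the group names and boundaries passed in are assumptions, not the exact ones of Table 6.

```python
import numpy as np

def backward_select(X, y, groups, evaluate):
    """Greedily drop one feature group at a time while the error improves.

    groups: dict mapping a group name to its column indices in X.
    evaluate: callable (X_subset, y) -> estimated error rate.
    """
    kept = dict(groups)
    best = evaluate(np.hstack([X[:, idx] for idx in kept.values()]), y)
    improved = True
    while improved and len(kept) > 1:
        improved = False
        for name in list(kept):
            trial = {k: v for k, v in kept.items() if k != name}
            err = evaluate(np.hstack([X[:, idx] for idx in trial.values()]), y)
            if err <= best:              # dropping this group does not hurt
                best, kept, improved = err, trial, True
                break
    return kept, best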
3 Tampering Localization
In Phase 2 of the competition, the participants were required to locate the tampered region pixel-wise in images provided by the organizers, all of which had been tampered with. Before starting this research, we analyzed the major tampering methods using the training data. Judging from the training data and the ground-truth masks of the training set, the tampering methods fall into two major categories: in plain language, tampered regions either originate from the same image (copy-move) or from a different image (splicing). Figure 2 shows one example of each. These two categories may not cover all tampering cases, but they are definitely the mainstream.
Fig. 2. From left to right: splicing example, mask of the splicing example, copy-move example,
mask of the copy-move example.
Once we had defined the tampering methods to work with, the next step was to design the corresponding forensic methods. The main idea of our algorithm design was to break the big problem into smaller problems and to design a specific algorithm for each small problem. Each algorithm was expected to work independently, and the last step was to fuse their outputs.
The copy-move problem can be separated into three categories based on the copied content: smooth areas, textured areas, and objects. The copied content may also be further processed, e.g., by scaling or rotation. To handle tampered areas that are directly copy-moved (without further processing), similarity is compared between image blocks (or patches). We propose a new distance measure: the total Hamming distance between the LBP codes [15] computed from the pixels inside two patches, with similarity measured by this count. This distance measure has the advantage that, owing to camera sensor noise, copied regions can be detected even in smooth regions, not to mention textured regions and objects, as long as no other processing has been applied. As most of the images have more than 1024 × 768 pixels, the search process can be rather slow; to speed it up, we adopted the PatchMatch algorithm [23, 24]. For copy-move cases that involve further processing of the tampered region, we used the popular scale-invariant feature transform (SIFT) [25] features and designed a somewhat naive way to use them. As is well known, SIFT does not work on smooth areas; fortunately, based on our observation and study of popular image-editing software, smooth areas most likely have not undergone any further processing. Table 7 lists the different copy-move cases versus our forensic methods. Because we entered the competition rather late, we only designed algorithms for copy-move tampering localization; splicing localization is left to future work.
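As a minimal sketch of the proposed distance, the helper below sums the bitwise Hamming distances between two patches' 8-bit LBP codes (e.g., the `code` array from the LBP sketch in Sect. 2.2); the names are our own illustrations.

```python
import numpy as np

def lbp_hamming_distance(patch_a: np.ndarray, patch_b: np.ndarray) -> int:
    """Total Hamming distance between two equally sized uint8 LBP patch maps."""
    xor = np.bitwise_xor(patch_a, patch_b)
    # unpackbits expands each uint8 code into its 8 bits, so the sum counts
    # all differing LBP bits over the whole patch.
    return int(np.unpackbits(xor.ravel()).sum())
```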
New Developments in Image Tampering Detection 11
1. To match the patches in image A with the patches in image B according to the similarity measure, an initialization process first assigns each patch in A one patch in B. The most convenient assignment is a random one, which is the initialization adopted in our algorithm. The patch size can be user-defined; in our work, we considered the sizes 5 × 5, 7 × 7, 9 × 9, and 11 × 11.
2. Suppose a patch in A has been assigned a patch in B with low similarity, while one of its neighbors (in A) has been assigned a patch with higher similarity. The patch then looks for a better match at the position in B corresponding to its neighbor's match. This process is referred to as propagation; doing it for all pixels in image A greatly improves the similarity of each patch.
3. A better correspondence for each patch in image A can also be found by searching the areas near its current match in image B.
Steps 2 and 3 are performed iteratively so that the match of each patch in A keeps improving toward the best one.
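The following condenses these three steps into a runnable sketch for self-matching within a single image, as needed for copy-move search, with the LBP Hamming distance above as the cost. This is a simplified version of [23] under our own assumptions (scan order, a `min_dist` guard against trivial self-matches), not the authors' exact implementation.

```python
import numpy as np

def patchmatch_self(codes, p=7, iters=4, min_dist=8, seed=0):
    """Approximate nearest-neighbour field (NNF) of an image against itself.

    codes: uint8 LBP map of the image; p: patch size; min_dist: smallest
    allowed offset, so that a patch cannot trivially match itself.
    """
    rng = np.random.default_rng(seed)
    H, W = codes.shape[0] - p + 1, codes.shape[1] - p + 1

    def eval_cost(y, x, cy, cx):
        if abs(cy - y) + abs(cx - x) < min_dist:  # reject near-identity matches
            return np.inf
        xor = np.bitwise_xor(codes[y:y+p, x:x+p], codes[cy:cy+p, cx:cx+p])
        return float(np.unpackbits(xor.ravel()).sum())

    # Step 1: random initialisation of the NNF (absolute match coordinates).
    nnf = np.stack([rng.integers(0, H, (H, W)),
                    rng.integers(0, W, (H, W))], axis=-1)
    cost = np.array([[eval_cost(y, x, *nnf[y, x]) for x in range(W)]
                     for y in range(H)])
    for it in range(iters):
        step = 1 if it % 2 == 0 else -1            # alternate scan order
        ys = range(H) if step == 1 else range(H - 1, -1, -1)
        xs = list(range(W)) if step == 1 else list(range(W - 1, -1, -1))
        for y in ys:
            for x in xs:
                # Step 2: propagation -- shift a scanned neighbour's match here.
                for ny, nx in ((y - step, x), (y, x - step)):
                    if 0 <= ny < H and 0 <= nx < W:
                        cy = int(np.clip(nnf[ny, nx][0] + (y - ny), 0, H - 1))
                        cx = int(np.clip(nnf[ny, nx][1] + (x - nx), 0, W - 1))
                        d = eval_cost(y, x, cy, cx)
                        if d < cost[y, x]:
                            nnf[y, x], cost[y, x] = (cy, cx), d
                # Step 3: random search in shrinking windows around the match.
                radius = max(H, W)
                while radius >= 1:
                    cy = int(np.clip(nnf[y, x][0] +
                                     rng.integers(-radius, radius + 1), 0, H - 1))
                    cx = int(np.clip(nnf[y, x][1] +
                                     rng.integers(-radius, radius + 1), 0, W - 1))
                    d = eval_cost(y, x, cy, cx)
                    if d < cost[y, x]:
                        nnf[y, x], cost[y, x] = (cy, cx), d
                    radius //= 2
    return nnf, cost
```

Patches whose final cost is very low then mark candidate copy-move pairs.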
Fig. 4. Left to right: original image, output with patch size 5 × 5, ground truth, processed output.
[Four further figures, each with panels (a) original image; (b)–(e) outputs with patch sizes 5 × 5, 7 × 7, 9 × 9, and 11 × 11; (f) ground truth; (g)–(j) the corresponding processed outputs; (k) combined result.]
[Fig. 9: two examples, each with panels original image, SIFT matched points, processed output, and ground truth.]
processed masks. Note that there are still many false positives. These come mainly from (1) the areas from which the tampered regions were copied and (2) areas near the boundaries of image content. More research is needed to control these false positives.
that have addressed this problem using SIFT. Again, evaluating them on this dataset should be part of our future work. At the time of the competition, a simple (somewhat naive) use of SIFT was adopted.
A simple implementation of SIFT [28] was adopted as the feature-extraction tool. For each detected feature point, a 128-D feature vector is generated. We performed a brute-force search for matched points by computing the Euclidean distance between feature vectors, and a threshold was set to filter out the point pairs that locate the copy-moved region. Two examples are shown in Fig. 9. After matching, a morphological dilation with a square of size min(height, width)/8 was performed around each matched point so as to cover the whole tampered region. This usage is quite coarse and naive, yet its performance is still acceptable.
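An illustrative sketch of this coarse SIFT-based localisation follows, written with OpenCV rather than the VLFeat implementation [28] used at the time. The dilation size min(height, width)/8 follows the text; the distance threshold and all other names are assumed values for illustration.

```python
import cv2
import numpy as np

def sift_copy_move_mask(gray: np.ndarray, dist_thresh: float = 200.0) -> np.ndarray:
    """Coarse copy-move mask from matched SIFT keypoints plus dilation."""
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(gray, None)
    # Match every 128-D descriptor against all others; k=2 because the
    # nearest neighbour of a descriptor within the same set is itself.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc, desc, k=2)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    for pair in matches:
        if len(pair) < 2:
            continue
        _, m_near = pair                      # skip the trivial self-match
        if m_near.distance < dist_thresh:     # assumed threshold value
            for idx in (m_near.queryIdx, m_near.trainIdx):
                x, y = map(int, kps[idx].pt)
                mask[y, x] = 255
    # Dilate each matched point with a square of side min(h, w)/8, as in the text.
    side = max(1, min(gray.shape[:2]) // 8)
    return cv2.dilate(mask, np.ones((side, side), np.uint8))
```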
In Phase 1 of the competition organized by the IEEE IFS-TC, the winning team took the strategy of merging two tampering detection methods [29]: statistical-model-based classification and copy-move detection. Similar to our approach, they also used a subset of the rich model as features. Whereas we used ensemble classifiers and simultaneously considered the classification performance and the standard deviations of the features to arrive at an efficient subset of the original rich model, they used an SVM classifier and the area under the receiver operating characteristic curve as the measure for feature selection. They found that tampering detection with the statistical model alone produced many missed detections. This problem was alleviated by introducing a copy-move detector which, in their experiments, could efficiently 'catch' the detections missed by the statistical classifier. Eventually, the two tampering detection methods were merged, and their Phase 1 testing score was boosted to 94.2%.
In Phase 2 of the competition, the winning team adopted and fused three methods [30]: PRNU-based tampering detection, copy-move detection, and statistical-model-based classification. Compared with the other two, the PRNU-based detector was the most reliable, as it exploits information about the camera noise. Although the camera information and noise patterns were not provided, the winning team successfully extracted them by applying a noise-based fingerprint clustering method to the training data. However, there were some camera mismatches between the training and testing datasets, and PRNU-based methods become unreliable in dark, saturated, or highly textured regions. Hence, they adopted a copy-move detector as their second approach: the original version of PatchMatch, which can also detect rotated and scaled copy-move regions. Their statistical method used the same feature set developed during Phase 1 of the competition. Combined with a sliding
References
1. Farid, H., Lyu, S.: Higher-order wavelet statistics and their application to digital forensics.
In: IEEE Workshop on Statistical Analysis in Computer Vision, vol. 8, p. 94, June 2003
2. Fridrich, J., Soukal, D., Lukáš, J.: Detection of copy-move forgery in digital images. In:
Proceedings of Digital Forensic Research Workshop (2003)
3. Johnson, K., Farid, H.: Exposing digital forgeries by detecting inconsistencies in lighting. In:
Proceedings of the 7th Workshop on Multimedia and Security, pp. 1–10. ACM, August
2005
4. Huang, Y.: Can digital image forgery detection be unevadable? A case study: color filter array interpolation statistical feature recovery. In: Visual Communications and Image Processing 2005, p. 59602W. International Society for Optics and Photonics, July 2005
5. Ng, T.-T., Chang, S.-F., Sun, Q.: Blind detection of photomontage using higher order statistics. In: Proceedings of the 2004 International Symposium on Circuits and Systems (ISCAS 2004), vol. 5, p. V-688. IEEE, May 2004
6. Fu, D., Shi, Y.Q., Su, W.: Detection of image splicing based on Hilbert-Huang transform
and moments of characteristic functions with wavelet decomposition. In: Shi, Y.Q., Jeon, B.
(eds.) IWDW 2006. LNCS, vol. 4283, pp. 177–187. Springer, Heidelberg (2006)
7. Chen, W., Shi, Y.Q., Su, W.: Image splicing detection using 2-D phase congruency and
statistical moments of characteristic function. In: Society of Photo-Optical Instrumentation
Engineers (SPIE) Conference Series, vol. 6505, p. 26, February 2007
8. DVMM Research Lab. Columbia Image Splicing Detection Evaluation Dataset (2004).
http://www.ee.columbia.edu/ln/dvmm/downloads/AuthSplicedDataSet/AuthSplicedDataSet.
htm
9. Ng, T.-T., Chang, S.-F., Sun, Q.: Blind detection of photomontage using higher order statistics. In: Proceedings of the 2004 International Symposium on Circuits and Systems (ISCAS 2004), vol. 5, p. V-688. IEEE, May 2004
10. Shi, Y.Q., Chen, C., Chen, W.: A natural image model approach to splicing detection. In:
Proceedings of the 9th Workshop on Multimedia & Security, pp. 51–62. ACM, September
2007
11. Shi, Y.Q., Chen, C.-H., Xuan, G., Su, W.: Steganalysis versus splicing detection. In: Shi, Y.Q., Kim, H.-J., Katzenbeisser, S. (eds.) IWDW 2007. LNCS, vol. 5041, pp. 158–172. Springer, Heidelberg (2008)
12. He, Z., Lu, W., Sun, W., Huang, J.: Digital image splicing detection based on Markov
features in DCT and DWT domain. Pattern Recogn. 45(12), 4292–4299 (2012)
13. http://web.njit.edu/~shi/tech-report (2009)
14. http://ifc.recod.ic.unicamp.br/
15. Ojala, T., Pietikainen, M., Maenpaa, T.: Multiresolution gray-scale and rotation invariant
texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24
(7), 971–987 (2002)
16. Shi, Y.Q., Sutthiwan, P., Chen, L.: Textural features for steganalysis. In: Kirchner, M.,
Ghosal, D. (eds.) IH 2012. LNCS, vol. 7692, pp. 63–77. Springer, Heidelberg (2013)
17. Xu, G., Shi, Y.Q.: Camera model identification using local binary patterns. In: IEEE
International Conference on Multimedia and Expo (ICME), pp. 392–397. IEEE, July 2012
18. Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines. ACM Trans.
Intell. Syst. Technol. 2, 27:1–27:27 (2011). http://www.csie.ntu.edu.tw/~cjlin/libsvm
19. Zhang, B., Gao, Y., Zhao, S., Liu, J.: Local derivative pattern versus local binary pattern:
face recognition with high-order local pattern descriptor. IEEE Trans. Image Proc. 19(2),
533–544 (2010)
20. Sutthiwan, P., Shi, Y.Q., Zhao, H., Ng, T.-T., Su, W.: Markovian rake transform for digital
image tampering detection. In: Shi, Y.Q., Emmanuel, S., Kankanhalli, M.S., Chang, S.-F.,
Radhakrishnan, R., Ma, F., Zhao, L. (eds.) Transactions on Data Hiding and Multimedia
Security VI. LNCS, vol. 6730, pp. 1–17. Springer, Heidelberg (2011)
21. Fridrich, J., Kodovsky, J.: Rich models for steganalysis of digital images. IEEE Trans. Inf.
Forensics Secur. 7(3), 868–882 (2012)
22. Kodovsky, J., Fridrich, J., Holub, V.: Ensemble classifiers for steganalysis of digital media.
IEEE Trans. Inf. Forensics Secur. 7(2), 432–444 (2012)
23. Barnes, C., Shechtman, E., Finkelstein, A., Goldman, D.: PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 28(3), 24 (2009)
24. Barnes, C., Shechtman, E., Goldman, D.B., Finkelstein, A.: The generalized patchmatch
correspondence algorithm. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010,
Part III. LNCS, vol. 6313, pp. 29–43. Springer, Heidelberg (2010)
25. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis.
60(2), 91–110 (2004)
26. Christlein, V., Riess, C., Jordan, J., Angelopoulou, E.: An evaluation of popular copy-move
forgery detection approaches. IEEE Trans. Inf. Forensics Secur. 7(6), 1841–1854 (2012)
27. Fan, Z., de Queiroz, R.L.: Identification of bitmap compression history: JPEG detection and
quantizer estimation. IEEE Trans. Image Proc. 12(2), 230–235 (2003)
28. http://www.robots.ox.ac.uk/~vedaldi/code/sift.html
29. Cozzolino, D., Gragnaniello, D., Verdoliva, L.: Image forgery detection through residual-
based local descriptors and block-matching. In: IEEE International Conference on Image
Processing (ICIP) (2014)
30. Cozzolino, D., Gragnaniello, D., Verdoliva, L.: Image forgery localization through the
fusion of camera-based, feature-based and pixel-based techniques. In: IEEE International
Conference on Image Processing (ICIP) (2014)