
FAKE IMAGE DETECTION REPORT

TABLE OF CONTENTS

1. Abstract
2. Introduction
3. Objective
4. Modules
5. System Analysis
6. Benefits
7. Software Requirements
8. Source Code / Sample Output
9. Conclusion

ABSTRACT:
Biometric systems are widely used to recognize a person's identity, but criminals can alter their appearance and behaviour to deceive such recognition systems. To overcome this problem, we use a technique called Deep Texture Feature extraction from images and then train a machine learning model using a Convolutional Neural Network (CNN). The technique is referred to as LBPNet (or NLBPNet), since it depends heavily on feature extraction using the Local Binary Pattern (LBP) algorithm. In this project, we design an LBP-based machine learning Convolutional Neural Network, called LBPNET, to detect fake face images. First we extract LBP descriptors from the images, then we train a Convolutional Neural Network on the LBP descriptor images to generate a training model. Whenever a new test image is uploaded, it is passed through the trained model to determine whether it is fake or genuine. Some details of LBP are given below.

INTRODUCTION:
Local Binary Patterns (LBP) is a type of visual descriptor used for classification in computer vision. It is a simple yet very efficient texture operator that labels the pixels of an image by thresholding the neighbourhood of each pixel and encoding the result as a binary number. Because of its discriminative power and computational simplicity, the LBP texture operator has become a popular approach in numerous applications. It can be seen as a unifying approach to the traditionally divergent statistical and structural models of texture analysis. Perhaps the most important property of the LBP operator in real-world applications is its robustness to monotonic gray-scale changes caused, for example, by illumination variations. Another important property is its computational simplicity, which makes it possible to analyse images in demanding real-time settings.
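
As a concrete illustration of the operator described above, a basic LBP histogram can be computed with scikit-image. This is a minimal sketch; the number of sampling points, the radius, and the "uniform" method below are illustrative choices, not values taken from this project.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Summarise a grayscale image (2-D array) as a uniform-LBP histogram."""
    # Threshold each pixel's circular neighbourhood against the centre pixel
    # and encode the comparison results as a binary code per pixel.
    lbp = local_binary_pattern(gray_image, P=points, R=radius, method="uniform")
    # The "uniform" method yields points + 2 distinct codes; a normalised
    # histogram of those codes is a compact, illumination-robust descriptor.
    n_bins = points + 2
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist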

OBJECTIVE:
The objective of this project is to identify fake images (images that have been digitally altered). The problem with existing fake image detection systems is that they can detect only specific tampering methods, such as splicing or colouring. We approach the problem using machine learning and neural networks to detect almost all kinds of image tampering. Using the latest image editing software, it is possible to make alterations to an image that are too subtle for the human eye to detect. Even with a complex neural network, it is not possible to determine whether an image is fake without identifying a common factor across almost all fake images. So, instead of giving raw pixels to the neural network, we give it the error-level-analysed image.
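
As a rough sketch of the error level analysis (ELA) step mentioned above: the image is re-saved as JPEG at a known quality and compared with the original, since edited regions tend to compress differently from the rest of the image. The quality setting and brightness scaling here are illustrative assumptions, not the exact parameters of this project.

import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90):
    """Return an ELA image: the difference between an image and its JPEG re-save."""
    original = Image.open(path).convert("RGB")
    # Re-compress at a fixed quality and reload the result.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    # Per-pixel absolute difference highlights regions with unusual error levels.
    ela = ImageChops.difference(original, resaved)
    # The differences are usually faint, so stretch them to the full range.
    max_diff = max(extrema[1] for extrema in ela.getextrema()) or 1
    return ImageEnhance.Brightness(ela).enhance(255.0 / max_diff)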

MODULES:
Lighting
Composite images made of pieces from different photographs can display subtle differences
in the lighting conditions under which each person or object was originally photographed.
Such discrepancies will often go unnoticed by the naked eye.
Eyes and Positions
Because eyes have very consistent shapes, they can be useful for assessing whether a
photograph has been altered.
Specular Highlights
Surrounding lights reflect in eyes to form small white dots called specular highlights. The
shape, color and location of these highlights tell us quite a bit about the lighting.
Send in the Clones
Cloning—the copying and pasting of a region of an image—is a very common and powerful
form of manipulation.
Camera Fingerprints
Digital retouching rarely leaves behind a visible trace. Because retouching can take many
forms, an algorithm is needed that can detect any modification of an image. One such
technique depends on a feature of how virtually all digital cameras work.
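
As a minimal sketch of the clone-detection idea from the "Send in the Clones" item above, exactly duplicated regions can be found by hashing fixed-size blocks of the image. Practical copy-move detectors use robust features instead, since re-saving and blending break exact matches; the block size here is an illustrative assumption.

import numpy as np

def find_exact_clones(gray, block=16):
    """Report pairs of identical block-sized regions in a grayscale 2-D array."""
    seen = {}
    matches = []
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            # Use the raw bytes of the block as a dictionary key.
            key = gray[y:y + block, x:x + block].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))  # (first seen, duplicate)
            else:
                seen[key] = (y, x)
    # Note: flat regions (e.g. clear sky) match trivially and would need filtering.
    return matches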

SYSTEM ANALYSIS:
EXISTING SYSTEM:
In active approaches, additional hidden information is embedded in an image at capture time
for authentication and forgery-protection purposes. The passive technique does not rely on
extra information; instead, it analyses features extracted from the digital content of the image
itself. Copy-move means copying a part of an image and pasting it into another place in the
same picture, whereas splicing means taking a part of one image and pasting it into another.
PROPOSED SYSTEM:
In this project, we design an LBP-based machine learning Convolutional Neural Network,
called LBPNET, to detect fake face images. First we extract LBP descriptors from the images,
then we train a Convolutional Neural Network on the LBP descriptor images to generate a
training model. Whenever a new test image is uploaded, it is passed through the trained model
to determine whether it is fake or genuine. A rough sketch of this pipeline is shown below.
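
This sketch assumes the LBP map is computed as a dataset transform and fed to a small binary CNN; the network below is an illustrative stand-in, not the exact LBPNET architecture.

import numpy as np
import torch
from torch import nn
from skimage.feature import local_binary_pattern

class LBPTransform:
    """Torchvision-style transform: PIL image -> 1-channel LBP map tensor."""
    def __call__(self, img):
        gray = np.asarray(img.convert("L"), dtype=np.float64)
        lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
        lbp = lbp / max(lbp.max(), 1.0)  # scale the LBP codes to [0, 1]
        return torch.from_numpy(lbp).unsqueeze(0).float()

classifier = nn.Sequential(  # illustrative stand-in for LBPNET
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2))  # two outputs: genuine vs. fake

A transform like this could take the place of transforms.ToTensor() in the ImageFolder datasets shown in the source code section.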

BENEFITS:

⮚ Libraries help to analyse the data.

⮚ Statistical analysis and prediction are very easy compared to existing technologies.

⮚ Results will be more accurate compared to other methodologies.

SOFTWARE REQUIREMENTS:
Operating system : Windows 10
Coding language : Python
Tool : PyCharm

HARDWARE REQUIREMENTS:
System : Pentium IV, 2.4 GHz
Hard disk : 40 GB
RAM : 512 MB

SOURCE CODE:
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
import tqdm

from torchvision.datasets import ImageFolder
from torchvision import transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Datasets: folders of images labelled by sub-directory (real vs. fake)
train_imgs = ImageFolder("./Datafolder/LSUN/train(cat_StyleGAN2)/",
                         transform=transforms.Compose([transforms.ToTensor()]))
test_imgs = ImageFolder("./Datafolder/LSUN/test(cat_StyleGAN2)/",
                        transform=transforms.Compose([transforms.ToTensor()]))

# DataLoader
train_loader = DataLoader(train_imgs, batch_size=64, shuffle=True)
test_loader = DataLoader(test_imgs, batch_size=64, shuffle=False)

# Pelee is a project-local module that defines the classifier network
from Pelee import Model

net = Model(num_classes=2)


def test_net(net, data_loader, device="cpu"):
    # Disable Dropout (and BatchNorm updates) during evaluation
    net.eval()
    ys = []
    ypreds = []
    for x, y in data_loader:
        x = x.to(device)
        y = y.to(device)
        with torch.no_grad():
            _, y_pred = net(x).max(1)
        ys.append(y)
        ypreds.append(y_pred)
    ys = torch.cat(ys)
    ypreds = torch.cat(ypreds)
    acc = (ys == ypreds).float().sum() / len(ys)
    return acc.item()


def train_net(net, train_loader, test_loader,
              optimizer_cls=optim.Adam,
              loss_fn=nn.CrossEntropyLoss(),
              n_iter=10, device="cpu"):
    train_losses = []
    train_acc = []
    val_acc = []
    optimizer = optimizer_cls(net.parameters())

    for epoch in range(n_iter):
        running_loss = 0.0
        net.train()
        n = 0
        n_acc = 0

        for i, (xx, yy) in tqdm.tqdm(enumerate(train_loader),
                                     total=len(train_loader)):
            xx = xx.to(device)
            yy = yy.to(device)
            h = net(xx)
            loss = loss_fn(h, yy)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            n += len(xx)
            _, y_pred = h.max(1)
            n_acc += (yy == y_pred).float().sum().item()

        train_losses.append(running_loss / (i + 1))
        train_acc.append(n_acc / n)
        val_acc.append(test_net(net, test_loader, device))

        print("\n epoch {0} --> loss : {1} / train_acc : {2}% / val_acc : {3}%"
              .format(epoch, train_losses[-1],
                      train_acc[-1] * 100, val_acc[-1] * 100),
              flush=True)

    torch.save(net.state_dict(),
               f"./result/Experiments/StyleGAN2/Pelee-HPF/epoch_{n_iter-1}.pth")


def Evaluate_Networks(Net):
    save_path = "./result/Experiments/StyleGAN2/Pelee-HPF/"
    # Load the trained weights
    Net.load_state_dict(torch.load(save_path + "epoch_49.pth"), strict=False)
    Net = Net.to(device).eval()

    test_data = test_loader

    # Test
    ys = []
    ypreds = []
    for X, Y in tqdm.tqdm(test_data):
        X = X.to(device)
        Y = Y.to(device)
        with torch.no_grad():
            # max(1) returns (values, indices); keep the predicted class indices
            _, y_pred = Net(X).max(1)
        ys.append(Y)
        ypreds.append(y_pred)

    ys = torch.cat(ys)
    ypreds = torch.cat(ypreds)

    # The first 2000 test samples are assumed to be fake, the rest real
    acc_real = (ys[2000:] == ypreds[2000:]).float().sum() / len(ys[2000:])
    acc_fake = (ys[:2000] == ypreds[:2000]).float().sum() / len(ys[:2000])
    acc = (ys == ypreds).float().sum() / len(ys)

    print('\n-----------------------------------------')
    print('Real Accuracy : ', acc_real.item())
    print('Fake Accuracy : ', acc_fake.item())
    print('Total AVG : ', acc.item())


net.to("cuda:0")

# train_net(net, train_loader, test_loader, n_iter=50, device="cuda:0")

# Evaluation
Evaluate_Networks(net)

OUTPUT:

(Sample output screenshots from the original report are not reproduced in this text version.)
CONCLUSION:
In this study, we have proposed a novel common fake feature network (CFFN) based on
pairwise learning to successfully detect the fake face/general images generated by
state-of-the-art GANs. The proposed CFFN can be used to learn middle- and high-level
discriminative fake features by aggregating the cross-layer feature representations into the
last fully connected layers. The proposed pairwise learning can be used to further improve the
performance of fake image detection. With pairwise learning, the proposed fake image
detector should have the ability to identify fake images generated by a new GAN. Our
experimental results demonstrate that the proposed method outperforms other
state-of-the-art schemes in terms of precision and recall. Future work includes, for instance,
employing a more complex and deeper model for unpredictable issues, and integrating deep
neural networks with reinforcement learning, where the model is simpler. Neural network
solutions seldom take into account non-linear feature interactions and non-monotonic
short-term sequential patterns, which are necessary to model user behaviour in sparse
sequence data. A model could be integrated with neural networks to solve this problem. The
dataset can also be expanded, and other kinds of images, for instance gray-scale images, can
be used for training.
