
ABSTRACT

Fake digital content has proliferated in recent years, driven by advances in artificial
intelligence (AI) and deep learning. Deep fakes (fabricated films, images, audio, and videos)
are an alarming and dangerous phenomenon that can distort reality and erode confidence by
presenting a false truth. To address this problem, the Haar Cascade algorithm is used. Haar
cascades are machine-learning object detection algorithms. To detect faces in video frames, we
use the OpenCV Cascade Classifier. A suitable number of frames is extracted from each video;
the classifier processes each frame and returns a list of candidate bounding boxes for faces,
together with similarity scores used to decide whether the video is real or fake.
1. PREAMBLE
1.1 INTRODUCTION
Artificial intelligence, deep learning, and image processing have all risen in popularity
recently, paving the way for the creation of deep-fake images. Deep-fake videos are dangerous:
they can undermine truth, confuse viewers, and convincingly misrepresent reality. With the rise
of social media, the spread of such content has become relentless, potentially aggravating
problems such as disinformation and conspiracy theories. In early instances of deep fakes, a
significant number of prominent political figures, actors, comedians, and performers had their
faces spliced into pornographic videos. Deep-fake videos are considerably more realistic, and
simpler to produce, than conventional Hollywood-style fake videos, which are generally crafted
by hand using image-editing software such as Adobe Photoshop. To achieve face swapping,
deep-fake videos apply deep learning techniques to large collections of video images as input;
the more samples there are, the more realistic the result becomes. Over 56 hours of sample
recordings were fed into the well-known Obama deep-fake video to make it feel genuinely real
and trustworthy. Deep-fake digital content may include fake videos, photographs, artwork,
audio, and more, so techniques to detect, counter, and combat it are essential. If there is a
sound, stable, and trusted way to trace the history of digital data, achieving this goal becomes
straightforward.
1.2 PROBLEM STATEMENT

With the rise of artificial intelligence (AI) and deep learning techniques, fake digital content
has proliferated in recent years. Fabricated films, images, sounds, and videos are a frightening
and dangerous phenomenon, capable of distorting reality and destroying confidence by
presenting a false reality. To help eradicate the epidemic of fabricated content, proof of
authenticity (PoA) of digital media is critical. Present solutions lack the capacity to provide
history tracking and provenance of digital media.
1.3 OBJECTIVES

It is expected that the accuracy of our proposed system will improve on existing approaches.
As previously mentioned, it is hard to distinguish genuine from fake videos, yet such videos
are being used as evidence to convict people. To counter this, we use a variety of techniques
to separate fake from real videos that are submitted as proof of authenticity. A Haar Cascade
is used here: a machine-learning approach in which a large number of genuine and fake
images/videos are used to train the classifier. Genuine images are those containing the object
our classifier is supposed to recognise; fake images are images of anything else that does not
contain the object we are looking for.
1.4 SCOPE

The primary aim of the implemented solution is to help a user trace a video with multiple
versions back to its origin. If a video cannot be traced to its original publisher, it cannot be
trusted. Digital media has long been a target for content thieves who pass off others' work as
their own to make a name for themselves. The same logic applies in the many informal and
obscure settings where a creator's work has been taken for other purposes and the market for
the original item has declined because of the overwhelming exposure of the counterfeit. This
is a problem not only for content creators but also for consumers, who are cheated in the name
of a brand and may be influenced by reproduced and fake content.
2. LITERATURE SURVEY
[7] A smile is one of the most common visible expressions shared among people. While some
smiles arise from genuine delight, others are feigned, so the authors proposed a framework for
distinguishing them. It works by tracking the zygomaticus major and orbicularis oculi
muscles, which are important in determining whether a smile is genuine: the appearance of
wrinkles on the cheekbones and at the corners of the mouth indicates contraction of the
zygomaticus major, while narrowing of the eyes indicates contraction of the orbicularis oculi.
[8] A system was proposed that uses a convolutional neural network (CNN) to extract
frame-level features. These features are then used to train a recurrent neural network (RNN),
which determines whether a video has been manipulated. The authors evaluated the pipeline
against a large collection of fake videos from diverse sources. [12] This work examines the
artist's craft: the authenticity rule determines whether something is a copy or a forgery.
Because of technological advances, establishing the credibility of a work is now much easier
to undermine; the author shows how fake faces are created when Artificial Intelligence is used
to place orders with distinct artists. [13] Through the widespread use of smartphones and the
growth of social networking over many years, digital photos and videos have become
extremely common digital artefacts. The growing popularity of digital images has coincided
with an increase in methods for manipulating image content, such as photo-editing software
like Photoshop. The authors developed a fast and accurate method for detecting facial
manipulation in videos, focusing on two recent techniques for creating hyper-realistic forged
images: Deepfake and Face2Face.
3. SYSTEM REQUIREMENTS
3.1 SYSTEM REQUIREMENT SPECIFICATION
A Software Requirements Specification (SRS) is a document that outlines the characteristics
of a project, software product, or application. In simple terms, an SRS document is a project
manual that must be completed before a project or application can begin. To begin our project,
we required the following functional and non-functional hardware and software requirements.
3.2 HARDWARE REQUIREMENTS
Processor : Intel Pentium dual Core

RAM : 1 GB or above

HDD : 20 GB or above

Monitor : Color Monitor(15”).

Peripherals : Keyboard, Mouse, Multimedia Kit.

3.3 SOFTWARE REQUIREMENTS


Front End : Python

Back End : SQLite3

OS : Windows XP/7/8/10
4. SYSTEM-ANALYSIS
4.1 EXISTING-SYSTEM
The existing system provides a solution and a general framework for tracing and tracking the
provenance and history of digital content back to its original source using Ethereum smart
contracts, even if the digital item is replicated numerous times. The smart contract stores
digital data and metadata using hashes from the InterPlanetary File System (IPFS). Although
that study's solution is centred on video content, its structure is sufficiently general to be
applied to any other sort of digital content. The strategy is founded on the idea that content can
be considered authentic and legitimate if it can be credibly linked to a reputable or trustworthy
source.
4.2 PROPOSED SYSTEM
 The model starts by extracting face features from face-recognition networks using a deep
learning network. The face features are then fine-tuned to make them suitable for real/fake
image recognition. These methods yield good results on the competition's validation data.
 Deep-fake is a portmanteau of "fake" and "deep learning". The deep-learning capability of
artificial intelligence can be used both to build and to detect deep-fakes.
5. SYSTEM DESIGN
5.1 DATA FLOW DIAGRAM
A Data Flow Diagram (DFD) is a diagrammatic representation of the information flows within
a system, indicating how data enters and leaves the system, as well as where data is stored.
Data flow diagrams can be used to depict any business process in a practical way.

Benefits of DFD: These are important documents that users and others connected with the
system can easily comprehend. Users may be included in the DFD review for greater
accuracy, and users who can read the diagrams and spot mistakes early may help avoid system
failure.

The notations used to draw a DFD are as follows:

Table 5.1.1: Symbols used in a DFD (Name / Symbol)

Process

Data Store

Data Flow

External Entity
Fig 5.1.1: DFD Diagram
5.2 SEQUENCE DIAGRAM
UML sequence diagrams, like other interaction diagrams, describe how activities are carried
out: they capture the way objects interact in a collaboration. Sequence diagrams are
time-focused; they use the vertical axis of the diagram to represent time, and the messages sent
and received to portray the order of an interaction visually.

Fig 5.2: Sequence Diagram


5.4 USE-CASE DIAGRAM
Capturing a system's dynamic behaviour is the most important aspect of modelling it.
Dynamic behaviour refers to a system's behaviour while it is running or operating. Static
behaviour alone is insufficient to model a system; dynamic behaviour is more important. The
use-case diagram is one of the five UML diagrams that can be used to show the dynamic
nature of a system.
Fig 5.4: Use-Case Diagram
6. SYSTEM IMPLEMENTATION
Introduction:
Implementation is the process of having systems personnel check out and put new equipment
into use, train users, and install the new application. Depending on the size of the organisation
that will use the application and the risk associated with its use, system developers may
choose to pilot the operation in only one area of the company, such as one department or a few
individuals.

Alternatively, developers may stop using the old system one day and switch to the new one
the next. As we'll see, each implementation strategy has its own justifications, which vary
depending on the business situation. Whatever implementation method is used, developers
strive to ensure that the system's initial use in a demanding environment is not compromised.

In our project the conversion involves the following steps:

o It is expected that the accuracy of our proposed system will improve on existing
approaches. As previously mentioned, it is hard to distinguish genuine from fake videos,
yet they are being used as evidence to convict people. To counter this, we use a variety
of strategies to separate fake from real videos submitted as proof of authenticity.
o A Haar Cascade is used here: a machine-learning approach in which a large number of
genuine and fake images/videos are used to train the classifier. Genuine images are the
photographs containing the object our classifier should recognise; fake images are
images of everything else, which do not contain the object we need to detect.
o First of all, we declare the paths of the training and test samples, as well as the
metadata file.
o The metadata contains a reference to the original video.
o We analyse how many fake and genuine samples there are.

• Haar uses cascades to identify the regions of the image that contain faces.
6.1 ARCHITECTURE DIAGRAM

Fig 6.1: System Architecture

The system accepts a video, converts it into frames, applies the Haar cascade to detect
faces, compares frames from the test video, and predicts whether the video is real or fake.
6.2 MODULES
Read Video

First of all, a video is really a sequence of images that gives the appearance of motion, so
videos can be viewed as a collection of images (frames). We call the function
cv2.VideoCapture to create a capture instance. As an argument we can specify the input video
file name; alternatively, to access a live video stream, we pass the camera parameters instead.

Face Detection

Face detection using Haar cascades is a machine-learning approach in which a cascade
function is trained with a set of input data. OpenCV already contains many pre-trained
classifiers for faces, eyes, smiles, and so on; we use the face classifier.
6.4 ALGORITHM EXPLANATION
Haar Cascade algorithm

It is an object-detection algorithm for recognising faces in a photograph or a live video.
Viola and Jones introduced its edge and line detection features in their 2001 paper, "Rapid
Object Detection Using a Boosted Cascade of Simple Features".

The algorithm can be explained in four stages:

1. Calculating Haar Features.

2. Creating Integral Images.

3. Using Adaboost.

4. Implementing Cascading Classifiers.

Calculating Haar Features

The first step is to collect Haar features. A Haar feature is a set of calculations performed on
adjacent rectangular regions at a specific location in a detection window. The calculation
involves summing the pixel intensities in each region and evaluating the differences between
those sums.

Integral Image Creation

Without delving into much of the mathematics behind them (if you are interested, see the
paper), integral images essentially speed up the calculation of these Haar features. Instead of
computing at every pixel, the image is divided into sub-rectangles, and array references are
created for each of those sub-rectangles; the Haar features are then computed using these.
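The integral image trick can be shown in a few lines of numpy. This is an illustrative sketch, not OpenCV's internal implementation: any rectangle sum reduces to four array lookups, so a two-rectangle Haar feature costs a constant number of operations regardless of rectangle size.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of all pixels above and to the left of (y, x), exclusive.
    A leading row/column of zeros makes rectangle sums easy to express."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h-by-w rectangle with top-left corner (y, x), in O(1):
    four lookups regardless of the rectangle's size."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """A simple two-rectangle Haar feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

A dark-next-to-light pattern (such as the eye region against the bridge of the nose) gives this feature a large magnitude, which is exactly what the detector thresholds on.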
Adaboost Training

Adaboost essentially selects the best features and trains the classifiers to use them. It
combines "weak classifiers" into a "strong classifier" that the algorithm can use to detect
objects. Weak learners are created by sliding a window over the input image and computing
Haar features for each region; each feature's value is compared against a learnt threshold that
separates non-objects from objects. Because these are weak classifiers, a large number of Haar
features are needed to build a strong classifier with high accuracy.
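The boosting step can be sketched in plain numpy. This is a toy version of AdaBoost over single-feature threshold stumps, written for illustration (Viola-Jones uses a more elaborate variant); the function names and the brute-force stump search are our own simplifications.

```python
import numpy as np

def train_adaboost(feats, labels, rounds=10):
    """Minimal AdaBoost over one-feature threshold stumps.
    feats: (n_samples, n_features) Haar-feature responses; labels: +1/-1.
    Returns a list of (feature_index, threshold, polarity, alpha)."""
    n, m = feats.shape
    w = np.full(n, 1.0 / n)               # sample weights, updated each round
    strong = []
    for _ in range(rounds):
        best = None
        # Pick the stump (feature, threshold, polarity) with least weighted error.
        for j in range(m):
            for t in np.unique(feats[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * feats[:, j] < pol * t, 1, -1)
                    err = w[pred != labels].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, pol, pred)
        err, j, t, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # keep alpha finite
        alpha = 0.5 * np.log((1 - err) / err)   # this stump's vote weight
        strong.append((j, t, pol, alpha))
        w *= np.exp(-alpha * labels * pred)     # emphasise the mistakes
        w /= w.sum()
    return strong

def predict(strong, feats):
    """Sign of the alpha-weighted vote of all stumps."""
    score = np.zeros(len(feats))
    for j, t, pol, alpha in strong:
        score += alpha * np.where(pol * feats[:, j] < pol * t, 1, -1)
    return np.where(score >= 0, 1, -1)
```

Each round re-weights the training samples so the next stump concentrates on what the previous ones got wrong; the alpha-weighted vote is the "strong classifier".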

The last step combines these weak learners into a strong learner using cascading classifiers.

Implementing Cascading Classifiers

The cascade classifier consists of a series of stages, where each stage is a collection of weak
learners. Weak learners are trained using boosting, which produces a highly accurate classifier
from the combined prediction of all the weak learners. Based on this prediction, the classifier
either decides that an object was found (positive) or moves on to the next region (negative).
Stages are designed to reject negative samples as quickly as possible, because the majority of
windows contain nothing of interest.
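The early-rejection logic of the cascade can be captured in a few lines. The stage structure below (a list of weak learners per stage, each with a score function and a vote weight) is hypothetical, chosen only to make the control flow explicit.

```python
def cascade_predict(stages, window_feats):
    """Run a window through cascade stages; reject on the first failing stage.
    `stages` is a list of (weak_learners, stage_threshold) pairs, where each
    weak learner is a (score_fn, alpha) pair."""
    for weak_learners, stage_threshold in stages:
        stage_score = sum(alpha * score_fn(window_feats)
                          for score_fn, alpha in weak_learners)
        if stage_score < stage_threshold:
            return False          # negative: reject early, skip later stages
    return True                   # survived every stage: report a detection
```

Because most windows fail an early, cheap stage, the expensive later stages run on only a tiny fraction of the image, which is what makes Haar cascades fast enough for video.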
7. SYSTEM TESTING
7.1 TESTING
Introduction:

Software testing is an important aspect of assuring software quality, and it comprises a
detailed review of specification, design, and coding. Testing is the process of executing
software with the intent of uncovering problems: the programme under test is put through a
series of test cases, and its output for those cases is assessed.

Testing Objectives:

1. Testing is the process of executing software in order to identify flaws.
2. A good test case is one that has a high likelihood of revealing a previously unknown
problem.
3. A successful test is one that identifies a fault that was previously unnoticed.

These points represent a change in perspective: testing can show the presence of faults, but it
cannot prove their absence.
The testing approaches are as follows:

Unit Testing:
Software Testing Strategies:
A software testing strategy can serve as a guide for the developer. Testing is a set of
activities that can be planned in advance and conducted systematically. A framework for
software testing should therefore be defined as a set of steps into which we can place specific
test-case design techniques. Any software testing strategy should have the following
characteristics:
1. Testing begins at the module level and works "outward" towards the entire computer-based
system.
2. Different testing techniques are appropriate at different points in the schedule.
3. Testing is carried out both by the product's developer and by an independent test group.
4. While testing and debugging are two different activities, debugging should be
accommodated in any testing strategy.
7.2 FEASIBILITY STUDY
The feasibility study examines the viability of the project and the likelihood that the system
will be beneficial to the organisation. Its main goal is to determine the technical, operational,
and economic feasibility of adding new modules and debugging an existing running system.
Given unlimited resources and time, any system can be built; the feasibility study is therefore
a planned management activity whose purpose is to determine whether an information-system
project is feasible and to suggest suitable alternatives.
The preliminary investigation considers the following aspects of feasibility:
 Technical Feasibility

 Operational Feasibility

 Economic Feasibility
Technical Feasibility:
It refers to whether the currently available technology fully supports the given application. It
weighs the benefits and drawbacks of using particular software for the development. It also
examines whether additional training needs to be provided to personnel in order for the
application to work. The technical capabilities are then compared against the specific
requirements.
Operational Feasibility:
It refers to the product's capacity to be operated in practice. Some products may perform
brilliantly in design and implementation but fail in the real working environment. It includes
a review of any additional human resources that may be necessary, as well as their technical
abilities. It depends on the human resources available for the project, and it involves
predicting whether the system will actually be used once it is developed and implemented.
Economic Feasibility:
It refers to the benefits or outcomes we gain from an item when compared to the total cost of
maintaining the item. It's pretty much the same as the more established structure, thus it's not
viable to develop the item at that point. The term "financial assessment" can also refer to
"cost/benefit analysis." It is the most commonly used approach for assessing risk.
8. INTERPRETATION OF RESULTS

Figure 2: Home Screen


Figure 5:Dataset
Figure 6: Fake And Real videos
Figure 7: Face Detection (fake video)
Figure 8: Fake Detection
Figure 9: Fake Detection
Figure 10:Similarity Score
CONCLUSION
This study described a Haar Cascade for face detection together with MSE (Mean Squared
Error) and SSIM for automatically identifying deep-fake movies. The primary technique we
use is SSIM, a comparison method for examining two frames, assessing their closeness, and
computing a video similarity index based on visual structures. The Structural Similarity Index
(SSIM) is a perceptual metric that quantifies the image-quality degradation caused by
processing such as data compression or transmission losses; in the case of deep-fake films, a
quality degradation in the frames is induced by a less well-trained neural network. It is a
full-reference metric that requires two images captured at the same moment. The SSIM is
determined by luminance, contrast, and structure. Our classifier is designed to recognise the
faces in these frames.
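The two metrics can be sketched in numpy. For brevity this computes SSIM over a single global window (the standard metric averages SSIM over a sliding window across the image); the constants C1 and C2 follow the usual convention for 8-bit images.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two equally-sized grayscale frames."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    return ((a - b) ** 2).mean()

def global_ssim(a, b, data_range=255.0):
    """Single-window SSIM over whole frames, combining the luminance,
    contrast, and structure terms into the usual two-factor form."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    C1, C2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return (((2 * mu_a * mu_b + C1) * (2 * cov + C2)) /
            ((mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2)))
```

Identical frames give SSIM 1.0 and MSE 0.0; the structural degradation introduced by a poorly trained deep-fake generator pushes the per-frame SSIM down, which is what the similarity score in the results reflects.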
FUTURE ENHANCEMENTS

As future work, we are currently creating front-end DApps for clients to automate the
establishment of proof of authenticity of published videos. We also intend to create a
pluggable DApp component to provide traceability and establish legitimacy when playing or
displaying videos within a web browser. In addition, work is in progress on designing and
implementing a fully functional and operational decentralised reputation system.
REFERENCES
1. Gennie Gebhart, “We’re Halfway to Encrypting the Entire Web,” https:
//www.eff.org/deeplinks/2017/02/were-halfway-encrypting-entire-web, 2017, online; accessed
02-07-2018.
2. When seeing is no longer believing: Inside the Pentagon’s race against deep fake videos.
January 2019. [Online]. Available: http://edition.cnn. com/interactive
/2019/01/business/pentagons-race-against- deep fakes
3. Lawmakers warn of ’deepfake’ videos ahead of 2020 elections. January 28, 2019. [Online].
4. How faking videos became easy and why that’s so scary. [Online]. Available:
http://fortune.com/2018/09/11/deep-fakes-obama-video/
5. Ying Li, "Deep Content: Unveiling Video Streaming Content from Encrypted WiFi
Traffic," IEEE, 2018.
6. B. Gipp, J. Kosti, and C. Breitinger, “Securing video integrity using decentralized trusted time
stamping on the bitcoin blockchain.” in MCIS, pp. 1-10, 2016.
7. N. Bhakt, P. Joshi and P. Dhyani, "A Novel Framework for Real and Fake Smile Detection
from Videos," 2018 Second International Conference on Electronics, Communication and
Aerospace Technology (ICECA), Coimbatore, 2018, pp. 1327-1330.
8. D. Güera and E. J. Delp, "Deepfake Video Detection Using Recurrent Neural Networks,"
2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance
(AVSS), Auckland, New Zealand, 2018, pp. 1-6.
9. Y. Li, M. Chang and S. Lyu, "In Ictu Oculi: Exposing AI Created Fake Videos by Detecting
Eye Blinking," 2018 IEEE International Workshop on Information Forensics and Security
(WIFS), Hong Kong, Hong Kong, 2018, pp. 1-7.
10. H. R. Hasan and K. Salah, "Combating Deepfake Videos Using Blockchain and Smart
Contracts," in IEEE Access, vol. 7, pp. 41596-41606, 2019.
11. S. Rana, S. Gaj, A. Sur and P. K. Bora, "Detection of fake 3D video using CNN," 2016 IEEE
18th International Workshop on Multimedia Signal Processing (MMSP), Montreal, QC, 2016,
pp. 1-5.
12. Luciano Floridi, "Artificial Intelligence, Deepfakes and a Future of Ectypes," Springer,
pp. 317-321, 2018.
13. D. Afchar, V. Nozick, J. Yamagishi and I. Echizen, "MesoNet: a Compact Facial Video
Forgery Detection Network," 2018 IEEE International Workshop on Information Forensics and
Security (WIFS), Hong Kong, Hong Kong, 2018, pp. 1-7.
14. S. Bian, W. Luo and J. Huang, "Exposing fake bitrate video and its original bitrate," 2013
IEEE International Conference on Image Processing, Melbourne, VIC, 2013, pp. 4492-4496.
15. S. Bian, W. Luo and J. Huang, "Detecting video frame-rate up-conversion based on
periodic properties of inter-frame similarity," Multimedia Tools and Applications, 2014.
