
Deepfake detection tool unveiled by Microsoft
By Leo Kelion
Technology desk editor

Published 10 hours ago

Microsoft has developed a tool to spot deepfakes - computer-manipulated images in
which one person's likeness has been used to replace that of another.
The software analyses photos and videos to give a confidence score about whether the
material is likely to have been artificially created.
The firm says it hopes the tech will help "combat disinformation".
One expert has said it risks becoming quickly outdated because of the pace at which
deepfake tech is advancing.
To address this, Microsoft has also announced a separate system to help content
producers add hidden code to their footage so any subsequent changes can be easily
flagged.
Finding face-swaps
Deepfakes came to prominence in early 2018 after a developer adapted cutting-edge
artificial intelligence techniques to create software that swapped one person's face for
another.
The process worked by feeding a computer lots of still images of one person and video
footage of another. Software then used this to generate a new video featuring the
former's face in the place of the latter's, with matching expressions, lip-synch and other
movements.
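
That original approach is widely described as a pair of autoencoders that share a single encoder, with one decoder per identity. Below is a minimal PyTorch sketch of that idea; the layer sizes, 64x64 face crops and random stand-in tensors are illustrative assumptions, not any particular app's architecture.

```python
# Illustrative sketch of the classic "shared encoder, two decoders" deepfake
# architecture. Real tools add face alignment, GAN losses and blending steps.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared latent representation
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()                          # learns features common to both faces
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: reconstruct each person's own faces through the shared encoder.
# Here faces_a and faces_b are random stand-ins for batches of aligned crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)

# The swap: encode person A's face, decode it with person B's decoder. The
# result shows B's face with A's pose, expression and lip movements.
swapped = decoder_b(encoder(faces_a))
```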
Since then, the process has been simplified - opening it up to more users - and now
requires fewer photos to work.
Some apps require only a single selfie to substitute the user's face for that of a film star within clips from Hollywood movies.
But there are concerns the process can also be abused to create misleading clips, in
which a prominent figure is made to say or act in a way that never happened, for
political or other gain.
Early this year, Facebook banned deepfakes that might mislead users into thinking a
subject had said something they had not. Twitter and TikTok later followed with similar
rules of their own.
Microsoft's Video Authenticator tool works by trying to detect giveaway signs, potentially invisible to the human eye, that an image has been artificially generated.

Image caption: The Video Authenticator tool gives a percentage-based confidence score as to how likely a clip is to be a deepfake
These include subtle fading or greyscale pixels at the boundary where the computer-created version of the target's face has been merged with the original subject's body.
To build it, the firm applied its own machine-learning techniques to a public dataset of
about 1,000 deepfaked video sequences and then tested the resulting model against
an even bigger face-swap database created by Facebook.
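
Microsoft has not published Video Authenticator's internals, so the following is only a sketch of the general shape of such a detector: score each frame with a binary classifier (which, once trained, would learn cues like the blending boundaries described above) and average the frame scores into a single percentage. The tiny untrained network, 64x64 frames and OpenCV-based frame reading are all assumptions for illustration.

```python
# Illustrative sketch of a frame-level deepfake detector that outputs a
# percentage confidence score. Not Microsoft's actual model.
import cv2  # assumed dependency (opencv-python) for reading video frames
import torch
import torch.nn as nn

# Hypothetical binary classifier; a real detector would be far deeper and
# trained on datasets like the ~1,000-clip public set the article mentions.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),  # logit: higher means more likely fake
)

def fake_confidence(path: str) -> float:
    """Return a 0-100 score for how likely the clip is a deepfake."""
    capture = cv2.VideoCapture(path)
    scores = []
    with torch.no_grad():
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            frame = cv2.resize(frame, (64, 64))
            x = torch.from_numpy(frame).float().permute(2, 0, 1) / 255.0
            scores.append(torch.sigmoid(model(x.unsqueeze(0))).item())
    capture.release()
    # Average per-frame fake probabilities into one percentage score.
    return 100.0 * sum(scores) / max(len(scores), 1)
```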
One technology advisor noted that deepfake videos remain relatively rare for now, and
that most manipulated clips involve cruder re-edits done by a human. Even so, she
welcomed Microsoft's intervention.
"The only really widespread use we've seen so far is in non-consensual pornography
against women," commented Nina Schick, author of the book Deep Fakes and the
Infocalypse.
"But synthetic media is expected to become ubiquitous in about three to five years, so
we need to develop these tools going forward.
"However, as detection capabilities get better, so too will the generation capability - it's
never going to be the case that Microsoft can release one tool that can detect all kinds
of video manipulation."
Fingerprinted news
Microsoft has acknowledged this challenge.
In the short term, it said it hoped its existing product might help identify deepfakes
ahead of November's US election.
Rather than release it to the public, however, it is only offering it via a third-party
organisation, which in turn will provide it to news publishers and political campaigns
without charge.
The reason for this is to prevent bad actors getting hold of the code and using it to teach
their deepfake generators how to evade it.
To tackle the longer-term challenge, Microsoft has teamed up with the BBC, among
other media organisations, to support Project Origin, an initiative to "mark" online
content in a way that makes it possible to spot automatically any manipulation of the
material.
The US tech firm will do this via a two-part process.
Firstly, it has created an internet tool to add a digital fingerprint - in the form of
certificates and "hash" values - to the media's metadata.
Secondly, it has created a reader, to check for any evidence that the fingerprints have
been affected by third-party changes to the content.
Microsoft says people will then be able to use the reader in the form of a browser
extension to verify a file is authentic and check who has produced it.
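
The article does not specify Project Origin's exact formats, but the underlying idea - sign a hash of the content at publication, then verify it on viewing - can be sketched briefly. This example assumes an Ed25519 key pair standing in for the publisher's certificate, and "clip.mp4" is a placeholder filename.

```python
# Illustrative sketch of the sign-then-verify idea behind content
# fingerprinting. Project Origin's real format is not specified here.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def fingerprint(path: str) -> bytes:
    """Hash the media file; any change to its bytes changes this value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.digest()

# Publisher side: sign the hash with a private key (standing in for a key
# tied to the outlet's certificate) and ship the signature as metadata.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(fingerprint("clip.mp4"))  # placeholder file

# Reader side (e.g. a browser extension): recompute the hash and check the
# signature. A tampered file or forged signature raises InvalidSignature.
try:
    public_key.verify(signature, fingerprint("clip.mp4"))
    print("Content matches what the publisher signed")
except InvalidSignature:
    print("Content has been altered since publication")
```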

Photo and video manipulation is crucial to the spread of often quite convincing
disinformation on social media.
But right now, complex deepfake technology isn't always necessary. Simple editing is more often than not the favoured option.
That was the case with a recent manipulated video of US presidential candidate Joe Biden, which has been viewed more than two million times on social media.
The clip shows a TV interview during which Biden appeared to be falling asleep. But it was fake - the clip of the host was from a different TV interview and snoring effects had been added.
Computer-generated photos of people's faces, on the other hand, have already become
common hallmarks of sophisticated foreign interference campaigns, used to make fake
accounts appear more authentic.
One thing is for sure: having more ways to spot media that has been manipulated or changed is no bad thing in the fight against online disinformation.
