
UNIVERSITY INSTITUTE OF TECHNOLOGY OF
INDUSTRIAL ADMINISTRATION
CAPITAL REGION

COMPUTERWORLD

TEACHER: HENRY RIVERO SECTION: 282 A1

STUDENT: De Ponte Génesis SUBJECT: English

2021
Experts call Apple's CSAM scheme 'a dangerous technology'
In a new report, an influential group of 14
internationally renowned security researchers has
said such plans represent a “dangerous technology”
that expands state surveillance powers. They warn
that the client-side scanning (CSS) system, if used, “would be
much more privacy invasive than previous
proposals to weaken encryption. Rather than
reading the content of encrypted communications,
CSS gives law enforcement the ability to remotely
search not just communications, but information
stored on user devices.”

They join a chorus of civil liberties campaigners,
privacy advocates, and tech industry critics who
have already warned that the plans threaten basic
human rights.

But first, we need to understand what this "new system" actually is:


What is it?
This system will allow Apple to detect and report
iCloud users who store CSAM content. It only applies
when the user has enabled the storage of images in
iCloud. If that option is enabled, these are the steps
Apple will perform (a toy sketch of the hashing idea
follows this list):
• Before uploading an image to iCloud, Apple
will generate a unique identifier for it, which
should not change even if the image undergoes
size or quality modifications.
• On the user's device, Apple will compare that
identifier against a list of identifiers of
already known CSAM images.
• Apple will upload the image to the cloud
together with its identifier, a visual derivative
of it, and the result of the comparison, all of
it encrypted.
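To make the "unique identifier" idea concrete, below is a minimal sketch of a classic perceptual hash (an "average hash") in Python, using the Pillow library. Apple's actual system uses a neural-network-based hash called NeuralHash; this toy version, with hypothetical file names, only illustrates why such an identifier can survive resizing or recompression.

```python
# Minimal perceptual-hash sketch (aHash), NOT Apple's NeuralHash:
# visually similar images should give identical or near-identical hashes.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at `path`."""
    # Shrink to an 8x8 grayscale thumbnail so size/quality changes wash out.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the thumbnail mean.
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical usage: a photo and its resized copy hash (almost) the same.
# h1 = average_hash("photo.jpg")
# h2 = average_hash("photo_resized.jpg")
# print(hamming(h1, h2))  # small (often 0) for visually identical images
```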
Apple indicates that the risk of false positives is extremely low ("1 in 1 trillion").
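That figure refers to the chance of incorrectly flagging a given account per year, partly because an account is only flagged after a threshold number of matches (Apple has mentioned a threshold on the order of 30 images). The sketch below uses made-up numbers, not Apple's real parameters, just to show how a threshold drives the account-level false-positive probability down.

```python
# Back-of-the-envelope sketch: why a match threshold matters.
# p and n below are illustrative assumptions, not Apple's parameters.
from math import comb

def prob_flagged(n: int, p: float, t: int) -> float:
    """P(at least t false matches among n photos), via the binomial complement."""
    tail = 1 - sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(t))
    return max(0.0, tail)  # clamp tiny negative floating-point error

p = 1e-6      # assumed per-photo false-match rate
n = 10_000    # assumed number of photos in one iCloud library
print(prob_flagged(n, p, t=1))   # ~0.01: one false match is quite plausible
print(prob_flagged(n, p, t=30))  # ~0.0: vanishingly small with a threshold
```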
Potential Dangers
As with any technology, even if its initial use is well
intentioned, there is the possibility that it could be
misused or repurposed, as the following examples show:
• If you can produce a collision in the identifier-generation
system, you could frame someone for a crime involving
minors. That is, if an attacker crafts an image that
generates the same "unique" identifier as one of the
known, listed CSAM images, they could cause a serious
problem for the victim. Some algorithms for generating
"unique" identifiers already suffer from collisions; will
the same happen with the system Apple will use? (A toy
illustration follows this list.)
• How do we know this technology will not be used for other
purposes? It could be used to detect political orientations,
voting intentions, images of weapons, or screenshots
with sensitive information, or to enable corporate and
governmental espionage... In short, if any X images can be
matched against any Y list, technically it would be possible
to target arbitrary content. In the following section you can
read expert opinions about the use of this technology.
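To illustrate both bullets, here is a hypothetical continuation of the earlier average-hash sketch. The blocklist value is made up, and the real system hides its comparisons behind cryptographic private set intersection rather than a plain on-device list; the point is only that a near-colliding hash is indistinguishable from a true match, and that nothing in the mechanism restricts what the list contains.

```python
# Toy on-device matching step; the blocklist entry is a made-up number.
# Apple's real design uses private set intersection, not a visible list.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

KNOWN_HASHES = {0x8F3A01C277D019EE}  # hypothetical "known CSAM" hash list

def matches_blocklist(photo_hash: int, max_distance: int = 4) -> bool:
    """True if photo_hash lies within max_distance bits of a listed hash."""
    return any(hamming(photo_hash, h) <= max_distance for h in KNOWN_HASHES)

# A harmless image engineered to hash one bit away from a listed entry is
# flagged exactly like the real thing: the collision/framing risk above.
near_collision = 0x8F3A01C277D019EF
print(matches_blocklist(near_collision))  # True (a false positive)

# And swapping in hashes of, say, political material would make the very
# same code detect that instead: the "X images vs. Y list" concern above.
```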

Warnings from experts
Apple has attempted to characterize the resistance to
its original proposals as little more than messaging
confusion. Apologists have tried to excuse the plans by
arguing that most actions can already be detected on
the Internet anyway (an argument that would equally
undermine the use of online payment systems).
Critics say both excuses appear flawed coming from a
company that prides itself on privacy, particularly in the
absence of an internationally agreed statement of
digital human rights. Many believe these proposals
represent a Pandora's box of horrors leading to
unfettered surveillance and state overreach.
One big problem the latest researchers warn about is
that the plan allows the scanning of a person's devices
"without any probable cause for anything illegitimate to
be done".
One door opens, another one gets opened
But for many users, particularly business users, there
are greater threats lurking. “As most user devices have
vulnerabilities, the surveillance and control capabilities
provided by CSS can potentially be abused by many
adversaries, from hostile state actors through criminals
to users’ intimate partners,” the report warns.

“Moreover, the opacity of mobile operating systems
makes it difficult to verify that CSS policies target only
material whose illegality is uncontested.”

Effectively, once such a system is put in place, it’s only
a matter of time until criminal entities figure out how to
undermine it, extending it to detect valuable personal or
business data, or inserting false positives against
political enemies.
Bibliographical references
• Evans, J. (n.d.). Experts call Apple’s CSAM scheme «a dangerous technology». Computerworld. Retrieved October 27, 2021, from https://www.computerworld.com/article/3637076/experts-call-apples-csam-scheme-a-dangerous-technology.html
• Aparicio, E. S. (2021, August 12). ¿Apple podrá ver nuestras imágenes? Nuevo sistema CSAM Detection [Will Apple be able to see our pictures? The new CSAM Detection system]. Medium. Retrieved October 28, 2021, from https://medium.com/@EnriqueITE/apple-podr%C3%A1-ver-nuestras-fotos-nuevo-sistema-csam-detection-fc3e724bfdbe
