2021 6th International Conference for Convergence in Technology (I2CT)

Pune, India. Apr 02-04, 2021

Maintaining Privacy in Medical Imaging with Federated Learning, Deep Learning, Differential Privacy, and Encrypted Computation
Unnati Shah
Dept. of Computer Engineering, Shah and Anchor Kutchhi Engineering College, Mumbai, India.
unnati.shah@sakec.ac.in

Ishita Dave
Dept. of Computer Engineering, Shah and Anchor Kutchhi Engineering College, Mumbai, India.
ishita.dave@sakec.ac.in

Jeel Malde
Dept. of Electronic Engineering, Shah and Anchor Kutchhi Engineering College, Mumbai, India.
jeel.malde_19@sakec.ac.in

Jalpa Mehta
Dept. of Information Technology, Shah and Anchor Kutchhi Engineering College, Mumbai, India.
jalpa.mehta@sakec.ac.in

Srikanth Kodeboyina
Blue Eye Soft Corp, Greenville, United States.
sri@blueyesoft.com

Abstract— The availability of datasets for algorithm training and evaluation is currently hampered by medical data privacy regulations. The lack of structured electronic medical records and stringent legal criteria have made it difficult for patient data to be collected and shared in a consolidated data lake. This presents difficulties for training algorithms such as convolutional neural networks, which often require vast numbers of training examples. To avoid compromising patient privacy while encouraging clinical studies on broad datasets aimed at enhancing patient care, it is mandatory to incorporate technological solutions that meet data security and usage criteria at the same time. We present an outline of current and cutting-edge techniques for secure and privacy-preserving artificial intelligence, with an emphasis on medical imaging applications and potential opportunities.

Keywords— Deep learning, Differential privacy, Federated learning, Medical data, Secure and Privacy-preserving AI.

This work was supported in part by Blue Eye Soft Corp.

I. INTRODUCTION

Necessary privacy protections in medical imaging limit us from fully exploiting the benefits of AI in our studies. Fortunately, three innovative technologies with immense potential for the future of machine learning in healthcare have been developed, shared with other sectors constrained by private-data regulation: federated learning, differential privacy, and encrypted computation. With these new data security strategies, we can train our models on encrypted data from a range of organizations and hospitals without sharing the patient data. Recently, contributions from scientists at Google, DeepMind, Apple, and many others have made it easier for researchers to adopt these techniques.

Progress in segmentation, image analysis, texture analysis, and even image reconstruction from sensor-domain data shows that machine learning is an excellent method when implemented correctly. It is a tool that not only changes the way our images are analyzed but also changes the way we collect and reconstruct them, enabling us to acquire simple, high-quality images more quickly. Given the impressive progress in research on computer vision, it seems that we are just beginning. Deep neural networks have shown the most promising results for the segmentation, analysis, and generation of images, such as the human faces created by generative adversarial networks (GANs).

Such techniques require a lot of data for training, yet in healthcare we are often in data-poor settings with low sample sizes. Although millions of images may be used for training in a typical object recognition project, our medical imaging datasets are on the order of hundreds of subjects. Medical imaging researchers have tackled this by collecting or generating massive high-quality datasets like the UK Biobank, which is fantastic, but even the completed Biobank will not contain 14 million subjects.

As researchers and clinicians, we know why we do not have more training images available: our images contain incredibly delicate and private material. It is not because we do not acquire enough; while diagnostic images are costly to obtain, health centers around the world generate millions of scans annually, both for direct treatment and for study. The images are acquired, but they are not (and should not be) freely accessible for research. Access to this data, even inside organizations, is understandably extremely limited thanks to essential regulations on patient privacy. Protecting the privacy of this data is also increasingly important: true anonymization of data is not easy, as it is not clear which information machines can learn from apparently harmless data. For example, from some medical images it is possible to infer the age and sex of a patient, and we have seen that multiple anonymized datasets can in some instances be combined to deanonymize them. Such privacy considerations are important and necessary, but they have prevented us from fully exploiting the benefits of artificial intelligence in our studies.

Fortunately, healthcare is not the only domain that needs to apply deep learning to private data, and certain techniques will have a huge effect on machine learning's future. These advances in the protection of privacy will allow us to train our models on data from multiple institutions, hospitals, and

clinics without sharing patient information. They enable the use of information to be decoupled from data governance (or control). In other words, we no longer have to request a copy of a dataset in order to use it in our statistical analysis. In a meta-analysis of brain MRIs, the UK Biobank has recently been combined with many other datasets, and together with Intel and 19 other universities, the University of Pennsylvania is currently leading the first real-world case of federated learning for medical use.

These tools will make it easy for imaging scientists, without being privacy experts themselves, to train models safely while protecting patient privacy. Our medical imaging community must become aware of these new possibilities. These innovations could encourage new organizational partnerships, allow meta-analyses previously considered impossible, and enable rapid improvements to our current AI models as we train them on more data. Not only would this give us more training data, it would also give us more diverse training data. If we can train on data from other organizations around the world, we can adequately diversify our datasets to ensure that our studies fit the world's population better. Current voluntary datasets, for instance, may frequently include an unreasonable number of young university students, resulting in training data that is not representative of our patient populations. We have been in a deep learning revolution in medical imaging since 2016; as of 2020, medical imaging researchers are able to implement reliable, safe, privacy-preserving AI more easily than ever.

II. PILLARS OF SECURE AND PRIVATE AI

A. Training Data Privacy

This pillar prevents a malicious attacker from reverse-engineering the training data behind any medical records. Medical training data are used for training and developing machine learning algorithms, and they determine the efficiency and accuracy of the algorithms used in AI-based applications.

B. Input Privacy

The assurance that other people, including the model designer, cannot observe a user's input data. For healthcare records, there must be no mishandling of patient data.

C. Output Privacy

The assurance that a model's output is not accessible to anyone other than the user whose data is being inferred upon. In healthcare, results must be available to the patient and to no other parties.

D. Model Privacy

The guarantee that even a malicious party cannot misuse or steal the model. The model stored in the hospital's system must be protected so that there is no avenue for a leak from within.

E. Reproducibility

Machine learning algorithms designed to characterize, track, and intervene in human health (ML4H) are expected to work safely and efficiently while operating at scale, potentially beyond strict human supervision. This criterion requires greater attention to issues of reproducibility than in other fields of machine learning.

F. Accountability

Reliable dashboards and metrics are needed for accountability, to help guide policymakers and to control AI/ML systems. Accountability ties reproducibility and transparency together, but it needs effective oversight in the form of AI ethics committees. Such committees will manage policy choices, decide what is important to measure, and conduct fairness evaluations.

Developing these pillars requires a deliberate strategy. First, it demands adequate governance with a forward-looking perspective. Second, there must be enough data and technology in place to train, prepare, and develop models. After setting up the technology and data, one needs to work on training people and planning the organizational support they need to confidently contribute to, oversee, and work alongside AI applications.

III. METHODS

A. Anonymization and Pseudonymization

The most common techniques for privacy protection of clinical information are to delete personal data from the records or to replace sensitive data with artificially generated values that can be reassigned later. In medical imaging, anonymization involves deleting all applicable Digital Imaging and Communications in Medicine (DICOM) [15] metadata entries (such as patient name, sex, and the like). For pseudonymization, the actual entries are substituted with synthetic data, and the look-up table is kept safe separately [9]. Simplicity is the principal advantage of both methods. Anonymization software is built into most clinical data archiving systems, making it the simplest approach in practice. Pseudonymization adds complexity, since it involves data manipulation rather than mere erasure, as well as securing the look-up tables needed to reverse the operation. Moreover, technical errors can render the protection ineffective and leave an entire dataset potentially identifiable.

De-identification procedures are typically used to prepare data for transfer or sharing. This raises problems if the patient withdraws their consent, since data administration is uncoupled from data ownership, or if legislation changes. Finally, the requirements of the de-identification process depend on the image dataset: a leg X-ray is harder to link back to an individual than a computed tomography scan of the head, which allows the shape of the face to be reconstructed directly from the image. As a result, de-identification by naive anonymization or pseudonymization alone must be regarded as a theoretically insufficient safeguard against identity inference [13].
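As a concrete sketch of these two variants (ours, not from the paper), the snippet below uses the open-source pydicom library to blank direct identifiers and to pseudonymize the patient ID against a separately stored look-up table. The tag selection and file names are illustrative, not a complete de-identification profile.

```python
# Illustrative DICOM anonymization/pseudonymization with pydicom.
import uuid
import pydicom

def pseudonymize(path_in, path_out, lookup):
    ds = pydicom.dcmread(path_in)
    # Pseudonymization: replace the identifier and remember the mapping
    # in a look-up table that must be stored separately and securely.
    pseudo_id = lookup.setdefault(str(ds.PatientID), uuid.uuid4().hex[:8])
    ds.PatientName = f"PSEUDO^{pseudo_id}"
    ds.PatientID = pseudo_id
    # Anonymization: delete entries outright.
    ds.PatientBirthDate = ""
    ds.PatientSex = ""
    ds.remove_private_tags()          # vendor-specific tags often leak identity
    ds.save_as(path_out)

lookup_table = {}                     # losing control of this table turns
pseudonymize("scan.dcm", "scan_deid.dcm", lookup_table)  # pseudonyms back into names
```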
B. Differential Privacy

Differential Privacy guarantees that privacy is not violated by statistical analysis. It ensures that the impact of any single person's data on the overall output of the model is minimal. Differential privacy is often described as a technique for preserving a dataset's global statistical distribution while removing personally identifiable information.

Intuitively, a dataset is differentially private if a third party cannot tell whether a single individual's data was used to derive a conclusion from it. For example, a connection between obesity and heart disease may be established without knowing any particular patient's body mass index (BMI). Differential privacy resists re-identification attacks under a variety of interactions with the dataset, such as linkage or set separation. It typically operates by incorporating statistical noise at the model's input level (local DP) or output level (global DP). More noise means that individual contributions remain hidden while we still gain insights about the general population, without jeopardizing privacy. The amount of noise added depends on a parameter called epsilon (ε): the smaller the epsilon value, the more noise is applied and the greater the privacy offered, and vice versa. As a result, choosing the right epsilon (ε) value is crucial in Differential Privacy.
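As a toy illustration of this trade-off (our sketch, not part of the original paper), the Laplace mechanism below privatizes a simple patient count. A counting query has sensitivity 1, since adding or removing one patient changes the result by at most 1, so Laplace noise with scale 1/ε suffices, and shrinking ε visibly widens the noise.

```python
import numpy as np

def private_count(records, epsilon):
    """Release a patient count under epsilon-differential privacy."""
    # Sensitivity of a counting query is 1, hence scale = 1 / epsilon.
    return len(records) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

cohort = range(4200)                     # stand-in for 4,200 patient records
for eps in (10.0, 1.0, 0.1):             # smaller epsilon -> more noise, more privacy
    print(eps, round(private_count(cohort, eps)))
```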
Differential Privacy ensures that the data of the participating patient is kept private, which makes it attractive for healthcare applications. However, when it comes to image data, the Differential Privacy methodology raises some difficulties. Chief among them is the perturbation of the dataset itself. Data manipulation degrades the data, which can prove fatal to algorithm reliability in a field that already has access to comparatively limited data, such as medical imaging analysis. The method also poses difficulties concerning plausibility checking and explaining the process to patients, i.e., data legibility [23], during the development and implementation of algorithms, and it increases the need for a statistician to assess how representative the perturbed data remain. Above all, the particulars of applying Differential Privacy to imaging data are unclear. Tabular information can be perturbed easily, but image perturbation can have unexpected effects: analysis shows that this form of manipulation (e.g., adversarial noise) can serve both as an attack on an algorithm and as a regularization method that increases robustness and tolerance against inversion attacks. Therefore, further research is needed before Differential Privacy is deployed in medical imaging [13][23].
C. Federated Learning

Federated learning is a technique for training machine learning models on data to which we do not have access [13]. Conventionally, data is collected, processed into a dataset, and taken to a central server, where a model is trained on it to produce predictive output. Federated learning instead takes the algorithm to the data and carries only the result back to the central server. This implies that users are never asked to upload their individual information. Federated Learning also supports predictive maintenance: according to the outcomes aggregated at the central server, it allows a forecast of when a system will need maintenance. In the healthcare domain, federated learning on devices would allow training a machine learning model that helps patients improve certain aspects of their health without having to upload their data to a central cloud. Federated learning entails training on a wide corpus of high-quality decentralized data distributed across several client devices. Since the model is trained on client machines, no data from the user needs to be submitted, and keeping the client's personal data on their own device gives them clear and physical control over their information.

Formally, privacy rules, legal constraints, hospital policy, and user discomfort make it impossible to aggregate the data in one place to train a machine learning model; in the medical domain, federated learning solves this problem by enabling prediction without requiring users or institutions to share their data. Federated Learning improves on conventional machine learning not only through its stable and private model, but also because it can decrease the cost of uploading datasets to the cloud by letting training take place locally on the devices. PySyft is a federated learning toolkit that extends the core deep learning toolkits: built on the popular PyTorch ecosystem, it lets a central server manage many devices and provides the connecting interface between the devices and the central server.

Federated Learning enables smarter models, lower latency, and lower resource use, all while retaining confidentiality. And this approach has another immediate advantage: in addition to providing an update to the shared model, the upgraded model on your own device can be used immediately, empowering interactions customized to the way you use the phone.

Fig. 1. The procedure for Federated Learning

The Federated Learning procedure operates as follows. First, the central server chooses a mathematical model to be trained. Then the central server transmits the initial model to several nodes. Next, the nodes train the model locally on their own data. Finally, the central server pools the results and produces one global model, without any raw data ever being accessed.
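The round just described can be condensed into a few lines of PyTorch. This is a minimal FedAvg-style sketch of ours — assuming a purely float-parameter model (e.g., a small MLP), equally weighted nodes, and in-memory "transport" — not the PySyft API; production systems add secure aggregation, sample-count weighting, and real networking.

```python
import copy
import torch

def federated_round(global_model, node_loaders, lr=0.01):
    """One round: server sends the model out, nodes train locally, server averages."""
    local_states = []
    for loader in node_loaders:                    # each "node" is, say, a hospital
        local = copy.deepcopy(global_model)        # node receives the current global model
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        local.train()
        for x, y in loader:                        # raw data never leaves the node
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
        local_states.append(local.state_dict())    # only weights travel back
    # Server aggregation: plain parameter average (FedAvg with equal weights).
    averaged = {k: torch.stack([s[k] for s in local_states]).mean(dim=0)
                for k in local_states[0]}
    global_model.load_state_dict(averaged)
    return global_model
```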
D. Homomorphic Encryption

Homomorphic encryption (HE) is a form of encryption that permits computation on encrypted information as though it were plaintext. Homomorphism is a mathematical principle stating that the composition of a computation is preserved under the encryption. Since such schemes support only a few mathematical operations, typically addition and multiplication, they cannot be combined with traditional encryption algorithms like the Advanced Encryption Standard (AES). However, homomorphic schemes have been successfully applied to convolutional neural networks, and their benefits can be exploited in a "machine learning as a service" setting in which data is sent across the network to be processed in a cloud. The scheme can be used to encrypt and decrypt medical images, realizing the benefit of homomorphic encryption in effectively protecting the original data. When a model has a sole owner, homomorphic encryption also allows the owner to encrypt the model so that untrustworthy third parties can neither train it nor use it without stealing it.
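To make the additive property concrete, here is a toy Paillier cryptosystem in pure Python — our illustration with deliberately tiny primes; real deployments use vetted libraries and moduli of 2048 bits or more. Multiplying two ciphertexts yields an encryption of the sum of the plaintexts.

```python
# Toy Paillier cryptosystem (additively homomorphic). Requires Python 3.8+
# for pow(x, -1, m). Educational only: keys this small are trivially broken.
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=2741, q=3331):                 # toy primes; never use in practice
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                    # valid because we fix g = n + 1
    return (n,), (lam, mu, n)

def encrypt(pk, m):
    (n,) = pk
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(1 + n, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c):
    lam, mu, n = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n          # L(x) = (x - 1) / n, then * mu mod n

pk, sk = keygen()
a, b = encrypt(pk, 120), encrypt(pk, 37)
# Ciphertext multiplication decrypts to plaintext addition: 120 + 37 = 157.
assert decrypt(sk, (a * b) % (pk[0] ** 2)) == 157
```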

Fig. 2. Homomorphic Framework

The patient's evidence, such as documents and X-ray images, is collected first. After that, the images are saved in the standard DICOM format. In a third stage, the picture is transformed into a matrix so that the encryption process can be applied to this data, and the matrix elements are fed one by one into the proposed formula. The key generation procedure is carried out in the next phase: the public and private keys are created based on the homomorphic property. The encryption operation is then performed using these parameters, and the encrypted image can be uploaded to the server. For retrieval, the image is again turned into a matrix, the matrix is passed element by element through the decryption formula, and the picture is regenerated from the decrypted matrix. The original picture can then be examined. These encrypted files can be used by both researchers and pharmacists.
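Sticking with the toy Paillier scheme sketched above (its keygen/encrypt/decrypt functions are assumed in scope), the element-by-element flow of Fig. 2 reduces to encrypting and decrypting each matrix entry; a hypothetical miniature version:

```python
import numpy as np

pk, sk = keygen()                                   # from the toy Paillier sketch
image = np.random.randint(0, 256, size=(4, 4))      # stand-in for one image matrix

# Encrypt entry by entry; this nested list is what would be uploaded.
cipher = [[encrypt(pk, int(px)) for px in row] for row in image]
# Decrypt entry by entry and rebuild the matrix.
restored = np.array([[decrypt(sk, c) for c in row] for row in cipher])
assert (restored == image).all()                    # the original picture is recovered
```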
E. Secure Multi-Party Computation

Secure Multi-Party Computation (SMPC) is a generic cryptographic protocol for manipulating encrypted data shares and transmitting them among parties in such a way that no single party can disclose all of the private data. The computed result may be announced without any individual ever having seen the data itself, and it can only be retrieved by consent. Secure computing helps data scientists and analysts to compute safely and privately on distributed data without ever revealing or transferring it. Consider, for example, the problem of comparing a person's DNA against a database of cancer patients' DNA to find out whether the person falls into a high-risk category for a certain form of cancer. An individual's DNA details are highly sensitive and should not be exposed to private organizations. By using a robust and reliable multi-party computation protocol that reveals only which cancer category the individual's DNA is close to, the problem can be solved: secure multi-party computation guarantees that only the cancer category, and nothing else about the person's DNA, is disclosed. SMPC research has recently advanced due to its ability to enable sensitive sharing in semi-trusted and low-trust settings. In the context of genetic profiling and diagnostics, SMPC has been used without disclosing the patient's genetic data. In the medical imaging domain, SMPC can be used to run experiments on datasets entirely in the encrypted domain without affecting the results. SMPC will thus help to maximize the usable volume of available sensitive data without revealing client identity or risking data leakage. There are certain SMPC limitations, for example, the requirements for ongoing data transmission among the parties and their continuous online availability [13][25][26].
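A common SMPC building block is additive secret sharing, shown below in a small sketch of ours: three hypothetical hospitals learn their joint patient count while no party ever sees another's input, because any subset of fewer than all shares is statistically independent of the secret.

```python
import random

Q = 2**61 - 1          # arithmetic is done modulo a large prime

def share(secret, n_parties=3):
    """Split a value into n additive shares; any n-1 of them reveal nothing."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Three hospitals compute a joint patient count without revealing their own.
counts = [1200, 850, 2300]
all_shares = [share(c) for c in counts]
# Each party locally sums the shares it holds (one from every hospital)...
partial_sums = [sum(col) % Q for col in zip(*all_shares)]
# ...and only the combined result is ever reconstructed.
assert reconstruct(partial_sums) == sum(counts)
```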
F. Encrypted Deep Learning

Deep learning systems are now commonly used in many applications, including facial recognition and medical diagnosis. However, these systems are subject to privacy attacks that can jeopardize data security, which may be dangerous and unethical in sensitive use cases such as medical diagnosis. Because they are distinctive, the biological characteristics of the human body are widely used in identity recognition and other fields. The objective information stored in an image is extracted through feature extraction to perform data processing according to the client's requirements. The principal disadvantage of traditional machine learning is that many parameters must be set manually during the learning process. This flaw is especially noticeable when handling large volumes of high-dimensional, complex data. Machine learning therefore relies on deep learning technology, which uses an artificial neural network to extract data attributes. The original information is represented as a feature vector through feature learning, which is fed into a sub-learning system, and the sample is then classified or detected. Feature learning is a technique for characterizing and grading input samples.

Deep learning employs a nonlinear neural network model to transform raw data into high-level conceptual representations via nonlinear transformations, and it is based on an artificial neural network architecture [6]. Complex functions are constructed through the composition of several linear and nonlinear transformations. The idea behind sample classification is to improve the ability to discriminate among inputs while attenuating irrelevant factors, as illustrated by the high-level representation of raw information: the picture is a multi-pixel matrix. The features of the edges and positions in the picture are captured by the first layer of the model. Building on the features extracted by the first layer, the second layer captures patterns such as edge combinations while noise factors are ignored. The third layer can further combine and consolidate the detected patterns to represent the parts that most clearly support recognition or classification of the target [8].

Image encryption has high security requirements. Traditional image encryption technology, which implements a feature extraction technique based on a machine learning algorithm, has a range of disadvantages, including image pre-processing concerns, demands on image quality, and the need to repeat image acquisition during the decryption cycle to achieve proper decryption.
We say that the data is continuously encrypted during the whole process of private prediction with deep learning: at no point does the user exchange raw data, only encrypted (that is, secret-shared) data. Syft Keras uses a library called TF Encrypted under the hood to provide these private predictions. TF Encrypted combines cutting-edge cryptographic and machine learning techniques, yet you do not have to think about the cryptography and can concentrate on the machine learning program itself.
Patient-related clinical statistics are often available inside a facility but not to an outside researcher who wants to identify correlations in the data, for instance to understand the development of cancer; privacy concerns can thus limit the availability of data. Data owners may also lack the skills to build deep learning models on their own and reap the benefits of their data. Encrypted deep learning predicts on encrypted data while also keeping the model used for prediction encrypted.
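As a minimal sketch of the secret-sharing idea behind such private predictions (ours, not the Syft Keras API): the patient splits their feature vector into two additive shares, one per non-colluding server, and a public linear model is evaluated on the shares; only the patient, who receives both partial results, can recombine the answer. Multiplying two private values, as full neural networks require, needs an extra interactive step (e.g., Beaver triples) that frameworks like TF Encrypted handle.

```python
import numpy as np

Q = 2**31 - 1                                     # toy modulus; fixed-point encoding omitted

def share(x):
    r = np.random.randint(0, Q, size=x.shape, dtype=np.int64)
    return r, (x - r) % Q                         # one additive share per server

w = np.array([3, 1, 4], dtype=np.int64)           # linear "model", known to both servers
x = np.array([2, 7, 1], dtype=np.int64)           # patient features, never revealed

x0, x1 = share(x)                                 # patient sends one share to each server
y0 = np.dot(w, x0) % Q                            # server 0 computes on its share only
y1 = np.dot(w, x1) % Q                            # server 1 likewise
# Only the patient, holding both partial results, recovers 3*2 + 1*7 + 4*1 = 17.
assert (y0 + y1) % Q == np.dot(w, x) % Q
```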
With its outstanding learning capability, deep learning has solved many previously unresolved problems in artificial intelligence in recent years and has progressed exponentially. Deep learning is well suited to image encryption because it has high learning power, can manage massive volumes of data, extracts key features correctly, and meets the high security criteria of image encryption.
IV. SECURE IMPLEMENTATION

Medical imaging has arguably seen some of the most significant developments in AI technologies, owing to parallel improvements in machine vision. Security and privacy concerns, however, are not limited to medical imaging [27], as seen, for example, in the 2019/2020 SARS-CoV-2 pandemic, which ignited global concern about the effects of large-scale automated contact detection and movement tracking, setting political, ethical, and legal precedents and creating a need for technological implementations that are secure and privacy-protecting. Encryption provides a theoretical/mathematical guarantee of privacy. However, there are also hardware-level privacy protections, for instance in the form of protected processors or enclaves implemented in mobile devices [28]. For example, federated learning workflows can preserve data and algorithm privacy even if the operating system kernel is compromised. As hardware-level deep learning accelerators, tensor processing units, and AI-specific instruction sets become more significant, such hardware-based security will likely ensure that trusted execution environments become more prevalent, integrated into edge hardware such as phones.
V. CONCLUSION AND FUTURE WORK

The e-healthcare system is an emerging and fast-developing system for the protection of personal health records, and several problems in the privacy and security of medical data still need to be addressed in the current scenario. Effective services are required to achieve proper incoming data storage, index maintenance, and access control. Furthermore, finding an acceptable set of sanitization attributes will reduce side effects, especially when sensitive information overlaps with non-sensitive information. Medical records need to be held in big data storage in the healthcare cloud to enable mobility and easy access for both patients and health professionals. A major goal that researchers pursue in maintaining medical data privacy is to build a reliable system that overcomes the drawbacks of computational time and expense when encrypting and decrypting data. However, a great deal of work remains to be done to protect the privacy of health care data and provide improved data protection. This review paper offers an overview of health care data privacy preservation and examines current methodologies by their advantages, restrictions, and performance measurement. It will help readers appreciate the state of the art in protecting the privacy of medical data.

ACKNOWLEDGMENT

We thank Jalpa Mehta, Assistant Professor, Shah and Anchor Kutchhi Engineering College, and Srikanth Kodeboyina, CEO, Blue Eye Soft Corp, who provided insight and expertise that greatly assisted this research, although they may not agree with all of the interpretations/conclusions of this paper.

REFERENCES

[1] A. Vizitiu, C. I. Niţă, A. Puiu, C. Suciu and L. M. Itu, "Towards Privacy-Preserving Deep Learning-based Medical Imaging Applications," 2019 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Istanbul, Turkey, 2019, pp. 1-6, doi: 10.1109/MeMeA.2019.8802193.
[2] M. Coavoux, S. Narayan, and S. B. Cohen, "Privacy-preserving neural representations of text," arXiv preprint arXiv:1808.09408, 2018.
[3] L. J. M. Aslett, P. M. Esperança, and C. C. Holmes, "Encrypted statistical machine learning: new privacy preserving methods," arXiv preprint arXiv:1508.06845, 2015.
[4] T. Graepel et al., "Machine Learning on Encrypted Data," ICISC 2012, LNCS 7839, 2012.
[5] E. Hesamifard, H. Takabi, and M. Ghasemi, "CryptoDL: Deep neural networks over encrypted data," arXiv preprint arXiv:1711.05189, 2017.
[6] E. Hesamifard et al., "Privacy-preserving machine learning as a service," Proceedings on Privacy Enhancing Technologies, 2018.
[7] R. Gilad-Bachrach et al., "CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy," International Conference on Machine Learning, 2016.
[8] H. Surendra and H. S. Mohan, "A review of synthetic data generation methods for privacy preserving data publishing," Int. J. Sci. Technol. Res. 6, 95–101, 2017.
[9] P. Mohassel and Y. Zhang, "SecureML: A system for scalable privacy-preserving machine learning," 2017 IEEE Symposium on Security and Privacy (SP), 2017.
[10] K. Bonawitz et al., "Practical secure aggregation for privacy-preserving machine learning," Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017.
[11] Y. Wang, C. Si, and X. Wu, "Regression model fitting under differential privacy and model inversion attack," Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
[12] P. Lambin et al., "Radiomics: the bridge between medical imaging and personalized medicine," Nat. Rev. Clin. Oncol. 14, 749–762, 2017.
[13] G. A. Kaissis, M. R. Makowski, D. Rückert et al., "Secure, privacy-preserving and federated machine learning in medical imaging," Nat. Mach. Intell. 2, 305–311, 2020.
[14] M. Abadi et al., "Deep Learning with Differential Privacy," SIGSAC Conference on Computer and Communications Security, pp. 308–318, 2016.
[15] DICOM reference guide. Health Dev. 30, 5–30, 2001.
[16] M. Al-Rubaie and J. M. Chang, "Privacy-preserving machine learning: threats and solutions," IEEE Secur. Priv. 17, 49–58, 2019.
[17] W. N. Price and I. G. Cohen, "Privacy in the age of medical big data," Nat. Med. 25, 37–43, 2019.
[18] HIPAA. US Department of Health and Human Services. https://www.hhs.gov/hipaa/index.html (2020).
[19] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, "Membership inference attacks against machine learning models," in Proc. 38th IEEE Symp. Security and Privacy, https://doi.org/10.1109/SP.2017.41 (IEEE, 2017).
[20] J. Konečný et al., "Federated learning: strategies for improving communication efficiency," preprint at https://arxiv.org/abs/1610.05492 (2016).
[21] N. Rieke et al., "The future of digital health with federated learning," preprint at https://arxiv.org/abs/2003.08119 (2020).
[22] R. Miotto, F. Wang, S. Wang, X. Jiang, and J. T. Dudley, "Deep learning for healthcare: review, opportunities and challenges," Briefings in Bioinformatics, vol. 19, no. 6, pp. 1236–1246, May 2017. [Online]. Available: https://dx.doi.org/10.1093/bib/bbx044
[23] Maintaining Privacy in Medical Data with Differential Privacy. Available at: https://blog.openmined.org/maintaining-privacy-in-medical-data-with-differential-privacy/

[24] A. M. Vengadapurvaja, G. Nisha, R. Aarthy, and N. Sasikaladevi, "An Efficient Homomorphic Medical Image Encryption Algorithm for Cloud Storage Security," 7th International Conference on Advances in Computing & Communications (ICACC-2017), 22-24 August 2017, Cochin, India. Available at: www.sciencedirect.com
[25] What is Secure Multiparty Computation (MPC)? Available at: https://www.unboundtech.com/blog/secure-multiparty-computation-mpc/
[26] What is Secure Multiparty Computation? Available at: https://www.inpher.io/technology/what-is-secure-multiparty-computation
[27] A. Qayyum, J. Qadir, M. Bilal, and A. Al-Fuqaha, "Secure and robust machine learning for healthcare: a survey," preprint at https://arxiv.org/abs/2001.08103 (2020).
[28] Apple Platform Security. https://support.apple.com/guide/security/secure-enclave-overview-sec59b0b31f/web (2020).
