
A Project Report on

SeSPHR: A Methodology for Secure Sharing of
Personal Health Records in the Cloud
submitted in partial fulfillment of the requirement for the award of the Degree of
BACHELOR OF TECHNOLOGY

by

B.Ajay (18AT1A0506)
P.Ismail (18AT1A0551)
M.Ashok Reddy (18AT1A0521)
P.Janaki Ram Vamshi (18AT1A0552)

Under the Guidance of

M.Sri Raghavendra M.Tech, (Ph.D)


Assistant Professor

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


G. PULLAIAH COLLEGE OF ENGINEERING AND TECHNOLOGY
(Autonomous)
(Approved by AICTE | NAAC Accreditation with ‘A’ Grade | Accredited by NBA (ECE,CSE & EEE) |
Permanently Affiliated to JNTUA)

2018-2022
G.PULLAIAH COLLEGE OF ENGINEERING AND TECHNOLOGY
(Autonomous)
(Approved by AICTE | NAAC Accreditation with ‘A’ Grade | Accredited by NBA (ECE, CSE & EEE) |
Permanently Affiliated to JNTUA)

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE
This is to certify that the project report entitled “SeSPHR: A Methodology for Secure
Sharing of Personal Health Records in the Cloud” being submitted by M.Ashok Reddy
(18AT1A0521), P.Ismail (18AT1A0551), B.Ajay (18AT1A0506) and P.Janaki Ram Vamshi
(18AT1A0552) in partial fulfillment of the requirement for the award of the degree of
Bachelor of Technology in Computer Science and Engineering of G.Pullaiah College of
Engineering and Technology, Kurnool is a record of bonafide work carried out by them under
my guidance and supervision.
The results embodied in this project report have not been submitted to any other university or
institute for the award of any Degree or Diploma.

M. Sri Raghavendra M.Tech., Ph.D M. Sri Lakshmi M.Tech., Ph.D


Project Supervisor Head of the Department

Date of Viva-Voce

INTERNAL EXAMINER EXTERNAL EXAMINER

ACKNOWLEDGEMENT

We are extremely grateful to our Chairman, Sri G.V.M. Mohan Kumar, G.Pullaiah College of
Engineering and Technology, Kurnool, Andhra Pradesh, for his blessings.
We owe our indebtedness to our Principal, Dr. C. Srinivasa Rao, M.E., Ph.D., G.Pullaiah College of
Engineering and Technology, Kurnool, for providing us with the required facilities.
We would like to express our deep sense of gratitude and sincere thanks to our HOD, M. Sri
Lakshmi, M.Tech., Ph.D., Department of Computer Science and Engineering, G.Pullaiah College
of Engineering and Technology, Kurnool, for providing the necessary facilities and
encouragement towards the project work.
We thank our project supervisor, M. Sri Raghavendra, for his guidance, valuable
suggestions, and support in the completion of the project.
We gratefully acknowledge and express our thanks to the teaching and non-teaching staff of the
CSE Department.
We would like to express our love and affection to our parents for their encouragement
throughout this project work.

Project Associates
M.Ashok Reddy (18AT1A0521)
P.Ismail (18AT1A0551)
B.Ajay (18AT1A0506)
P.Janaki Ram Vamshi (18AT1A0552)

ABSTRACT
The widespread acceptance of cloud-based services in the healthcare sector has
resulted in cost-effective and convenient exchange of Personal Health Records (PHRs)
among several participating entities of e-Health systems. Nevertheless, storing
confidential health information on cloud servers is susceptible to disclosure or theft
and calls for the development of methodologies that ensure the privacy of the PHRs.
Therefore, we propose a methodology called SeSPHR for secure sharing of the PHRs
in the cloud. The SeSPHR scheme ensures patient-centric control over the PHRs and
preserves the confidentiality of the PHRs. The patients store the encrypted PHRs on
un-trusted cloud servers and selectively grant access over different portions of the
PHRs to different types of users. A semi-trusted proxy called the Setup and Re-encryption
Server (SRS) is introduced to set up the public/private key pairs and to produce the
re-encryption keys. Moreover, the methodology is secure against insider threats and also
enforces forward and backward access control. Furthermore, we formally analyze
and verify the working of the SeSPHR methodology through High Level Petri Nets
(HLPN). Performance evaluation regarding time consumption indicates that the
SeSPHR methodology has the potential to be employed for securely sharing the PHRs in
the cloud.

Contents
Contents Page no.
CHAPTER 1 INTRODUCTION
1.1 What is Cloud Computing? 1
1.1.1 Types of Cloud Computing 2
1.1.2 Limitations of Traditional Technology 5
1.1.3 Cloud Security 8
1.2 Objective 8
1.3 Scope 8
1.4 Thesis Contribution 9
CHAPTER 2 LITERATURE REVIEW
2.1 Cloud Computing 10
2.2 Security in Cloud 12
CHAPTER 3 SeSPHR MODEL
3.1 System Model 18
3.2 Cloud Maintenance 18
3.3 Encryption Module 19
3.4 Interaction Module 20
3.4.1 Admin Module 20
3.4.2 User Module 21
3.5 Proposed Partitioning Methodology 22
3.6 SeSPHR Services 23
3.7 SeSPHR Data Flow 24
3.8 SeSPHR Sequence Diagram 25
CHAPTER 4 RESULTS ANALYSIS
4.1 Experimental Methodology 27
4.1.1 System Requirements 27
4.1.2 Software Requirements 27
4.1.3 Required data for Analysis 27
4.2 Screenshots 29
CHAPTER 5 CONCLUSION AND FUTURE WORK
5.1 Conclusion 36
5.2 Future Work 36

6 References 38

LIST OF FIGURES AND TABLES
Figures/Tables Page no.

Table 1.1 Cloud Computing vs Traditional Computing 8


Fig 1.1 Thesis Contribution 9
Fig 3.1 Architecture of SeSPHR 18
Fig 3.2 Cloud Maintenance 19
Fig 3.3 Admin Module 21
Fig 3.4 User Module 22
Fig 3.5 Data Flow diagram 25
Fig 3.6 SeSPHR Sequence diagram 26
Fig 4.1 Required Data for Analysis 28

CHAPTER 1
INTRODUCTION

1.1 What is Cloud Computing?


Cloud computing is the delivery of computing services including servers, storage, databases,
networking, software, analytics, and intelligence over the Internet (“the cloud”) to offer faster
innovation, flexible resources, and economies of scale.
Cloud computing has emerged as an important computing paradigm to offer pervasive and
on-demand availability of various resources in the form of hardware, software, infrastructure,
and storage. Consequently, the cloud computing paradigm facilitates organizations by relieving
them from the protracted job of infrastructure development and has encouraged them to rely on
third-party Information Technology (IT) services. Additionally, the cloud computing model
has demonstrated significant potential to increase coordination among several healthcare
stakeholders and also to ensure continuous availability of health information and scalability.
Furthermore, cloud computing also integrates various important entities of the healthcare
domain, such as patients, hospital staff including doctors, nursing staff, pharmacies, and
clinical laboratory personnel, insurance providers, and the service providers. Therefore, the
integration of the aforementioned entities results in the evolution of a cost-effective and
collaborative health ecosystem where the patients can easily create and manage their Personal
Health Records (PHRs).
Advantages
1)Backup and Restore data :
Once the data is stored in the cloud, it is easier to back up and restore that data using the
cloud.
2)Improved Collaboration :
Cloud applications improve collaboration by allowing groups of people to quickly and easily
share information in the cloud via shared storage.
3)Excellent Accessibility :
The cloud allows us to quickly and easily access stored information anywhere in the world, at any
time, using an internet connection. A cloud infrastructure increases organizational productivity
and efficiency by ensuring that our data is always accessible.
4)Low Maintenance cost :
Cloud computing reduces both hardware and software maintenance costs for organizations.



5)Data Security :
Data security is one of the biggest advantages of cloud computing. Cloud offers many advanced
features related to security and ensures that data is securely stored and handled.
6)Unlimited storage capacity :
Cloud offers us a huge amount of storing capacity for storing our important data such as
documents, images, audio, video, etc. in one place.
1.1.1 Types of Cloud Computing
Cloud computing consists of two models:
1) Deployment Model
2) Service model
1)Deployment Model:
A deployment model represents a specific type of cloud environment, primarily distinguished by
ownership, size, and access.
Types of Deployment models :
1)Public Cloud
2)Private cloud
3)Hybrid Cloud
1)Public Cloud :
Public clouds are managed by third parties which provide cloud services over the internet to the
public, these services are available as pay-as-you-go. They offer solutions for minimizing IT
infrastructure costs and become a good option for handling peak loads on the local
infrastructure. Public clouds are the go-to option for small enterprises, which are able to start
their businesses without large upfront investments by completely relying on public infrastructure
for their needs. The fundamental characteristics of public clouds are multitenancy. A public
cloud is meant to serve multiple users, not a single customer. A user requires a virtual
computing environment that is separated, and most likely isolated, from other users.

Advantages :
1)Low cost
2)Location Independent
3)Quick and Easily setup
4)Scalability and Reliability
Examples: AWS, Windows Azure
General example: in travel, buses are public transportation.
2)Private Cloud :
Private clouds are distributed systems that work on private infrastructure and provide the users
with dynamic provisioning of computing resources. Instead of a pay-as-you-go model, private
clouds may use other schemes that manage the usage of the cloud and proportional billing of the
different departments or sections of an enterprise.

Advantages :
1)More Control
2)Security & Privacy
3)Improved Performance
Examples: HP Data Centers, Ubuntu OS
General example: in travel, a personal vehicle is private transportation.
3)Hybrid Cloud :
A hybrid cloud is a heterogeneous distributed system formed by combining facilities of public
cloud and private cloud. For this reason, they are also called heterogeneous clouds.
A major drawback of private deployments is the inability to scale on-demand and efficiently
address peak loads. Here public clouds are needed. Hence, a hybrid cloud takes advantage of
both public and private clouds.

Advantages :
1)Flexible & Secure
2)Cost Effective
3)Security
4)Risk Management
Examples: Amazon, Google, Cisco
General example: in travel, a taxi can be considered hybrid transportation.
2)Service Model :
Based on the type of service provided by the service provider, the service model is divided into
three categories:
1)IAAS (Infrastructure As A Service)
2)PAAS (Platform As A Service)
3)SAAS (Software As A Service)



1)Infrastructure As A Service :
Infrastructure as a Service (IaaS) is a service model that delivers computer infrastructure on an
outsourced basis to support various operations. Typically, IaaS provides outsourced infrastructure
to enterprises, such as networking equipment, devices, databases, and web servers. It is also known
as Hardware as a Service (HaaS). IaaS customers pay on a per-use basis, typically by the hour,
week, or month. Some providers also charge customers based on the amount of virtual machine space
they use. It provides the underlying operating systems, security, networking, and servers for
developing and deploying applications, services, development tools, databases, and so on.

Advantages

o Cost-Effective: Eliminates capital expense and reduces ongoing cost; IaaS customers
pay on a per-use basis, typically by the hour, week, or month.
o Website hosting: Running websites using IaaS can be less expensive than traditional
web hosting.
o Security: The IaaS Cloud Provider may provide better security than your existing
software.
o Maintenance: There is no need to manage the underlying data center or the introduction
of new releases of the development or underlying software. This is all handled by the
IaaS Cloud Provider.
Example: AWS
2)Platform As A Service :
PaaS is a category of cloud computing that provides a platform and environment to allow
developers to build applications and services over the internet. PaaS services are hosted in the
cloud and accessed by users simply via their web browser. A PaaS provider hosts the hardware
and software on its own infrastructure. As a result, PaaS frees users from having to install in-
house hardware and software to develop or run a new application. Thus, the development and
deployment of the application take place independent of the hardware. The consumer does not
manage or control the underlying cloud infrastructure including network, servers, operating
systems, or storage, but has control over the deployed applications and possibly configuration
settings for the application-hosting environment. To make it simple, take the example of an
annual day function: you have two options, either to create a venue or to rent one, but the
function remains the same.



Advantages
o Simple and convenient for users: It provides much of the infrastructure and other IT
services, which users can access anywhere via a web browser.

o Cost-Effective: It charges for the services provided on a per-use basis thus eliminating
the expenses one may have for on-premises hardware and software.
o Efficiently managing the lifecycle: It is designed to support the complete web
application lifecycle: building, testing, deploying, managing, and updating.
o Efficiency: It allows for higher-level programming with reduced complexity thus, the
overall development of the application can be more effective.
Example: Windows Azure
3)Software As A Service :
Software-as-a-Service (SaaS) is a way of delivering services and applications over the Internet.
Instead of installing and maintaining software, we simply access it via the Internet, freeing
ourselves from the complex software and hardware management. It removes the need to install
and run applications on our own computers or in the data centers eliminating the expenses of
hardware as well as software maintenance. SaaS provides a complete software solution that you
purchase on a pay-as-you-go basis from a cloud service provider. Most SaaS applications can be
run directly from a web browser without any downloads or installations required. The SaaS
applications are sometimes called Web-based software, on-demand software, or hosted software.

Advantages
o Cost-Effective: Pay only for what you use.
o Reduced time: Users can run most SaaS apps directly from their web browser without
needing to download and install any software. This reduces the time spent in installation
and configuration and can reduce the issues that can get in the way of the software
deployment.
o Accessibility: We can Access app data from anywhere.
o Automatic updates: Rather than purchasing new software, customers rely on a SaaS
provider to automatically perform the updates.
o Scalability: It allows the users to access the services and features on-demand.
Example: GoToMeeting

1.1.2 Limitations of Traditional Technology


Cloud computing is far more abstract as a virtual hosting solution. Instead of being accessible
via physical hardware, all servers, software, and networks are hosted in the cloud, off premises.
It is a real-time virtual environment hosted between several different servers at the same time. So
rather than investing money into purchasing physical servers in-house, you can rent data storage
space from cloud computing providers on a more cost-effective pay-per-use basis.
The main differences between cloud hosting and traditional web hosting are:
1)Resilience and Elasticity
The information and applications hosted in the cloud are evenly distributed across all the
servers, which are connected to work as one. Therefore, if one server fails, no data is lost and
downtime is avoided. The cloud also offers more storage space and server resources, including
better computing power. This means your software and applications will perform faster.
Traditional IT systems are not so resilient and cannot guarantee a consistently high level of
server performance. They have limited capacity and are susceptible to downtime, which can
greatly hinder workplace productivity.
2)Flexibility and Scalability
Cloud hosting offers an enhanced level of flexibility and scalability in comparison to traditional
data centres. The on-demand virtual space of cloud computing has unlimited storage space and
more server resources. Cloud servers can scale up or down depending on the level of traffic your
website receives, and you will have full control to install any software as and when you need to.
This provides more flexibility for your business to grow. With traditional IT infrastructure, you
can only use the resources that are already available to you. If you run out of storage space, the
only solution is to purchase or rent another server. If you hire more employees, you will need to
pay for additional software licences and have these manually uploaded on your office hardware.
This can be a costly venture, especially if your business is growing quite rapidly.
3)Automation
A key difference between cloud computing and traditional IT infrastructure is how they are
managed. Cloud hosting is managed by the storage provider who takes care of all the necessary
hardware, ensures security measures are in place, and keeps it running smoothly. Traditional
data centres require heavy administration in-house, which can be costly and time consuming for
your business. Fully trained IT personnel may be needed to ensure regular monitoring and
maintenance of your servers – such as upgrades, configuration problems, threat protection and
installations.
4)Running Costs
Cloud computing is more cost effective than traditional IT infrastructure due to the method of payment
for the data storage services. With cloud-based services, you only pay for what is used, similarly to how
you pay for utilities such as electricity. Furthermore, the decreased likelihood of downtime means
improved workplace performance and increased profits in the long run. With traditional IT infrastructure,
you will need to purchase equipment and additional server space upfront to adapt to business growth. If
this growth slows, you will end up paying for resources you don't use. Furthermore, the value of physical
servers decreases year on year, so the return on investment in traditional IT infrastructure is
quite low.
The differences between cloud computing and traditional computing are summarized below.
1) Cloud computing refers to the delivery of different services, such as data and programs, through the
internet on remote servers; traditional computing refers to the delivery of services on a local server.
2) Cloud computing takes place on third-party servers hosted by third-party hosting companies;
traditional computing takes place on physical hard drives and website servers.
3) Cloud computing gives the user the ability to access data anywhere at any time; in traditional
computing the user can access data only on the system in which it is stored.
4) Cloud computing is more cost effective, as the operation and maintenance of the server is shared
among several parties, which in turn reduces the cost of services; traditional computing is less cost
effective because one has to buy expensive equipment to operate and maintain the server.
5) Cloud computing is more user-friendly because the user can access data anytime, anywhere using the
internet; traditional computing is less user-friendly because data cannot be accessed anywhere, and to
access data on another system the user has to save it on an external storage medium.
6) Cloud computing requires a fast, reliable, and stable internet connection to access information
anywhere at any time; traditional computing does not require any internet connection to access data.
7) Cloud computing provides more storage space and servers as well as more computing power, so
applications and software run faster; traditional computing provides less storage.
8) Cloud computing provides scalability and elasticity, i.e., one can increase or decrease storage
capacity, server resources, etc., according to business needs; traditional computing does not provide
scalability and elasticity.
9) A cloud service is supported by the provider's support team; traditional computing requires one's
own team to maintain and monitor the system, which needs a lot of time and effort.
10) In the cloud, software is offered as an on-demand service (SaaS) that can be accessed through a
subscription; in traditional computing, software is purchased individually for every user and has to be
updated periodically.
Table 1.1 Cloud Computing vs Traditional Computing
1.1.3 Cloud Security
PHRs contain information such as: (a) demographic information, (b) the patients' medical history
including diagnoses, allergies, past surgeries, and treatments, (c) laboratory reports, (d) data
about health insurance claims, and (e) private notes of the patients about certain important
observed health conditions. More formally, the PHRs are managed through Internet-based tools
that permit patients to create and manage their health information as lifelong records that can
be made available to those who need access.
1.2 Objective
o A major reason for patients' apprehensions regarding the confidentiality of PHRs is the
nature of the cloud used to share and store the PHRs.
o To implement forward and backward access control.
o To formally analyze and verify the proposed methodology in order to validate its working
according to the specifications.
o To design the system so as to reduce the risk of PHR data theft.

1.3 Scope
It is important to mention that the scope of this project is limited to securing the PHR itself.
Moreover, it is assumed that communication between the client and the SRS is secured by the use of
standard protocols such as IPSec or SSL. These protocols are widely used over the internet and
are fully capable of securing communication; however, communication security is beyond the
scope of this project. The setup, key generation, and re-encryption phases are carried out at
the SRS (Setup and Re-encryption Server).



1.4 Thesis Contribution

Fig 1.1 Thesis Contribution



CHAPTER 2
LITERATURE REVIEW

2.1 Cloud Computing


Cloud computing is emerging as a new computing paradigm in the healthcare sector besides
other business domains. Large numbers of health organizations have started shifting the
electronic health information to the cloud environment. Introducing the cloud services in the
health sector not only facilitates the exchange of electronic medical records among the hospitals
and clinics, but also enables the cloud to act as a medical record storage center. Moreover,
shifting to the cloud environment relieves the healthcare organizations of the tedious tasks of
infrastructure management and also minimizes development and maintenance costs.
Nonetheless, storing the patient health data in the third-party servers also entails serious threats
to data privacy. Because of probable disclosure of medical records stored and exchanged in the
cloud, the patients' privacy concerns should essentially be considered when designing the
security and privacy mechanisms. Various approaches have been used to preserve the privacy of
the health information in the cloud environment. This survey aims to encompass the state-of-the-
art privacy-preserving approaches employed in the e-Health clouds. Moreover, the privacy-preserving
approaches are classified into cryptographic and non-cryptographic approaches, and a taxonomy of
the approaches is also presented. Furthermore, the strengths and weaknesses of the presented
approaches are reported and some open issues are highlighted.
Existing cloud storage systems lack privacy aware architectures that meet accessibility goals
for complex collaboration. This deficiency is fully realized in the healthcare industry, where
cloud-enabling technology blurs the ownership boundary of health and wellness information.
Whether among traditional `stovepiped' data silos, health information exchanges or personally
controlled health information repositories, various forms of privacy neglect are common
practice. We propose a paradigm shift in the interaction of users with cloud services that
removes unwarranted trust in the cloud service provider and provisions accessibility for
collaborators. To realize the paradigm shift, it is necessary to provide anonymity in data storage
and separate the administration of access policy and authorization from the mechanisms used for
enforcement. The dispensation of authorizations in the SAPPHIRE architecture bootstraps a
traditional Kerberos ticket-based approach with `trust verifications'. In our evaluation, we prove
the security properties of the SAPPHIRE architecture and implement a small scale prototype.
Our analysis shows that SAPPHIRE is a viable extension of collaborative health information



systems through the provision of anonymity and enhanced policy administration for the primary
data owner.
While using cloud storage services on resource-constrained mobile devices, the mobile user
needs to ensure the confidentiality of critical data before uploading it to the cloud storage. The
resource limitation of mobile devices restricts mobile users from executing complex security
operations using the computational power of the devices. To make security schemes suitable for
mobile devices, a large number of existing security schemes execute complex security operations
remotely on the cloud or a trusted third party. Alternatively, a few of the existing security
schemes focus on reducing the computational complexity of the cryptographic algorithms.
Keeping in view the resource limitation of mobile devices, this paper introduces an incremental
cryptographic version of the existing security schemes, such as the encryption-based, coding-based,
and sharing-based schemes, for improving block modification operations in terms of resource
utilization on mobile devices. The experimental results show significant improvement in resource
utilization on mobile devices while performing block insertion, deletion, and modification
operations as compared to the original versions of the aforementioned schemes.
Cloud computing has recently emerged as a new paradigm for hosting and delivering services
over the Internet. Cloud computing is attractive to business owners as it eliminates the
requirement for users to plan ahead for provisioning, and allows enterprises to start small and
increase resources only when there is a rise in service demand. However, despite the
fact that cloud computing offers huge opportunities to the IT industry, the development of cloud
computing technology is currently at its infancy, with many issues still to be addressed. In this
paper, we present a survey of cloud computing, highlighting its key concepts, architectural
principles, state-of-the-art implementation as well as research challenges. The aim of this paper
is to provide a better understanding of the design challenges of cloud computing and identify
important research directions in this increasingly important area.
This paper presents C-CLOUD, a democratic cloud infrastructure for renting computing
resources including non-cloud resources (i.e., computing equipment not part of any cloud
infrastructure, such as PCs, laptops, enterprise servers, and clusters). C-CLOUD enables an
enormous amount of surplus computing resources, in the range of hundreds of millions, to be
rented out to cloud users. Such sharing of resources allows resource owners to earn from idle
resources, and cloud users to have a cost-efficient alternative to large cloud providers. Compared
to existing approaches to sharing surplus resources, C-CLOUD has two key challenges: ensuring
the Service Level Agreement (SLA) and reliability of reservations made over heterogeneous
resources, and providing an appropriate mechanism to encourage sharing of resources. In this
context, C-CLOUD introduces a novel incentive mechanism that determines resource rents
parametrically based on their reliability and capability.
Scientific computing applications have benefited greatly from high performance computing
infrastructure such as supercomputers. However, we are seeing a paradigm shift in the
computational structure, design, and requirements of these applications. Increasingly, data-
driven and machine learning approaches are being used to support, speed-up, and enhance
scientific computing applications, especially molecular dynamics simulations. Concurrently,
cloud computing platforms are increasingly appealing for scientific computing, providing
“infinite” computing power, easier programming and deployment models,
and access to computing accelerators such as TPUs (Tensor Processing Units). This confluence
of machine learning (ML) and cloud computing represents exciting opportunities for cloud and
systems researchers. ML-assisted molecular dynamics simulations are a new class of workload,
and exhibit unique computational patterns. These simulations present new challenges for low-
cost and high-performance execution. We argue that transient cloud resources, such as low-cost
preemptible cloud VMs, can be a viable platform for this new workload. Finally, we present
some low-hanging fruits and long-term challenges in cloud resource management, and the
integration of molecular dynamics simulations into ML platforms (such as TensorFlow).
2.2 Security in Cloud
Continually disclosed vulnerabilities reveal that traditional computer architecture lacks the
consideration of security. This article proposes a security-first architecture, with an Active
Security Processor (ASP) integrated into conventional computer architectures. To reduce the
attack surface of ASP and improve the security of the whole system, the ASP is physically
isolated from Computation Processor Units (CPU) with an asymmetric address space, which
enables both ASP and CPU to run their operating system and applications independently in their
own memory space. Furthermore, the ASP, which has the highest privilege (Super Root) of the
whole system, possesses two advantageous features. First, the ASP can efficiently access all
CPU resources and collect multi-dimensional information to monitor malicious behaviors,
meanwhile, the CPU cannot access the ASP's private resources in any way. Second, instead of
being scheduled by CPUs, the ASP can actively manage the security mechanisms employed in
either CPUs or the ASP. Based on the security-first architecture, we introduce several typical
security tasks running on ASP. With different considerations in terms of system overhead,
complexity and performance, we also explore four typical system-level implementations for
integrating the ASP into the security-first architecture. The first-generation ASP was designed and
implemented based on 40 nm technology, and a secure computer system was implemented based on it.
Evaluations on this real hardware platform demonstrate that the security-first architecture can
protect the system effectively with minor performance impacts on computing workloads.
Caches pose a significant challenge to formal proofs of security for code executing on
application processors, as the cache access pattern of security-critical services may leak secret
information. This paper reveals a novel attack vector, exposing a low-noise cache storage
channel that can be exploited by adapting well-known timing channel analysis techniques. The
vector can also be used to attack various types of security-critical software such as hypervisors
and application security monitors. The attack vector uses virtual aliases with mismatched
memory attributes and self-modifying code to misconfigure the memory system, allowing an
attacker to place incoherent copies of the same physical address into the caches and observe
which addresses are stored in different levels of cache. We design and implement three different
attacks using the new vector on trusted services and report on the discovery of a 128-bit key
from an AES encryption service running in TrustZone on a Raspberry Pi 2. Moreover, we subvert
the integrity properties of an ARMv7 hypervisor that was formally verified against a cache-less
model. We evaluate well-known countermeasures against the new attack vector and propose a
verification methodology that allows one to formally prove the effectiveness of defence mechanisms
on the binary code of the trusted software.
As networks expand in size and complexity, they pose greater administrative and
management challenges. Software-defined networks (SDNs) offer a promising approach to
meeting some of these challenges. In this paper, we propose a policy-driven security architecture
for securing end-to-end services across multiple SDN domains. We develop a language-based
approach to design security policies that are relevant for securing SDN services and
communications. We describe the policy language and its use in specifying security policies to
control the flow of information in a multi-domain SDN. We demonstrate the specification of
fine-grained security policies based on a variety of attributes, such as parameters associated with
users and devices/switches, context information, such as location and routing information, and
services accessed in SDN as well as security attributes associated with the switches and
controllers in different domains. An important feature of our architecture is its ability to specify
path- and flow-based security policies that are significant for securing end-to-end services in
SDNs. We describe the design and the implementation of our proposed policy-based security
architecture and demonstrate its use in scenarios involving both intra- and inter-domain
communications with multiple SDN controllers. We analyze the performance characteristics of
our architecture as well as discuss how our architecture is able to counteract various security
attacks. The dynamic security policy-based approach, and the intelligent distribution of the
corresponding security capabilities as a service layer that enables flow-based security enforcement
and protection of a multitude of network devices against attacks, are important contributions of
this paper.
The North American Electric Reliability Corporation (NERC) uses transmission equipment
inventory and outage data to analyze outage trends to assist in identifying significant reliability
risks to the bulk power system (BPS). These risk areas enable NERC to identify priority projects
that provide coordinated and effective solutions to these identified risks to the reliability of the
BPS. This paper presents the framework of statistical analysis of transformer outage data to
identify outage trends across voltage levels. This paper presents the first study of the North
American combined inventory and outage data for both automatic and non-automatic outages of
transformer. The results of this analysis can be used as input to value-based transmission and
distribution reliability planning, as well as for probabilistic based operations and equipment
maintenance of transmission systems.
Epistemic uncertainty exists in system availability evaluation due to the lack of data and
information. To address it, this article proposes a series of definitions of the uncertainty theory-
based availability, called belief availability, expanding the scope of belief reliability by
introducing uncertainty-measured logistics and maintenance into belief reliability. Based on an
uncertain alternating renewal process, we construct a belief availability model for repairable
systems subject to the epistemic uncertainty. From the model, we derive formulas of several
belief availability metrics, including belief availability (inherent, achieved, and operational),
delay time ratio, maintenance time ratio, and belief failure frequency. We find an interesting
property that the states order or the initial state in the model will not influence these metrics. A
case study about the oxygen generation system (OGS) on the international space station was
conducted to analyze the impact of its working, logistics, and maintenance time on the OGS
belief operational availability. The results show a potential application in belief availability
tradeoff between the OGS's availability-related parameters. In addition, we compared the
proposed availability with the probability-based one, using the time distributions of the OGS states,
illustrating that our method can effectively reduce the deviation of availability evaluation with
insufficient data.
As higher degrees of operational autonomy are sought in engineered systems, resilient
command and control (C2) capabilities are essential for mitigating performance degradation and
disruptions. Autonomous C2 systems typically excel in performance optimization and failure
prevention within known or anticipated environments, but can exhibit poor efficacy given a lack
of preemptive knowledge of needs and environments. This paper presents a context-driven
decision engine for resilient C2 of engineered systems in emergent operating environments. The
solution integrates diagnostic and prognostic heuristics to 1) assess system state of health (SoH)
based on operational availability; 2) establish situational awareness; and 3) drive C2 decisions
based on scenario specific constraints and priorities. The solution is applied to an air traffic
management problem for autonomous aircraft operations based on a Federal Aviation
Administration Next Generation Air Transportation System (NextGen) concept. Agent-based
modeling and Monte Carlo simulation are used to evaluate the incident rate (λ) and mission
reliability (RM) of the proposed context-based solution against rule-based and utility-based C2
solutions, including the Radio Technical Commission for Aeronautics DO-185B standard and
Dynamic A-star Lite (D* Lite) algorithm. Paired t-tests are used to compare approaches based
on λ and RM given identical test cases. Results suggest that the proposed context-based solution
can concurrently enhance threat avoidance and adaptation capabilities in emergent
environments, whereas unilaterally optimized performance is observed via the rule-based and
utility-based C2 solutions.
Adaptive aiding, a concept that involves tailoring the time and nature of operator aid to
variation of tasks, operators, and environments, is examined. Aiding possibilities are discussed
from the perspective of application domains and the need to integrate adaptive aiding with other
intelligent systems. Particular attention is given to the attributes affecting the specification
process. ADAPT, a design tool for assisting designers in conceptualizing and specifying
functionality of adaptive aiding systems, is described. Emphasis is placed on a proposed
scenario analysis facility design and analysis of the specification process. ADAPT's
shortcomings are briefly discussed.
In examining the role of time in mental workload, the authors present a different perspective
from which to view the problem of assessment. Mental workload is plotted in three dimensions,
whose axes represent the effective time for action, the perceived distance from the desired goal
state, and the level of effort required to achieve the time-constrained goal. This representation
allows the generation of isodynamic workload contours that incorporate the factors of operator
skill and equifinality of effort. An adaptive interface for dynamic task reallocation is described
that uses this form of assessment to reconcile the joint aims of stable operator loading and
acceptable primary task performance by the total system.
The existing works that relate to the proposed work are presented below. The authors in (J.
Pecarina, S. Pu, and J. C. Liu, 2012) used a public key encryption based approach to uphold the
anonymity and unlinkability of health information in a semi-trusted cloud by separately submitting
the Personally Identifiable Information (PII). The patients encrypt the PHRs through the public
key of the Cloud Service Provider (CSP); the CSP decrypts the records using its private key,
stores the health record and the location of the file (index), and subsequently encrypts them
through symmetric key encryption. The administrative control of the patient over the PHRs is
maintained by pairing the location and the master key.
However, a limitation of the approach is that it allows the CSP, which may in turn act
maliciously, to decrypt the PHRs. On the other hand, we introduce a semi-trusted authority called
the SRS that re-encrypts the ciphertext generated by the PHR owner and issues keys to the users
that request access to the PHRs.
(Chen et al., 2012) introduced a method to exercise dynamic access control on the PHRs in the
multi-user cloud environment through the Lagrange multiplier using symmetric key encryption (SKE).
Automatic user revocation is a key characteristic of the approach. To overcome the complexities
of key management, a partial order relationship among the users is maintained. However, the
scheme requires the PHR owners to be online when the access is to be granted or revoked. Contrary
to the scheme presented in (T. S. Chen, C. H. Liu, T. L. Chen, C. S. Chen, J. G. Babu, and T. C.
Lin, 2012), our proposed approach does not require the PHR owners to be online to grant access
over the PHRs. Instead, the semi-trusted authority determines the access privileges for the users
and, after successful authorization, calculates the re-encryption keys for the users requesting
the access.
The authors in (M. Jafari, R. S. Naini, and N. P. Sheppard, 2011) used a Digital Rights
Management (DRM) based approach to offer patient-centric access control. The authors employed
Content Key Encryption (CKE) for encryption, and only the users with a lawful license are
permitted to access the health data. The first proxy re-encryption methodology was proposed in
(X. Liang, Z. Cao, H. Lin, and J. Shao, 2009). The policy in (X. Liang et al., 2009) is based on
the ciphertext, and the size of the ciphertext increases linearly with multiple uses, whereas the
policy of our technique is based on keys and does not affect the size of the ciphertext. This is
because the scheme in (X. Liang et al., 2009) requires a re-encryption step that is not needed in
our methodology. An approach to securely share the PHRs in a multi-owner setting, which is
divided into diverse domains, using Attribute Based Encryption (ABE) is presented by (Li et al.,
2013). That methodology is based on the methodology originally presented in (X. Liang et al.,
2009). The approach uses the proxy re-encryption technique to re-encrypt the PHRs after the
revocation of certain user(s). In the approach, the intricacies and cost of key management have
been effectively minimized and on-demand user revocation has been improved. Despite its
scalability, the approach is unable to efficiently handle the circumstances that require granting
access rights on the basis of users' identities. (Xhafa et al., 2014) also used Ciphertext-Policy
ABE (CP-ABE) to ensure user accountability. Besides protecting the privacy of the users, the
proposed approach is also capable of identifying the users that misbehave by distributing the
decryption keys to other users illegitimately. An approach to concurrently ensure fine-grained
access control and confidentiality of healthcare data outsourced to the cloud servers is presented
in (S. Yu, C. Wang, K. Ren, and W. Lou, 2010). The expensive tasks of re-encrypting data files,
updating secret keys, and preventing revoked users from learning the data contents are addressed
through proxy re-encryption, Key-Policy ABE (KP-ABE), and lazy re-encryption. The cloud servers
are delegated the tasks of re-encrypting the data files and subsequently storing them in the cloud
environment. However, in that framework the data owner is also assumed to be a trusted authority
that manages the keys for multiple owners and multiple users. Therefore, inefficiencies would
occur at the PHR owners' end in managing multiple keys for different attributes for multiple
owners. Our approach avoids this overhead because the tasks of key generation and key distribution
to different types of users are performed by the semi-trusted authority. The authors in (C. Leng,
H. Yu, and J. Huang, 2013) and (D. H. Tran, N. H. Long, Z. Wei, and W. Keong, 2011) also used
proxy re-encryption based approaches to offer fine-grained access control. Our proposed framework
permits the PHR encryption by the owners before storing at the cloud and introduces a semi-trusted
authority that re-encrypts the ciphertext without learning the contents of the PHRs. Only the
authorized users having the decryption keys issued by the semi-trusted authority can decrypt the
PHRs.



CHAPTER 3
METHODOLOGY

3.1 System Model


The proposed scheme employs proxy re-encryption for providing confidentiality and secure
sharing of PHRs through the public cloud. The architecture of the proposed SeSPHR
methodology is presented in Fig 3.1.

Fig 3.1 Architecture of SeSPHR

The proposed methodology to share the PHRs in the cloud environment involves three entities
namely: (a) the cloud, (b) the Setup and Re-encryption Server (SRS), and (c) the users. A brief
description of each of the entities is presented below.

3.2 Cloud Maintenance


The scheme proposes that the PHR owners store the PHRs on the cloud for subsequent sharing
with other users in a secure manner. The cloud is assumed to be an un-trusted entity, and the
users upload or download PHRs to or from the cloud servers. Since, in the proposed methodology,
the cloud resources are utilized only to upload and download the PHRs by both types of users,
no changes pertaining to the cloud are essential.



Fig 3.2 Cloud Maintenance

3.3 Encryption Module


The SRS is a semi-trusted server that is responsible for setting up public/private key pairs for
the users in the system. The SRS also generates the re-encryption keys for the purpose of secure
PHR sharing among different user groups. The SRS in the proposed methodology is considered a
semi-trusted entity; therefore, we assume it generally follows the protocol honestly but is
curious in nature. The keys are maintained by the SRS, but the PHR data is never transmitted to
the SRS. Encryption and decryption operations are performed at the users' ends. Besides key
management, the SRS also implements the access control on the shared data. The SRS is an
independent server that cannot be deployed over a public cloud, because the cloud is an un-trusted
entity. The SRS can be maintained by a trusted third-party organization or by a group of hospitals
for the convenience of the patients. It can also be maintained by a group of connected patients.
However, an SRS maintained by hospitals or by a group of patients will generate more trust due to
the involvement of health professionals and/or the patients' own control over the SRS.
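As a minimal illustration of this key setup role, the following Java sketch shows how an SRS
component could generate and retain a public/private key pair for a newly registered user. The
report does not prescribe a particular public-key algorithm, so RSA via the standard java.security
API is assumed here purely for illustration; the class and method names (SrsKeyService,
registerUser) are not part of the actual implementation.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: the SRS keeps the key pairs it generates for registered
// users, but PHR data itself is never sent to the SRS.
public class SrsKeyService {

    private final Map<String, KeyPair> userKeys = new HashMap<String, KeyPair>();

    // Generate and retain a public/private key pair for a newly registered user.
    public KeyPair registerUser(String userId) throws NoSuchAlgorithmException {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA"); // assumed algorithm
        generator.initialize(2048);
        KeyPair pair = generator.generateKeyPair();
        userKeys.put(userId, pair);
        return pair;
    }
}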



3.4 Interaction Module
Generally, the system has two types of users:

(a) The patients (owners of the PHR who want to securely share the PHRs with others)

(b) The family members or friends of patients, doctors and physicians, health insurance
companies’ representatives, pharmacists, and researchers.

In the SeSPHR methodology, the friends or family members are considered private domain users
whereas all the other users are regarded as the public domain users. The users of both the private
and public domain may be granted various levels of access to the PHRs by the PHR owners. For
example, the users that belong to private domain may be given full access to the PHR, whereas
the public domain users, such as physicians, researchers, and pharmacists may be granted access
to some specific portions of the PHR. Moreover, the aforementioned users may be allowed full
access to the PHRs if deemed essential by the PHR owner. In other words, the SeSPHR
methodology allows the patients to exercise the fine-grained access control over the PHRs. All
of the users in the system are required to be registered with the SRS to receive the services of
the SRS. The registration is based on the roles of the users, for instance, doctor, researcher, and
pharmacist.

3.4.1 Admin Module

PHR Owner need to create an account to login into PHR owner account. If account already exist
he can directly login using username and password. After login PHR owner can access options
like view profile, add patient details, view key requests, view clinical reports, logout. If he enters
incorrect username and password he will not allowed to access the PHR owner account he is
specifying. The flowchart is shown in Fig 3.3.



Fig 3.3 Admin Module
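Since the project is implemented in Java/J2EE with a MySQL database (see the software
requirements in Chapter 4), the credential check performed at login can be sketched as below.
This is a minimal illustration only; the database, table, and column names (sesphr, phr_owner,
username, password) are assumptions, and a real deployment should store salted password hashes
rather than plain-text passwords.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginCheck {

    // Returns true only if a row with the given credentials exists.
    // Database, table, and column names are assumed for illustration.
    public static boolean isValidOwner(String username, String password) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/sesphr";               // assumed database name
        String sql = "SELECT 1 FROM phr_owner WHERE username = ? AND password = ?";
        try (Connection con = DriverManager.getConnection(url, "root", "");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, username);
            ps.setString(2, password);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}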

3.4.2 User Module


The PHR user needs to create an account to log in to the PHR user module. If the account already
exists, the user can directly log in using the username and password. After login, the PHR user
can access options such as view user profile, request key, view access control, view clinical
reports, view patient details, and logout. If an incorrect username or password is entered, access
to the specified PHR user account is not allowed. The flowchart is shown in Fig 3.4.



Fig 3.4 User Module
3.5 Proposed Partitioning Methodology
The PHR is logically partitioned into the following four portions:

o Personal information;
o Medical information;
o Insurance-related information;
o Prescription information.
However, it is noteworthy that the above partitioning is not rigid. It is at the discretion of the
user to partition the PHR into fewer or more partitions. The PHRs can be conveniently partitioned
and can be represented in standard formats, for example XML. Moreover, the PHR owner may place
more than one partition into the same level of access control. Any particular user might not be
granted full access to the health records, and some of the PHR partitions may be restricted for
that user. For example, a pharmacist may be given access to prescription and insurance-related
information, whereas personal and medical information may be restricted for the pharmacist.
Likewise, a family member or friend may be given full access to the PHR. A researcher might only
need access to the medical records, with the personal details of the patients de-identified. The
access rights over different PHR partitions are determined by the PHR owner and are delivered to
the SRS at the time of data uploading to the cloud.
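As a minimal illustration of the partitioning described above, the following Java sketch models
the four default PHR portions and the per-user access rights that the owner delivers to the SRS at
upload time. The identifiers (PhrPartition, PhrAccessList, the example user IDs) are illustrative
assumptions and not part of the reported implementation.

import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// The four default logical PHR portions of Section 3.5 (an owner may define fewer or more).
enum PhrPartition {
    PERSONAL_INFORMATION,
    MEDICAL_INFORMATION,
    INSURANCE_INFORMATION,
    PRESCRIPTION_INFORMATION
}

// A per-PHR access list of the kind delivered to the SRS when the PHR is uploaded.
class PhrAccessList {

    private final Map<String, Set<PhrPartition>> rights = new HashMap<String, Set<PhrPartition>>();

    void grant(String userId, Set<PhrPartition> partitions) {
        rights.put(userId, EnumSet.copyOf(partitions));
    }

    boolean isAllowed(String userId, PhrPartition partition) {
        return rights.getOrDefault(userId, EnumSet.noneOf(PhrPartition.class)).contains(partition);
    }

    // Example from the text: a pharmacist sees prescription and insurance data,
    // while a family member is granted the full record.
    static PhrAccessList examplePolicy() {
        PhrAccessList acl = new PhrAccessList();
        acl.grant("pharmacist01", EnumSet.of(PhrPartition.PRESCRIPTION_INFORMATION,
                                             PhrPartition.INSURANCE_INFORMATION));
        acl.grant("familyMember01", EnumSet.allOf(PhrPartition.class));
        return acl;
    }
}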

3.6 SeSPHR Services


The proposed methodology provides the following services for the PHRs shared over the public
cloud.

o Confidentiality

o Secure PHR sharing among the groups of authorized users

o Securing PHRs from unauthorized access of valid insiders

o Backward and forward access control


In the proposed methodology, the cloud is not considered a trusted entity. The features of the
cloud computing paradigm, such as the shared pool of resources, multi-tenancy, and virtualization,
might generate many sorts of insider and outsider threats to the PHRs that are shared over the
cloud. Therefore, it is important that the PHRs be encrypted before being stored at the
third-party cloud server. The PHR is first encrypted at the PHR owner's end and is subsequently
uploaded to the cloud. The cloud merely acts as a storage service in the proposed methodology.
The encryption keys and other control data are never stored on the cloud. Therefore, at the
cloud's end the confidentiality of the data is well achieved. Even if an unauthorized user at the
cloud by some means obtains the encrypted PHR file, the file cannot be decrypted because the
control data does not reside at the cloud; hence the confidentiality of the PHR is ensured.
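The report does not fix the symmetric cipher used at the owner's end; the sketch below assumes AES
through the standard javax.crypto API purely to illustrate that only ciphertext leaves the owner's
machine, while the key and other control data stay off the cloud. The class and method names are
illustrative only.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Illustrative sketch: encrypt a PHR partition at the owner's end so that only
// ciphertext is uploaded to the cloud (AES is an assumption, not mandated by the report).
public class OwnerSideEncryption {

    public static byte[] encryptPartition(byte[] partitionBytes, SecretKey key) throws Exception {
        // A production system should prefer an authenticated mode such as AES/GCM.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        return cipher.doFinal(partitionBytes);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();                       // stays with the owner
        byte[] cipherText = encryptPartition("blood group: O+".getBytes("UTF-8"), key);
        System.out.println("Uploaded ciphertext length: " + cipherText.length);
    }
}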
The uploaded PHRs are encrypted by the owner and the rest of the users obtain the plain data
by utilizing the re-encryption key that is computed by the SRS. The SRS generates the re-
encryption parameters only for the allowed partitions corresponding to the requesting user.
Therefore, the privacy of the entire system is not disturbed by a compromised legitimate group
member.
The Access Control List (ACL) specifies all the rights pertaining to each of the users and is
specified by the PHR owner. The rights are specified based on the categories of the users and are
extended or limited with the approval of the PHR owner. The SRS calculates and sends the
re-encryption parameters based on the specified rights on the partitions. Therefore, even
legitimate users cannot access unauthorized partitions.
A newly joining member obtains the keys from the SRS. The shared data is encrypted with
the keys of the owner only. Access to the data for a newly joining member is granted with the
approval of the SRS. Moreover, introducing a new key in the system does not require
re-encryption of the whole data. Similarly, a departing user is removed from the ACL and the
corresponding keys are deleted. The deletion of the user keys and removal from the ACL result
in denial of access to the PHR for any illegitimate access attempts afterwards. Therefore, the
proposed methodology is effectively secure because it restricts the access of departing users
(forward access control) and permits the new users to access the past data (backward access
control).
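A minimal sketch of this forward and backward access control bookkeeping at the SRS is given
below, assuming the ACL and key store are simple in-memory maps; the class and method names
(AccessControlRegistry, revokeUser, admitUser) are illustrative assumptions only.

import java.security.KeyPair;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of join/leave handling at the SRS.
public class AccessControlRegistry {

    private final Map<String, Set<String>> acl = new HashMap<String, Set<String>>();   // userId -> allowed partitions
    private final Map<String, KeyPair> userKeys = new HashMap<String, KeyPair>();      // userId -> key pair

    // Departing user: drop the ACL entry and delete the keys, so later access attempts
    // fail (forward access control). No re-encryption of the stored PHRs is required.
    public void revokeUser(String userId) {
        acl.remove(userId);
        userKeys.remove(userId);
    }

    // Newly joining user: receives keys and an ACL entry; the existing ciphertext stays
    // as-is and becomes readable once the SRS computes re-encryption parameters for the
    // allowed partitions (backward access control).
    public void admitUser(String userId, KeyPair keys, Set<String> allowedPartitions) {
        userKeys.put(userId, keys);
        acl.put(userId, allowedPartitions);
    }
}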
The SRS is considered a semi-trusted authority that is honest but curious. In general, the SRS
is assumed to follow the protocol honestly. Although the SRS generates and stores the key pair
for each of the users, the data whether encrypted or plain is never transmitted to the SRS. The
SRS is only responsible for key management and re-encryption parameters generation.
Moreover, the access control is also enforced by the SRS. However, maintenance of the SRS is
a limitation and challenge of the proposed methodology.

3.7 SeSPHR Data Flow


The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to
represent a system in terms of the input data to the system, the various processing carried out on
this data, and the output data generated by the system.
o The data flow diagram (DFD) is one of the most important modeling tools. It is used
to model the system components. These components are the system process, the data
used by the process, an external entity that interacts with the system and the
information flows in the system.

o A DFD shows how the information moves through the system and how it is modified by
a series of transformations. It is a graphical technique that depicts information flow
and the transformations that are applied as data moves from input to output. A DFD
may be used to represent a system at any level of abstraction and may be partitioned
into levels that represent increasing information flow and functional detail.



Fig 3.5 Data Flow diagram
3.8 SeSPHR Sequence Diagram
The cloud server can view all PHR owners and PHR users, authorize them, view access
control requests, and grant access. The PHR owner can add patient details, view patient
details, view key requests, and view clinical reports, while the PHR user can request keys
and view access control. These interactions are shown in Fig 3.6.



Fig 3.6 SeSPHR Sequence Diagram



CHAPTER 4
RESULT ANALYSIS

4.1 Experimental Methodology


4.1.1 System requirements
Minimum hardware requirements depend strongly on the particular application being developed.
Applications that need to store large arrays or objects in memory will require more RAM, whereas
applications that need to perform numerous calculations or tasks more quickly will require a
faster processor.
o System : Pentium Dual Core.

o Hard Disk : 120 GB.

o Monitor : 15’’ LED

o Input Devices : Keyboard, Mouse

o Ram : 1 GB

4.1.2 Software requirements


The functional requirements, or the overall description documents, include the product perspective and features, the operating system and operating environment, graphics requirements, design constraints, and user documentation. Enumerating the requirements and implementation constraints gives a general overview of the project with regard to its areas of strength and deficit and how to address them. The software stack used for the project is listed below, followed by a brief sketch of typical database access with this stack.
o Operating System : Windows 7.

o Coding Language : JAVA/J2EE

o Tool : Netbeans 7.2.1

o Database : MYSQL
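Given the Java/J2EE and MySQL stack listed above, the web tier would typically reach the database through JDBC. The snippet below is an assumed sketch only: the database name (phrdb), the table and column names, and the credentials are placeholders rather than the project's actual configuration, and the MySQL Connector/J driver is assumed to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Assumed JDBC access sketch for the stack listed above; database name, table,
// and credentials are placeholders, not the project's actual configuration.
public class PhrDao {
    private static final String URL = "jdbc:mysql://localhost:3306/phrdb";

    public static String fetchEncryptedRecord(String patientId) throws Exception {
        try (Connection con = DriverManager.getConnection(URL, "root", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT encrypted_data FROM phr_records WHERE patient_id = ?")) {
            ps.setString(1, patientId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("encrypted_data") : null;
            }
        }
    }
}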

4.1.3 Required Data for Analysis


Patient details are required as input; the details collected from each patient are shown in Fig 4.1.



Fig 4.1 Required Data for Analysis



4.2 Screenshots
Step 1 : Open any browser and navigate to http://localhost:9090.

The browser is directed to the Apache Tomcat server page, as shown in the figure below.
Step 2 : Click on Tomcat Manager, enter the username admin (no password is required), and press Enter.



Step 3 : You will be redirected to the manager page, where you select the SeSPHR project; you are then taken to the main interface, called “Home”, which contains the Cloud Server, PHR Owner, and PHR User modules.

Step 4 : Log in to the cloud server using the username “Server” and the password “Server”.



Step 5 : The Cloud Server module provides features such as Authorize PHR User, Authorize PHR Data Owner, Client Report, View Patient Details, Access Control Requests, Encryption Key Requests, View Key Transactions, View Result in Chart, and Logout.

Step 6 : If you are an existing PHR owner, log in; if you are a new user, register an account and then log in.



Step 7 : The PHR Owner interface provides View Profile, Add Patient Details, View Patient Details, View Key Requests, View Clinical Reports, and Logout.

Step 8 : To add patient details, click the Add Patient Details option and send a request for an encryption key to the cloud.

Step 9 : The cloud server generates a symmetric encryption key for the patient details, permits the PHR owner to enter the details, and allows the encrypted PHR data to be stored on the cloud server.
Patient details are entered after the symmetric key has been generated.
Step 10 : As a PHR user, register as a doctor or nurse. After logging in, locate the encrypted PHR on the cloud server and send a key request to the PHR owner by specifying the PHR owner's name and the patient's name.



Step 11 : The PHR owner generates a key for you if they choose to grant you access.

Step 12 : In the PHR User section, the decrypted patient details of the requested patient become available for viewing.



Decrypted patient details are shown below.
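Taken together, Steps 8 through 12 amount to a symmetric encrypt-store-share-decrypt cycle. The sketch below illustrates that cycle with the standard Java AES APIs; the key handling, Base64 encoding, and names are illustrative assumptions rather than the project's actual code, and the default AES mode is used only for brevity (a production system would use an authenticated mode such as AES/GCM).

import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

// Minimal AES sketch of the Steps 8-12 flow; key handling and names are
// illustrative assumptions, not the project's actual implementation.
public class PhrCryptoSketch {

    public static void main(String[] args) throws Exception {
        // Step 9 (cloud side): generate a symmetric key for the PHR owner.
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey key = gen.generateKey();

        // Step 9 (owner side): encrypt the patient details before storing them in the cloud.
        String patientDetails = "patient-id: 1001; diagnosis: ...";
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        String storedCiphertext =
                Base64.getEncoder().encodeToString(cipher.doFinal(patientDetails.getBytes("UTF-8")));

        // Steps 10-12 (user side): after the owner shares the key, decrypt the stored record.
        SecretKey sharedKey = new SecretKeySpec(key.getEncoded(), "AES");
        Cipher decipher = Cipher.getInstance("AES");
        decipher.init(Cipher.DECRYPT_MODE, sharedKey);
        String recovered =
                new String(decipher.doFinal(Base64.getDecoder().decode(storedCiphertext)), "UTF-8");

        System.out.println("round trip correct: " + recovered.equals(patientDetails));
    }
}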



CHAPTER 5
CONCLUSION AND FUTURE WORK

5.1 Conclusion
We proposed a methodology to securely store the PHRs and transmit them to the authorized entities in the cloud. The methodology preserves the confidentiality of the PHRs and enforces a patient-centric access control to different portions of the PHRs based on the access granted by the patients. We implemented a fine-grained access control method in such a way that even valid system users cannot access those portions of the PHR for which they are not authorized. The PHR owners store the encrypted data on the cloud, and only the authorized users possessing valid re-encryption keys issued by a semi-trusted proxy are able to decrypt the PHRs. The role of the semi-trusted proxy is to generate and store the public/private key pairs for the users in the system. In addition to preserving the confidentiality and ensuring patient-centric access control over the PHRs, the methodology also administers the forward and backward access control for departing and newly joining users, respectively. Moreover, we formally analyzed and verified the working of the SeSPHR methodology through the HLPN, SMT-Lib, and the Z3 solver. The performance evaluation was conducted on the basis of the time consumed for key generation, encryption and decryption operations, and turnaround time. The experimental results exhibit the viability of the SeSPHR methodology for securely sharing the PHRs in the cloud environment.

5.2 Future Work


o As technologies advance, it becomes easier to attack stored data; therefore, additional methodologies can be combined with SeSPHR to further strengthen the security of sharing Personal Health Records (PHRs).

o This work also presented a comprehensive hierarchical modeling and analysis of DCNs. The systems are based on tree-based, switch-centric network topologies (three-tier and fat-tree) that consist of three layers of switches accompanying sixteen physical servers. We attempted to construct hierarchical models for the system consisting of three layers, including an RG at the system layer, a fault tree at the subsystem layer, and an SRN at the component layer.

o We also conducted a number of comprehensive analyses regarding reliability and availability. The results showed that the distribution of active nodes in the network can enhance the availability and reliability of cloud computing systems. Furthermore, the MTTF and MTTR of the physical servers are the major impacting factors, whereas those of the links are important in maintaining high availability for the system. The results of this study can facilitate the development and management of practical cloud computing centers.
