
FLEXIBLE DATA ACCESS CONTROL BASED ON TRUST AND

REPUTATION IN CLOUD COMPUTING

Abstract

Cloud computing offers a new way of delivering services and has become a popular service platform. Storing user data at a cloud data center greatly relieves the storage burden of user devices and brings access convenience. Due to distrust in cloud service providers, users generally store their crucial data in an encrypted form. But in many cases, the data need to be accessed by other entities in order to fulfill an expected service, e.g., an eHealth service. How to control personal data access in the cloud is a critical issue. Various application scenarios call for flexible control over cloud data access based on data owner policies and application demands. Either data owners or trusted third parties or both should be able to participate flexibly in this control. However, existing work has not yet provided an effective and flexible solution that satisfies this demand. On the other hand, trust plays an important role in data sharing. It helps overcome uncertainty and avoid potential risks. But the literature still lacks a practical solution for controlling cloud data access based on trust and reputation. In this paper, we propose a scheme to control data access in cloud computing based on trust evaluated by the data owner and/or reputations generated by a number of reputation centers, in a flexible manner, by applying Attribute-Based Encryption and Proxy Re-Encryption. We integrate the concept of context-aware trust and reputation evaluation into a cryptographic system in order to support various control scenarios and strategies. The security and performance of our scheme are evaluated and justified through extensive analysis, security proof, comparison and implementation. The results show the efficiency, flexibility and effectiveness of our scheme for data access control in cloud computing.

Chapter I

Introduction

Cloud computing offers a new way of delivering services by rearranging various resources (e.g., storage, computing and services) and providing them to users based on their demands. Cloud computing provides a big resource pool by linking various network resources together. It has desirable properties, such as scalability, elasticity, fault-tolerance, and pay-per-use. Therefore, it has become a promising service platform that is reshaping the structure of Information Technology. In many situations, different Cloud Service Providers (CSPs) are requested to collaborate in order to provide an expected service to a user. On the other hand, reputation and trust relationships in different contexts can be assessed based on social networking activities, behaviors and experiences, as well as performance evaluation. With the rapid growth of mobile communications and the wide usage of mobile devices, people nowadays perform various social activities with their mobile devices, e.g., calling, chatting, sending short messages, and conducting instant social activities via various wireless network connections. Social trust relationships can be established and assessed in a digital world, as illustrated in [29]. Reputation can be assessed based on feedback and the Quality of Service (QoS) of an entity [30]. Trust and reputation play a decisive role in cyber security.

A. Motivation

One important issue in cloud computing is data security. Private user data are stored in the data center of a CSP to relieve the storage and computing burden of personal devices. Typical examples are social security records and health data monitored by wearable sensors. However, the CSP could be curious about personal privacy [38]. Thus, critical personal data stored at the CSP are generally encrypted and their access is controlled. Obviously, these personal data may need to be accessed by other entities in order to fulfill a cloud service. How to control personal data access at the CSP is a practical issue. Rationally, the data owner should control access to its own data. But in many situations, the data owner is not available or has no idea how to exercise this control, for example, when personal health records should be accessed by several medical experts in order to figure out a treatment solution, or by a foreign physician when the data owner is traveling abroad and runs into a health problem. Therefore, an access control agent is expected in this situation in order to reduce the risk borne by the data owner in personal data management. Considering both of the above practical demands, a heterogeneous scheme that can flexibly control data access in cloud computing is expected.

Chapter II

System Analysis

System analysis is the overall analysis of a system before implementation, aimed at arriving at a precise solution. Careful analysis of a system before implementation prevents post-implementation problems that might arise from a poor understanding of the problem statement. This justifies the necessity of systems analysis. Analysis is the first crucial step: a detailed study of the various operations performed by a system and of their relationships within and outside the system. Analysis also defines the boundaries of the system and is followed by design and implementation.

Existing System

A number of solutions have been proposed for protecting data access in the cloud. Access Control List (ACL) based solutions suffer from the drawback that computation complexity grows linearly with the number of data-groups [4] or the number of users in the ACL [5]. Role Based Access Control (RBAC) cannot flexibly support various data access demands that rely on trust [31]. In recent years, access control schemes based on Attribute-Based Encryption (ABE) were proposed for controlling cloud data access based on attributes in order to enhance flexibility [13-15, 18, 19, 22, 24]. However, the computation cost of these solutions is generally high due to the complexity of the attribute structure. The time spent on data encryption, decryption and key management is greater than that of symmetric or asymmetric key encryption. Critically, most existing schemes cannot support controlling cloud data access by either the data owner or access control agents or both. This fact greatly hinders the practical deployment of existing schemes. Current research is still at the stage of academic study. Little work has been proposed to control the access of data stored at the CSP based on the trust evaluated by the data owner, or the reputation generated by a trusted third party (e.g., a reputation center), or both, in a uniform design.

Proposed System

In this paper, we propose a scheme to flexibly control data access based on trust and reputation in cloud computing. We propose multi-dimensional controls on cloud data access based on the policies and strategies set by the data owner. Concretely, the data owner encrypts its data with a symmetric secret key K. This encryption key can be divided into multiple parts in order to support various control strategies. For example, the data owner can control its data access based on either individual trust evaluation or reputation generated by multiple reputation centers (RCs), or according to both of the above, in order to highly ensure data security and privacy in various situations. In more detail, the secret encryption key K can be divided into multiple parts K1, K2, ..., Kn (n ≥ 0). The data owner encrypts different parts of K with different encryption keys that are respectively managed by the data owner and a number of RCs regarding different reputation properties in different contexts (e.g., user feedback, brand rank and credibility in medical treatment). Later on, the data owner and/or RCs can control the data access following the way of data encryption handled by the data owner. Specifically, the contributions of this paper can be summarized as below: 1) We motivate securing cloud data by controlling its access based on the trust and reputation evaluated by the data owner and/or multiple reputation centers in a flexible manner. 2) To the best of our knowledge, our scheme is one of the first to flexibly control cloud data access in an efficient way by integrating the concept of trust and reputation evaluation into a cryptographic system. It can support various control policies and strategies in different scenarios. 3) We prove and justify the security and advanced performance of our scheme through extensive analysis, security proof and implementation.
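To make the key-division step concrete, the following is a minimal sketch of one way the symmetric key K could be split into n parts such that all parts are needed to reconstruct it (simple XOR-based n-of-n sharing). The paper does not prescribe this particular construction; the class and method names here are ours, for illustration only.

```java
import java.security.SecureRandom;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Illustrative sketch only: XOR-based n-of-n splitting of a symmetric key K.
// K = parts[0] XOR parts[1] XOR ... XOR parts[n-1]; no single part reveals K.
public class KeySplitDemo {
    private static final SecureRandom RNG = new SecureRandom();

    // Split key bytes into n parts: the first n-1 parts are random,
    // the last part is K XOR (all previous parts).
    static byte[][] split(byte[] key, int n) {
        byte[][] parts = new byte[n][key.length];
        byte[] last = key.clone();
        for (int i = 0; i < n - 1; i++) {
            RNG.nextBytes(parts[i]);
            for (int j = 0; j < key.length; j++) {
                last[j] ^= parts[i][j];
            }
        }
        parts[n - 1] = last;
        return parts;
    }

    // Recombine all parts by XOR to recover K.
    static byte[] combine(byte[][] parts) {
        byte[] key = new byte[parts[0].length];
        for (byte[] part : parts) {
            for (int j = 0; j < key.length; j++) {
                key[j] ^= part[j];
            }
        }
        return key;
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey k = kg.generateKey();
        // e.g., one part kept under the owner's control, two parts tied to RCs
        byte[][] parts = split(k.getEncoded(), 3);
        System.out.println(java.util.Arrays.equals(k.getEncoded(), combine(parts))); // true
    }
}
```

In the actual scheme, each part would then be encrypted under keys managed by the data owner and the respective RCs (via ABE and PRE), so that the release of each part is tied to a trust or reputation decision.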

Chapter III

Feasibility Study

The preliminary investigation examines project feasibility: the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational and economic feasibility of adding new modules and debugging the existing system. All systems are feasible if they are given unlimited resources and infinite time. Three aspects are considered in the feasibility study portion of the preliminary investigation:

Operational Feasibility

The application does not require additional manual involvement or labor for maintenance of the system. The cost of training is minimized due to the user-friendliness of the developed application. Recurring expenditure on consumables and materials is minimized.

Technical Feasibility

Keeping in mind the existing system network, software and hardware already available, the application, generated in Java, provides an executable file that requires only Tomcat, offering compatibility from Windows 98 onward without having to load additional Java software. No additional hardware or software is required, which makes the application technically feasible.

Economic Feasibility

The system is economically feasible keeping in mind:

 Lesser investment towards training.
 One-time investment towards development.
 Minimized recurring expenditure towards training, facilities and consumables.
 The system as a whole is economically feasible over a period of time.

Chapter IV

System Design

System design concentrates on moving from the problem domain to the solution domain. This important phase is composed of several steps. It provides the understanding and procedural details necessary for implementing the system recommended in the feasibility study. Emphasis is on translating the performance requirements into design specifications.

The design of any software involves mapping the software requirements into functional modules. Developing a real-time application or any system utility involves two processes. The first process is to design the system for implementation. The second is to construct the executable code.

Software design has evolved from an intuitive art dependent on experience into a science that provides systematic techniques for software definition. Software design is the first step in the development phase of the software life cycle.

Before designing the system, user requirements were identified and information was gathered to verify the problem and evaluate the existing system. A feasibility study was conducted to review alternative solutions and provide cost and benefit justification. To overcome the shortcomings of the existing system, the proposed system is recommended. At this point the design phase begins.

The process of design involves conceiving and planning out a solution in the mind and expressing it in a reviewable form. In software design there are three distinct activities: external design, architectural design and detailed design. Architectural design and detailed design are collectively referred to as internal design. External design of software involves conceiving, planning out and specifying the externally observable characteristics of a software product.

INPUT DESIGN:

Systems design is the process of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. Systems design can be seen as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering.

Input Design is the process of converting a user-oriented description of the inputs to a computer-based business system into a programmer-oriented specification.

• Input data were found to be available for establishing and maintaining master and transaction files and for creating output records.

• The most suitable types of input media, for either off-line or on-line devices, were selected after a study of alternative data capture techniques.

INPUT DESIGN CONSIDERATIONS

• The field length must be documented.

• The sequence of fields should match the sequence of the fields on the source document.

• The data format must be identified to the data entry operator.

Design input requirements must be comprehensive. Product complexity and the risk associated with its use dictate the amount of detail.

• Functional requirements specify what the product does, focusing on its operational capabilities and the processing of inputs and resultant outputs.

• Performance requirements specify how much or how well the product must perform, addressing such issues as speed, strength, response times, accuracy, limits of operation, etc.

OUTPUT DESIGN:

A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs.

In output design it is determined how the information is to be displayed for immediate need and also as hard copy output. Output is the most important and most direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and supports decision-making.

1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create documents, reports, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following objectives:

• Convey information about past activities, current status, or projections of the future.

• Signal important events, opportunities, problems, or warnings.

• Trigger an action.

• Confirm an action.

Architecture Diagram

Dataflow Diagram

The Data Flow Diagram (DFD) is a graphical model showing the inputs, processes, storage and outputs of a system procedure in structured analysis. A DFD is also known as a Bubble Chart.

The data flow diagram provides additional information that is used during the analysis of the information domain, and serves as a basis for the modeling of functions. The description of each function presented in the DFD is contained in a process specification called a PSPEC.

DFD Symbols

Data Flow

Arrows marking the movement of data through the system indicate data flows. A data flow is a pipeline carrying packets of data from an identified point of origin to a specific destination.

Process

Bubbles or circles are used to indicate where incoming data flows are processed and transformed into outgoing data flows. The processes are numbered and named to indicate their occurrence in the system flow.

External Entity

A rectangle indicates any source or destination of data. The entity can be a class of people, an organization, or even another system. The function of an external entity is to supply data to, or receive data from, the system. External entities have no interest in how the data are transformed.

Data Store

A data store is denoted by an open rectangle. Programs and subsystems have complex interdependencies, including flows of data, flows of control, and interaction with data stores. A data store is used to identify such holding points.

Dataflow diagram

Chapter V

System Requirements

The hardware and software specification states the minimum hardware and software required to run the project. The hardware configuration specified below is not by any means the optimal hardware requirement. The software specification likewise gives just the minimum requirements, and the performance of the system may be slow on such a system.

Hardware Requirements

 System : Pentium IV, 2.4 GHz
 Hard Disk : 40 GB
 Floppy Drive : 1.44 MB
 Monitor : 15" VGA color
 Mouse : Logitech
 Keyboard : 110 keys, enhanced
 RAM : 256 MB

Software Requirements

 Operating System : Windows
 Front End : .NET
 Back End : MySQL

Chapter VI

System Implementation

Implementation is the stage in the project where the theoretical design is turned into a working system. The implementation phase constructs, installs and operates the new system. The most crucial part of achieving a successful new system is ensuring that it works efficiently and effectively.

There are several activities involved while implementing a new project:

• End user Training

• End user Education

• Training on the application software

Modules

Access Control on Encrypted Data

Different cryptographic mechanisms are applied to realize access control on encrypted data. By adopting a traditional symmetric key cryptographic system, the data owner can classify data with similar ACLs into a data-group before outsourcing to the CSP, and then encrypt each data-group with a symmetric key. The symmetric key is distributed to the users in the ACL, so that only the users in the ACL can access the corresponding group of data [4]. The main drawback of this approach is that the number of keys managed by the data owner grows linearly with the number of data-groups. A change of the trust relationship between one user and the data owner can force the symmetric key to be revoked, which impacts other users in the same ACL and increases the burden of key management. Thus, this solution is impractical in many real application scenarios.
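The sketch below illustrates the data-group approach of [4] described above: one AES key per data-group, with each group's data encrypted under its own key before outsourcing. All names are ours and key distribution to ACL members is omitted; this is a sketch of the idea, not of the cited system.

```java
import java.util.HashMap;
import java.util.Map;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Illustrative sketch: per-data-group symmetric encryption (cf. [4]).
public class DataGroupEncryption {
    // One AES key per data-group, identified here by its ACL id.
    private final Map<String, SecretKey> groupKeys = new HashMap<>();

    private SecretKey keyFor(String aclId) throws Exception {
        SecretKey k = groupKeys.get(aclId);
        if (k == null) {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            k = kg.generateKey(); // later distributed to the users in this ACL
            groupKeys.put(aclId, k);
        }
        return k;
    }

    // Encrypt one data item under its group's key; the random GCM IV is
    // prepended to the ciphertext so it can be stored alongside it at the CSP.
    byte[] encryptForGroup(String aclId, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, keyFor(aclId)); // provider picks a fresh IV
        byte[] ct = c.doFinal(data);
        byte[] iv = c.getIV();
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // The stated drawback is visible here: the owner's key ring grows
    // linearly with the number of data-groups.
    int managedKeyCount() {
        return groupKeys.size();
    }
}
```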

User Revocation

User revocation is not a trivial task. The key problem is that revoked users still retain the keys issued earlier, and thus can still decrypt ciphertexts. Therefore, whenever a user is revoked, re-keying and re-encryption operations need to be executed by the data owner to prevent the revoked user from accessing future data. For example, when ABE is adopted to encrypt data, the work in [10] proposed requiring the data owner to periodically re-encrypt the data and redistribute new keys to authorized users. This approach is very inefficient due to the heavy workload it imposes on the data owner.
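A minimal sketch of this owner-driven revocation pattern (cf. [10]): the owner decrypts the stored data with the old key, re-encrypts it under a fresh key, and redistributes the new key only to the remaining users. The byte layout (12-byte GCM IV prepended to the ciphertext) matches the hypothetical sketch above; the cost of repeating this for all affected data is exactly the inefficiency pointed out here.

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Illustrative sketch: re-keying and re-encryption on user revocation.
public class RevocationDemo {
    // stored = 12-byte GCM IV || ciphertext (layout assumed for this sketch)
    static byte[] reEncryptOnRevoke(byte[] stored, SecretKey oldKey, SecretKey newKey)
            throws Exception {
        // 1. Decrypt with the old key, which revoked users may still hold.
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, oldKey,
                new GCMParameterSpec(128, Arrays.copyOfRange(stored, 0, 12)));
        byte[] plain = dec.doFinal(Arrays.copyOfRange(stored, 12, stored.length));

        // 2. Re-encrypt under the fresh key with a fresh random IV.
        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, newKey);
        byte[] ct = enc.doFinal(plain);
        byte[] iv = enc.getIV();

        // 3. Store IV || ciphertext back at the CSP; newKey is then
        //    redistributed only to the users who remain authorized.
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }
}
```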

Reputation System

Building a mutual trust relationship between users and the cloud platform is the key to implementing new access control methods in the cloud. There are many reputation management systems available nowadays [16]. Some work proposes composing a number of services together based on trust and selecting services based on reputation [17]. Lin et al. proposed a mutual trust based access control (MTBAC) model [36]. It takes both the user's behavioral trust and the cloud service's credibility into consideration, and trust relationships between users and CSPs are established by a mutual trust mechanism. However, most existing reputation management systems did not consider how to control personal data access based on reputation over the cloud. Access control at the CSP based on trust and reputation has seldom been studied.
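As a hypothetical illustration of reputation-gated control (the cited systems use more elaborate evaluation models), the sketch below aggregates credibility-weighted feedback ratings into a reputation score and grants access only above an owner-set threshold. All names and the weighting formula are ours.

```java
import java.util.List;

// Hypothetical sketch: reputation as a credibility-weighted average of
// feedback ratings in [0, 1], gating access against a threshold.
public class ReputationCheck {
    record Feedback(double rating, double raterCredibility) {}

    static double reputation(List<Feedback> feedback) {
        double weightedSum = 0, weightTotal = 0;
        for (Feedback f : feedback) {
            weightedSum += f.rating() * f.raterCredibility();
            weightTotal += f.raterCredibility();
        }
        return weightTotal == 0 ? 0 : weightedSum / weightTotal;
    }

    static boolean mayAccess(List<Feedback> feedback, double threshold) {
        return reputation(feedback) >= threshold;
    }

    public static void main(String[] args) {
        List<Feedback> fb = List.of(new Feedback(0.9, 1.0), new Feedback(0.4, 0.2));
        System.out.println(mayAccess(fb, 0.7)); // true: score is about 0.82
    }
}
```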

Computation Complexity

We analyze the computation complexity of the following operations: setup, CP-ABE user key generation, PRE user key generation, individual trust public key and secret key generation, re-encryption key generation and re-encryption, encryption and decryption, as well as revocation.

Chapter VII

Software Description

Introduction

A programming infrastructure created by Microsoft for building, deploying, and running applications and services that use .NET technologies, such as desktop applications and Web services.

The .NET Framework contains three major parts:

 The Common Language Runtime
 The Framework Class Library
 ASP.NET

.NET Framework

Microsoft started development of the .NET Framework in the late 1990s, originally under the name Next Generation Windows Services (NGWS). By late 2000 the first beta versions of .NET 1.0 were released. The .NET Framework (pronounced "dot net") is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It includes a large class library and provides language interoperability (each language can use code written in other languages) across several programming languages. Programs written for the .NET Framework execute in a software environment (as contrasted with a hardware environment) known as the Common Language Runtime (CLR), an application virtual machine that provides services such as security, memory management, and exception handling. The class library and the CLR together constitute the .NET Framework. The platform, introduced in 2002 and commonly called .NET, is similar in purpose to the Java EE platform; like Java's JVM runtime engine, .NET's runtime engine must be installed on the computer in order to run .NET applications.

.NET Programming Languages

.NET is similar to Java in that it uses an intermediate bytecode language that can be executed on any hardware platform that has a runtime engine. Unlike Java, however, it provides support for multiple programming languages. Microsoft's languages include C# (C Sharp), J# (J Sharp), Managed C++, JScript.NET and Visual Basic.NET. Other languages have been reengineered to target the standardized core of .NET, the Common Language Infrastructure (CLI).

.NET Versions

.NET Framework 1.0 introduced the Common Language Runtime (CLR) and .NET
Framework 2.0 added enhancements. .NET Framework 3.0 included the Windows programming
interface (API) originally known as "WinFX," which is backward compatible with the Win32
API. .NET Framework 3.0 added the following four subsystems and was installed with
Windows, starting with Vista. .NET Framework 3.5 added enhancements and introduced a
client-only version (see .NET Framework Client Profile). .NET Framework 4.0 added parallel
processing and language enhancements.

The User Interface (WPF)

Windows Presentation Foundation (WPF) provides the user interface. It takes advantage of
advanced 3D graphics found in many computers to display a transparent, glass-like appearance.

Messaging (WCF)

Windows Communication Foundation (WCF) enables applications to communicate with each other locally and remotely, integrating local messaging with Web services.

Workflow (WWF)

Windows Workflow Foundation (WWF) is used to integrate applications and automate tasks. Workflow structures can be defined in the XML Application Markup Language.

User Identity (WCS)

Windows CardSpace (WCS) provides an authentication system for logging into a Web
site and transferring personal information.

DESIGN FEATURES

Interoperability

Because computer systems commonly require interaction between newer and older applications,
the .NET Framework provides means to access functionality implemented in newer and older
programs that execute outside the .NET environment. Access to COM components is provided in
the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the
framework; access to other functionality is achieved using the P/Invoke feature.

Common Language Runtime engine

The Common Language Runtime (CLR) serves as the execution engine of the .NET Framework.
All .NET programs execute under the supervision of the CLR, guaranteeing certain properties
and behaviors in the areas of memory management, security, and exception handling.

Language independence

The .NET Framework introduces a Common Type System, or CTS. The CTS specification defines all possible datatypes and programming constructs supported by the CLR and how they may or may not interact with each other, conforming to the Common Language Infrastructure (CLI) specification. Because of this feature, the .NET Framework supports the exchange of types and object instances between libraries and applications written using any conforming .NET language.

Base Class Library

The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library of functionality available to all languages using the .NET Framework. The BCL provides classes that encapsulate a number of common functions, including file reading and writing, graphic rendering, database interaction, XML document manipulation, and so on. It consists of classes and interfaces of reusable types that integrate with the CLR (Common Language Runtime).

Simplified deployment

The .NET Framework includes design features and tools which help manage the installation of
computer software to ensure it does not interfere with previously installed software, and it
conforms to security requirements.

Security

The design addresses some of the vulnerabilities, such as buffer overflows, which have been
exploited by malicious software. Additionally, .NET provides a common security model for all
applications.

Portability

While Microsoft has never implemented the full framework on any system except Microsoft Windows, it has engineered the framework to be platform-agnostic, and cross-platform implementations are available for other operating systems. Microsoft submitted the specifications for the Common Language Infrastructure, the C# language and the C++/CLI language to both ECMA and ISO, making them available as official standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms.

Chapter VIII

System Testing

Software Testing

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects). The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.

Software testing is the process of evaluating a software item to detect differences between given input and expected output, and also to assess the features of a software item. Testing assesses the quality of the product. Software testing should be done during the development process; in other words, software testing is a verification and validation process.

Types of testing

There are different levels in the process of testing. Levels of testing include the different methodologies that can be used while conducting software testing. Following are the main levels of software testing:

 Functional Testing.
 Non-Functional Testing.

Step I: The determination of the functionality that the intended application is meant to perform.

Step II: The creation of test data based on the specifications of the application.

Step III: The output based on the test data and the specifications of the application.

Step IV: The writing of test scenarios and the execution of test cases.

Step V: The comparison of actual and expected results based on the executed test cases.

Functional Testing

Functional Testing of the software is conducted on a complete, integrated system to evaluate the
system's compliance with its specified requirements. There are five steps that are involved when
testing an application for functionality.

An effective testing practice will see the above steps applied to the testing policies of every
organization and hence it will make sure that the organization maintains the strictest of standards
when it comes to software quality.

Unit Testing

This type of testing is performed by the developers before the build is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team. The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality.
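As a small example of the practice described above, here is a JUnit 5 test sketch that isolates the hypothetical key-splitting helper from Chapter II and checks one requirement: splitting a key and recombining the parts must recover the original key.

```java
import static org.junit.jupiter.api.Assertions.assertArrayEquals;

import org.junit.jupiter.api.Test;

// Unit test sketch for the illustrative KeySplitDemo helper shown earlier.
public class KeySplitDemoTest {
    @Test
    void splitThenCombineRecoversOriginalKey() {
        byte[] key = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
        byte[][] parts = KeySplitDemo.split(key, 3);
        assertArrayEquals(key, KeySplitDemo.combine(parts));
    }
}
```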

Limitations of Unit Testing

Testing cannot catch each and every bug in an application: it is impossible to evaluate every execution path in every software application, and the same is the case with unit testing.

There is a limit to the number of scenarios and test data that a developer can use to verify the source code. After the developer has exhausted all options, there is no choice but to stop unit testing and merge the code segment with other units.

Integration Testing

Integration testing is the testing of combined parts of an application to determine whether they function correctly together. There are two methods of doing integration testing: bottom-up integration testing and top-down integration testing.

1. Bottom-up integration: This testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.

2. Top-down integration: In this testing, the highest-level modules are tested first and progressively lower-level modules are tested after that.

In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing. The process concludes with multiple tests of the complete application, preferably in scenarios designed to mimic those it will encounter in customers' computers, systems and networks.

System Testing

This is the next level of testing and tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets the specified quality standards. This type of testing is performed by a specialized testing team. System testing is important because of the following reasons:

 System testing is the first step in the software development life cycle where the application is tested as a whole.

 The application is tested thoroughly to verify that it meets the functional and technical specifications.

 The application is tested in an environment which is very close to the production environment where the application will be deployed.

 System testing enables us to test, verify and validate both the business requirements as well as the application's architecture.

Regression Testing

Whenever a change is made in a software application, it is quite possible that other areas within the application have been affected by this change. Regression testing verifies that a fixed bug has not resulted in another functionality or business rule violation. The intent of regression testing is to ensure that a change, such as a bug fix, does not result in another fault being uncovered in the application. Regression testing is important because of the following reasons:

 It minimizes gaps in testing when an application with changes has to be tested.

 It verifies that the changes made did not affect any other area of the application.

 It mitigates risks when regression testing is performed on the application.

 It increases test coverage without compromising timelines.

 It increases speed to market the product.

Acceptance Testing

This is arguably the most important type of testing, as it is conducted by the Quality Assurance Team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.

More ideas will be shared about the application, and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are intended not only to point out simple spelling mistakes, cosmetic errors or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors in the application.

By performing acceptance tests on an application, the testing team will deduce how the application will perform in production. There are also legal and contractual requirements for acceptance of the system.

Alpha Testing

This test is the first stage of testing and will be performed amongst the teams (developer and QA
teams). Unit testing, integration testing and system testing when combined are known as alpha
testing. During this phase, the following will be tested in the application:

 Spelling Mistakes

 Broken Links

 Cloudy Directions

 The Application will be tested on machines with the lowest specification to test loading
times and any latency problems.

Beta Testing

This test is performed after Alpha testing has been successfully performed. In beta testing a
sample of the intended audience tests the application. Beta testing is also known as pre-release
testing. Beta test versions of software are ideally distributed to a wide audience on the Web,
partly to give the program a "real-world" test and partly to provide a preview of the next release.
In this phase the audience will be testing the following:

 Users will install, run the application and send their feedback to the project team.

 Typographical errors, confusing application flow, and even crashes.

 After getting the feedback, the project team can fix the problems before releasing the software to the actual users.

 The more issues you fix that solve real user problems, the higher the quality of your
application will be.

 Having a higher-quality application when you release to the general public will increase
customer satisfaction.

Chapter IX

CONCLUSION

In this paper, we proposed a scheme to control cloud data access based on trust and reputation. The scheme incorporates a trust/reputation management framework for securing cloud computing by applying ABE, PRE and a reputation-based revocation mechanism. Our scheme can flexibly control cloud data access based on trust and reputation in order to support various access strategies and scenarios. Meanwhile, it also achieves low communication and computation costs. The cloud service can be secured automatically, since the related cryptographic keys can be automatically generated, issued and managed based on trust evaluation and/or reputation generation. We formally proved the security of the proposed scheme based on the security of ABE and PRE. Extensive analysis, comparison with existing work, and scheme implementation further show that our scheme is highly efficient, very flexible and provably secure under the existing security model.

FUTURE ENHANCEMENT

Regarding the future work, we plan to apply our scheme in real applications of cloud data
protection in eHealth services and Internet of Things.

References

[1] R. Chow, et al., “Controlling data in the cloud: outsourcing computation without outsourcing control”, Proc. of the ACM Workshop on Cloud Computing Security, 2009, pp. 85–90.

[2] S. Kamara, K. Lauter, “Cryptographic cloud storage”, Proc. of Financial Cryptography and Data Security (FC), 2010, pp. 136–149.

[3] Q. Liu, C. Tan, J. Wu, G. Wang, “Efficient information retrieval for ranked queries in cost-effective cloud environments”, Proc. of INFOCOM, 2012, pp. 2581–2585.

[4] M. Kallahalla, et al., “Plutus: scalable secure file sharing on untrusted storage”, Proc. of the USENIX Conference on File and Storage Technologies (FAST), 2003, pp. 29–42.

[5] E. Goh, H. Shacham, N. Modadugu, D. Boneh, “Sirius: securing remote untrusted storage”, Proc. of NDSS, 2003, pp. 131–145.

[6] J. Bethencourt, A. Sahai, B. Waters, “Ciphertext-policy attribute based encryption”, Proc. of IEEE S&P, 2007, pp. 321–334.

[7] V. Goyal, O. Pandey, A. Sahai, B. Waters, “Attribute-based encryption for fine-grained access control of encrypted data”, Proc. of the 13th ACM CCS, 2006, pp. 89–98.

[8] S. Muller, S. Katzenbeisser, C. Eckert, “Distributed attribute-based encryption”, Proc. of the 11th Annual Int. Conf. on Information Security and Cryptology, 2008, pp. 20–36.

[9] A. Sahai, B. Waters, “Fuzzy identity-based encryption”, Proc. of the 24th International Conference on the Theory and Application of Cryptographic Techniques, 2005, pp. 457–473.

[10] M. Pirretti, P. Traynor, P. McDaniel, B. Waters, “Secure attribute based systems”, Journal of Computer Security, vol. 18, no. 5, pp. 799–837, 2010.

[11] M. Blaze, G. Bleumer, M. Strauss, “Divertible protocols and atomic proxy cryptography”, Proc. of EUROCRYPT, 1998, pp. 127–144.

[12] M. Green, G. Ateniese, “Identity-based proxy re-encryption”, Proc. of ACNS, 2007, pp. 288–306.

[13] S. Yu, C. Wang, K. Ren, W. Lou, “Achieving secure, scalable, and fine-grained data access control in cloud computing”, Proc. of IEEE INFOCOM, 2010, pp. 534–542.

[14] G. Wang, Q. Liu, J. Wu, M. Guo, “Hierarchical attribute-based encryption and scalable user revocation for sharing data in cloud servers”, Computers & Security, vol. 30, no. 5, pp. 320–331, 2011.

[15] S. Yu, C. Wang, K. Ren, W. Lou, “Attribute based data sharing with attribute revocation”, Proc. of the ACM ASIACCS, 2010, pp. 261–270.

[16] Z. Yan (ed.), Trust Modeling and Management in Digital Environments: From Social Concept to System Development, IGI Global, Hershey, Pennsylvania, 2010.

[17] Z. Yan, “A comprehensive trust model for component software”, Proc. of IEEE SecPerU’08, Italy, 2008, pp. 1–6.

[18] G. Wang, Q. Liu, J. Wu, “Hierarchical attribute-based encryption for fine-grained access control in cloud storage services”, Proc. of the 17th ACM CCS, 2010, pp. 735–737.

[19] M. Zhou, Y. Mu, W. Susilo, J. Yan, “Piracy-preserved access control for cloud computing”, Proc. of IEEE TrustCom’11, 2011, pp. 83–90.

[20] G. Ateniese, K. Fu, M. Green, S. Hohenberger, “Improved proxy re-encryption schemes with applications to secure distributed storage”, Proc. of the 12th Annual Network and Distributed System Security Symposium (NDSS), 2005, pp. 29–43.

[21] Z. Yan, M. Wang, V. Niemi, R. Kantola, “Secure pervasive social networking based on multi-dimensional trust levels”, Proc. of IEEE CNS 2013, 2013, pp. 100–108.

[22] Z. Wan, J. Liu, R.H. Deng, “HASBE: a hierarchical attribute-based solution for flexible and scalable access control in cloud computing”, IEEE Trans. on Information Forensics & Security, vol. 7, no. 2, pp. 743–754, 2012.
