
A PROJECT REPORT ON

ANONYMOUS AND TRACEABLE GROUP DATA SHARING IN CLOUD


COMPUTING

Submitted to OSMANIA UNIVERSITY in


partial fulfillment for the Award of the Degree of

M.C.A
(2017-20)

By

Mr. CHEPURI SRAVAN KUMAR

(1053-17-862-030)

Department of Computer Science


A.V. College P.G Centre
Gaganmahal, Hyderabad.
(Affiliated to Osmania University)
Tel: 040-65760591
Fax: 040-27610241
M.C.A
POST GRADUATE CENTRE
A.V. College of Arts Science & Commerce
GAGANMAHAL, HYDERABAD -500 029.
(Affiliated to Osmania University)
__________________________________________________________

DATE:-

CERTIFICATE

This is to certify that the project work entitled “ANONYMOUS AND TRACEABLE GROUP DATA SHARING IN CLOUD COMPUTING” is being submitted by Mr. CHEPURI SRAVAN KUMAR, bearing H.T. No. 1053-17-862-030, as a part of his curriculum in partial fulfillment for the award of M.C.A in A.V. College P.G. Centre for the academic year 2017-20. This has not been submitted to any other University or Institution for the Award of any Degree.

H.O.D External Examiner


(N. Bhaskara Rao)
ACKNOWLEDGEMENT

I would like to express my deep gratitude and respect to all those people behind
the scene who guided, inspired and helped me in the completion of this project
work.

I express my sincere thanks to B & I Tech Solutions Pvt. Ltd., Hyderabad, for giving me the opportunity to carry out this project work and to sharpen my knowledge and improve my skills. I wish to place on record my deep sense of gratitude towards my project guide Mr. VARMA, B & I Tech Solutions Pvt. Ltd., Hyderabad, for his constant motivation and valuable help throughout the project work.

I wish to express my heartfelt thanks to my Internal Guide Mr. T. Vamshi Krishna Reddy and HOD, Mr. N. Bhaskar Rao, for their valuable suggestions and advice throughout the course, and to Prof. P. GOPI KRISHNA, Director of A.V. College P.G. Centre, for his indispensable support and encouragement. I also extend my thanks to the other faculty and supporting staff for their co-operation during my course.

This acknowledgement would be incomplete without a salute to my parents; I am greatly indebted to my parents and family members for their tireless support in all my endeavors, without whose sustained support I could not have made this career in computer science.

Last but not the least, I would like to thank my friends for being a constant source of encouragement throughout my career.

Mr. CHEPURI SRAVAN KUMAR (1053-17-862-030)


DECLARATION

I hereby declare that the project work entitled “ANONYMOUS AND TRACEABLE GROUP DATA SHARING IN CLOUD COMPUTING” has been developed by me under the supervision of Mr. VARMA, B & I Tech Solutions Pvt. Ltd., Hyderabad, and submitted to Osmania University for the award of the degree of M.C.A.

The work done is original and has not been submitted in part or in full to any other University or Institution for the award of any Degree.

Mr. CHEPURI SRAVAN KUMAR

(1053-17-862-030)
INDEX

CONTENTS

1. INTRODUCTION
   1.1 Introduction
   1.2 Existing System
   1.3 Proposed System

2. REQUIREMENTS AND SPECIFICATIONS
   2.1 Hardware Requirements
   2.2 Software Requirements

3. SYSTEM FEASIBILITY
   3.1 Economical Feasibility
   3.2 Technical Feasibility
   3.3 Social Feasibility

4. SYSTEM ANALYSIS

5. SYSTEM DESIGN
   5.1 UML Diagrams

6. IMPLEMENTATION AND SAMPLE CODE

7. SCREEN SHOTS

8. TESTING AND VALIDATION

9. CONCLUSION

10. BIBLIOGRAPHY AND REFERENCES


ANONYMOUS AND TRACEABLE GROUP DATA SHARING IN
CLOUD COMPUTING

Abstract:

Group data sharing in cloud environments has become a hot topic in


recent decades. With the popularity of cloud computing, how to achieve
secure and efficient data sharing in cloud environments is an urgent
problem to be solved. In addition, how to achieve both anonymity and
traceability is also a challenge in the cloud for data sharing. This paper
focuses on enabling data sharing and storage for the same group in the
cloud with high security and efficiency in an anonymous manner. By
leveraging the key agreement and the group signature, a novel traceable
group data sharing scheme is proposed to support anonymous multiple
users in public clouds. On the one hand, group members can
communicate anonymously with respect to the group signature, and the
real identities of members can be traced if necessary. On the other hand,
a common conference key is derived based on the key agreement to
enable group members to share and store their data securely. Note that a
symmetric balanced incomplete block design is utilized for key
generation, which substantially reduces the burden on members to derive
a common conference key. Both theoretical and experimental analyses
demonstrate that the proposed scheme is secure and efficient for group
data sharing in cloud computing.

INTRODUCTION:

Various forms of data can flow freely with respect to the cloud storage service, for instance in social networks, video editing and home networks. However, little attention has been given to
group data sharing in the cloud, which refers to the situation in which
multiple users want to achieve information sharing in a group manner
for cooperative purposes. There are two ways to share data in cloud storage. The first is a one-to-many pattern, which refers to the scenario where one client authorizes access to his/her data for many clients. The second is a many-to-many pattern, which refers to a situation in which many clients in the same group authorize access to their data for many clients at the same time.

Our goal is to achieve anonymous data sharing in a cloud computing environment in a group manner with high security and efficiency. First, a desired scheme should not only support the participation of any number of users but also support efficient key and data updating. Second, the confidentiality of the outsourced data should be preserved. Since the uploaded data may be sensitive and confidential business plans or scientific research achievements, data leakage may cause significant losses or serious consequences. Without the guarantee of confidentiality, users would be unwilling to share data in the cloud. Third, the way that data are shared should follow the many-to-many pattern, which makes information sharing more convenient and efficient.

EXISTING SYSTEM:

 Previous works proposed a proxy re-encryption scheme to manage


distributed file systems that attempt to achieve secure data storage
in the semi-trusted party.

 Other works attempted to protect the outsourced data from attackers and revoked malicious users. With respect to the key-policy attribute-based encryption (KP-ABE) technique, it provides effective access control with fine-grainedness, scalability and data confidentiality simultaneously.
DISADVANTAGES:

 Although the proxy re-encryption scheme provides a stronger notion of security compared with [14], it is still vulnerable to collusion attacks and revoked malicious users.

 In the KP-ABE scheme, the outsourced data can be decrypted if and only if the attributes of the data satisfy the access structure. However, the scheme is designed only for a general one-to-many communication system, which makes it inapplicable to the many-to-many pattern.

PROPOSED SYSTEM:

 To address the above challenges, we present a novel traceable group data sharing scheme for cloud computing with traceability and anonymity. The main contributions of this paper include the following.

 Arbitrary Number of Users and Dynamic Changes Are Supported: To address an arbitrary number of users in real applications, we introduce the concept of the volunteer, which is used to satisfy the specific structure of the SBIBD (symmetric balanced incomplete block design) such that the number of users can be arbitrary rather than restricted by the value of a prime k. Moreover, in real cloud storage applications, users may join or leave freely. The proposed scheme can efficiently support dynamic changes of users with respect to the access control and the many-to-many data sharing pattern.

 The Confidentiality of the Outsourced Data Is Preserved: In our scheme, the outsourced data are encrypted with a common conference key prior to being uploaded. Attackers or the cloud, having no access to the common conference key, cannot reveal any information about the data stored in the cloud. The security of the encryption key is based on elliptic curve cryptography (ECC) and the bilinear Diffie-Hellman (BDH) assumption. Consequently, users can safely exchange data with others under the semi-trusted cloud (a minimal encryption sketch is given after this list).

 Traceability Under an Anonymous Environment Is


Achieved by Our Scheme: With respect to the key agreement,
every user in the cloud is able to freely share data with other users.
Moreover, users can exchange information in the cloud
anonymously with respect to the group signature. Note that the
group manager can reveal the real identity of the data owner based
on the group signature bound with the data when a dispute occurs.

 Authentication Services and Fault-Tolerant Property Are Provided: During the key agreement, each member exchanges messages along with the group signature to declare that their identity is valid. Furthermore, the uploaded data file is bound with the group signature such that the cloud can verify the validity of the file. In addition, the fault-tolerant property is supported in our scheme, which guarantees that malicious users can be identified and removed such that a unique common conference key can be derived.
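To make the confidentiality contribution above concrete, the following is a minimal sketch of encrypting a file with the common conference key before it is uploaded. The report does not fix the symmetric cipher, so AES in Java (the project's coding language) is assumed here purely for illustration; the key bytes would come from the SBIBD-based key agreement.

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class ConferenceKeyCipher {

    // Encrypts the outsourced file with the shared conference key before
    // it is sent to the cloud. A 16-byte (128-bit) key is assumed to be
    // the output of the group key agreement.
    public static String encryptForUpload(byte[] conferenceKey, String filePath) throws Exception {
        SecretKeySpec key = new SecretKeySpec(conferenceKey, "AES");
        Cipher cipher = Cipher.getInstance("AES"); // defaults to ECB/PKCS5Padding
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] plain = Files.readAllBytes(Paths.get(filePath));
        // Base64 so the ciphertext can be stored or transmitted as text
        return Base64.getEncoder().encodeToString(cipher.doFinal(plain));
    }
}

Decryption on a member's side is symmetric: the same key initialised in DECRYPT_MODE recovers the plaintext, so the semi-trusted cloud, which never holds the conference key, learns nothing about the stored data. A production implementation would prefer an authenticated mode such as AES/GCM with a fresh IV per file; the default mode is kept here only to keep the sketch short.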

ADVANTAGES:

 The proposed approach can generate a common conference key


efficiently, which can be used to protect the security of the
outsourced data and support secure group data sharing in the cloud
at the same time.

 Authentication services and efficient access control are achieved


with respect to the group signature technique. In addition, our
scheme can support the traceability of user identity in an
anonymous environment.
ARCHITECTURE
2. SYSTEM REQUIREMENTS

HARDWARE REQUIREMENTS:

 System : Pentium Dual Core.


 Hard Disk : 120 GB.
 Monitor : 15’’ LED
 Input Devices : Keyboard, Mouse
 RAM : 1 GB.

SOFTWARE REQUIREMENTS:

 Operating system : Windows 7.


 Coding Language : JAVA/J2EE
 Tool : Netbeans 7.4
 Database : MYSQL
3. FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some

cost estimates. During system analysis the feasibility study of the

proposed system is to be carried out. This is to ensure that the proposed

system is not a burden to the company. For feasibility analysis, some

understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY

 TECHNICAL FEASIBILITY

 SOCIAL FEASIBILITY

3.1 ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system

will have on the organization. The amount of fund that the company can

pour into the research and development of the system is limited. The

expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used

are freely available. Only the customized products had to be purchased.

3.2 TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the

technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must

have a modest requirement, as only minimal or null changes are required

for implementing this system.

3.3 SOCIAL FEASIBILITY

This aspect of the study is to check the level of acceptance of the system by

the user. This includes the process of training the user to use the system

efficiently. The user must not feel threatened by the system, instead must

accept it as a necessity. The level of acceptance by the users solely

depends on the methods that are employed to educate the user about the

system and to make him familiar with it. His level of confidence must be
raised so that he is also able to make some constructive criticism, which

is welcomed, as he is the final user of the system.

4. SYSTEM ANALYSIS

4.1 THE STUDY OF THE SYSTEM
• To conduct studies and analyses of an operational and
technological nature, and
• To promote the exchange and development of methods and tools
for operational analysis as applied to defense problems.

Logical design
The logical design of a system pertains to an abstract
representation of the data flows, inputs and outputs of the system. This is
often conducted via modeling, using an over-abstract (and sometimes
graphical) model of the actual system. In the context of systems design, the logical design includes ER diagrams, i.e., Entity Relationship Diagrams.
Physical design
The physical design relates to the actual input and output processes
of the system. This is laid down in terms of how data is input into a
system, how it is verified / authenticated, how it is processed, and how it
is displayed as output. In Physical design, following requirements about
the system are decided.
• Input requirement,
• Output requirements,
• Storage requirements,
• Processing Requirements,
• System control and backup or recovery.
Put another way, the physical portion of systems design can generally be
broken down into three sub-tasks:
• User Interface Design
• Data Design
• Process Design
User Interface Design is concerned with how users add
information to the system and with how the system presents information
back to them. Data Design is concerned with how the data is represented
and stored within the system. Finally, Process Design is concerned with
how data moves through the system, and with how and where it is
validated, secured and/or transformed as it flows into, through and out of
the system. At the end of the systems design phase, documentation
describing the three sub-tasks is produced and made available for use in
the next phase.

Physical design, in this context, does not refer to the tangible


physical design of an information system. To use an analogy, a personal
computer's physical design involves input via a keyboard, processing
within the CPU, and output via a monitor, printer, etc. It would not
concern the actual layout of the tangible hardware, which for a PC
would be a monitor, CPU, motherboard, hard drive, modems,
video/graphics cards, USB slots, etc. Rather, it involves a detailed design of the user and product database structure, the data processor and the control processor. The hardware/software specification is developed for the proposed system.

4.2 INPUT & OUTPUT REPRESENTATION


Input Design
The input design is the link between the information system and
the user. It comprises the specifications and procedures for data preparation, and the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of
input focuses on controlling the amount of input required, controlling the
errors, avoiding delay, avoiding extra steps and keeping the process
simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input Design considered the
following things:
• What data should be given as input?
• How the data should be arranged or coded?
• The dialog to guide the operating personnel in providing input.
• Methods for preparing input validations and steps to follow when errors occur.

Objectives
Input Design is the process of converting a user-oriented
description of the input into a computer-based system. This design is
important to avoid errors in the data input process and show the correct
direction to the management for getting correct information from the
computerized system.
It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all the data manipulations can be performed. It also provides record viewing facilities.
When the data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed so that the user is not left in confusion. Thus the objective of input design is to create an input layout that is easy to follow.
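As an illustration of the validity checks described above, the following is a hedged sketch of simple field validation in Java. The field names mirror the member registration data (email, mobile); the helper class itself is hypothetical and not part of the delivered code.

public class InputValidator {

    // Basic structural check for an e-mail address entered on the
    // member registration screen.
    public static boolean isValidEmail(String email) {
        return email != null && email.matches("^[\\w.+-]+@[\\w-]+\\.[\\w.]+$");
    }

    // Mobile numbers in the sample data are stored as 10 digits.
    public static boolean isValidMobile(String mobile) {
        return mobile != null && mobile.matches("^\\d{10}$");
    }

    public static void main(String[] args) {
        System.out.println(isValidEmail("mail.1000projects@gmail.com")); // true
        System.out.println(isValidMobile("9052016340"));                 // true
        System.out.println(isValidMobile("12345"));                      // false
    }
}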
Output Design
A quality output is one, which meets the requirements of the end
user and presents the information clearly. In any system results of
processing are communicated to the users and to other system through
outputs. In output design, it is determined how the information is to be displayed for immediate need and also as hard copy output. It is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and helps in decision-making.
• Designing computer output should proceed in an organized, well thought out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
• Select methods for presenting information.
• Create document, report, or other formats that contain information
produced by the system.
4.3 Java Technology
JDBC: Java Data Base Connectivity
In an effort to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC.
JDBC offers a generic SQL database access mechanism that provides a
consistent interface to a variety of RDBMSs. This consistent interface is
achieved through the use of “plug-in” database connectivity modules, or
drivers. If a database vendor wishes to have JDBC support, he or she
must provide the driver for each platform that the database and Java run
on.
To gain a wider acceptance of JDBC, Sun based JDBC’s
framework on ODBC. As you discovered earlier in this chapter, ODBC
has widespread support on a variety of platforms. Basing JDBC on
ODBC will allow vendors to bring JDBC drivers to market much faster
than developing a completely new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90
day public review that ended June 8, 1996. Because of user input, the
final JDBC v1.0 specification was released soon after.
The remainder of this section will cover enough information about
JDBC for you to know what it is about and how to use it effectively.
This is by no means a complete overview of JDBC. That would fill an
entire book.
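As a brief illustration of the JDBC pattern described above, the sketch below loads the MySQL Connector/J driver and opens a connection to the project database. The database and table names follow the sample code given later in this report; the host, user name and password are placeholders assumed for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DbDemo {
    public static void main(String[] args) throws Exception {
        // Load the vendor's "plug-in" driver (MySQL Connector/J)
        Class.forName("com.mysql.jdbc.Driver");

        // Open a connection through the generic JDBC API
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/anonymoustraceable", "root", "password");

        // Issue a plain SQL statement; JDBC passes it to the driver as-is
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT username FROM member");
        while (rs.next()) {
            System.out.println(rs.getString("username"));
        }
        rs.close();
        st.close();
        con.close();
    }
}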
JDBC Goals
Few software packages are designed without goals in mind. JDBC is no exception; its many goals drove the development of the API.
These goals, in conjunction with early reviewer feedback, have finalized
the JDBC class library into a solid framework for building database
applications in Java.
The goals that were set for JDBC are important. They will give you
some insight as to why certain classes and functionalities behave the
way they do. The eight design goals for JDBC are as follows:

• SQL Level API


The designers felt that their main goal was to define a SQL
interface for Java. Although not the lowest database interface level
possible, it is at a low enough level for higher-level tools and APIs to
be created. Conversely, it is at a high enough level for application
programmers to use it confidently. Attaining this goal allows for
future tool vendors to “generate” JDBC code and to hide many of
JDBC’s complexities from the end user.
• SQL Conformance
SQL syntax varies as you move from database vendor to database
vendor. In an effort to support a wide variety of vendors, JDBC will
allow any query statement to be passed through it to the underlying
database driver. This allows the connectivity module to handle non-
standard functionality in a manner that is suitable for its users.
Networking
TCP/IP stack
The TCP/IP stack is shorter than the OSI one. TCP is a connection-oriented protocol.

IP datagrams
The IP layer provides a connectionless and unreliable delivery
system. It considers each datagram independently of the others.
Any association between datagrams must be supplied by the
higher layers. The IP layer supplies a checksum that includes its
own header. The header includes the source and destination
addresses. The IP layer handles routing through an Internet. It is
also responsible for breaking up large datagrams into smaller ones
for transmission and reassembling them at the other end.
UDP
UDP is also connectionless and unreliable. What it adds to IP
is a checksum for the contents of the datagram and port numbers.
These are used to give a client/server model - see later.
TCP
TCP supplies logic to give a reliable connection-oriented
protocol above IP. It provides a virtual circuit that two processes
can use to communicate.
Internet addresses
In order to use a service, you must be able to find it. The
Internet uses an address scheme for machines so that they can be
located. The address is a 32 bit integer which gives the IP
address. This encodes a network ID and more addressing. The
network ID falls into various classes according to the size of the
network address.
Network address
Class A uses 8 bits for the network address with 24 bits left
over for other addressing. Class B uses 16 bit network
addressing. Class C uses 24 bit network addressing and class D
uses all 32.
Subnet address
Internally, the UNIX network is divided into sub networks.
Building 11 is currently on one sub network and uses 10-bit
addressing, allowing 1024 different hosts.
Host address
8 bits are finally used for host addresses within our subnet.
This places a limit of 256 machines that can be on the subnet.
Port addresses
A service exists on a host, and is identified by its port. This is
a 16 bit number. To send a message to a server, you send it to the
port for that service of the host that it is running on. This is not
location transparency! Certain of these ports are "well known".
Sockets
A socket is a data structure maintained by the system to handle
network connections. A socket is created using the call socket. It
returns an integer that is like a file descriptor. In fact, under
Windows, this handle can be used with the ReadFile and WriteFile functions.
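In Java, the project's implementation language, the same socket concept is exposed through java.net.Socket. A minimal client sketch follows; the host name, port and message are placeholders assumed for illustration.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class SocketDemo {
    public static void main(String[] args) throws Exception {
        // Connect to a service identified by its host and port
        try (Socket socket = new Socket("localhost", 9090)) {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));

            out.println("HELLO");              // send a request line
            System.out.println(in.readLine()); // read the server's reply
        }
    }
}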
Spiral Model

Advantages:

 Estimates (i.e. budget, schedule, etc.) become more realistic as work progresses, because important issues are discovered earlier.

 It is more able to cope with the changes that software development generally entails.

 Software engineers can get their hands in and start working on the core of a project earlier.

SDLC is nothing but Software Development Life Cycle. It is a standard


which is used by software industry to develop good software.

Stages in SDLC:

 Requirement Gathering
 Analysis
 Designing
 Coding
 Testing
 Maintenance

Requirements Gathering stage:


The requirements gathering process takes as its input the goals
identified in the high-level requirements section of the project plan. Each
goal will be refined into a set of one or more requirements. These
requirements define the major functions of the intended application,
define operational data areas and reference data areas, and define the
initial data entities. Major functions include critical processes to be
managed, as well as mission critical inputs, outputs and reports. A user
class hierarchy is developed and associated with these major functions,
data areas, and data entities. Each of these definitions is termed a
Requirement. Requirements are identified by unique requirement

identifiers and, at minimum, contain a requirement title and textual


description.

These requirements are fully described in the primary deliverables


for this stage: the Requirements Document and the Requirements
Traceability Matrix (RTM). The requirements document contains
complete descriptions of each requirement, including diagrams and
references to external documents as necessary. Note that detailed listings
of database tables and fields are not included in the requirements
document.

The title of each requirement is also placed into the first version of
the RTM, along with the title of each goal from the project plan. The
purpose of the RTM is to show that the product components developed
during each stage of the software development lifecycle are formally
connected to the components developed in prior stages.

In the requirements stage, the RTM consists of a list of high-level


requirements, or goals, by title, with a listing of associated requirements
for each goal, listed by requirement title. In this hierarchical listing, the
RTM shows that each requirement developed during this stage is
formally linked to a specific product goal. In this format, each
requirement can be traced to a specific product goal, hence the term
requirements traceability.

The outputs of the requirements definition stage include the


requirements document, the RTM, and an updated project plan.

 Feasibility study is all about identification of problems in a project.


 The number of staff required to handle a project is represented as Team Formation; in this case, modules or individual tasks will be assigned to employees who are working on that project.
 Project Specifications are all about representing the various possible inputs submitted to the server and the corresponding outputs, along with the reports maintained by the administrator.
Analysis Stage:
The planning stage establishes a bird's eye view of the intended
software product, and uses this to establish the basic project structure,
evaluate feasibility and risks associated with the project, and describe
appropriate management and technical approaches.

The most critical section of the project plan is a listing of high-level


product requirements, also referred to as goals. All of the software
product requirements to be developed during the requirements definition
stage flow from one or more of these goals. The minimum information
for each goal consists of a title and textual description, although
additional information and references to external documents may be
included. The outputs of the project planning stage are the configuration
management plan, the quality assurance plan, and the project plan and
schedule, with a detailed listing of scheduled activities for the upcoming
Requirements stage, and high-level estimates of effort for the remaining stages.

Designing Stage:

The design stage takes as its initial input the requirements identified
in the approved requirements document. For each requirement, a set of
one or more design elements will be produced as a result of interviews,
workshops, and/or prototype efforts. Design elements describe the
desired software features in detail, and generally include functional
hierarchy diagrams, screen layout diagrams, tables of business rules,
business process diagrams, pseudo code, and a complete entity-
relationship diagram with a full data dictionary. These design elements
are intended to describe the software in sufficient detail that skilled
programmers may develop the software with minimal additional input.
When the design document is finalized and accepted, the RTM is
updated to show that each design element is formally associated with a specific requirement. The outputs of the design stage are the design
document, an updated RTM, and an updated project plan.

Development (Coding) Stage:

The development stage takes as its primary input the design


elements described in the approved design document. For each design
element, a set of one or more software artifacts will be produced.
Software artifacts include but are not limited to menus, dialogs, data
management forms, data reporting formats, and specialized procedures
and functions. Appropriate test cases will be developed for each set of
functionally related software artifacts, and an online help system will be
developed to guide users in their interactions with the software.
The RTM will be updated to show that each developed artifact is
linked to a specific design element, and that each developed artifact has
one or more corresponding test case items. At this point, the RTM is in
its final configuration. The outputs of the development stage include a
fully functional set of software that satisfies the requirements and design
elements previously documented, an online help system that describes
the operation of the software, an implementation map that identifies the
primary code entry points for all major system functions, a test plan that
describes the test cases to be used to validate the correctness and
completeness of the software, an updated RTM, and an updated project
plan.
Integration & Test Stage:

During the integration and test stage, the software artifacts, online
help, and test data are migrated from the development environment to a
separate test environment. At this point, all test cases are run to verify
the correctness and completeness of the software. Successful execution
of the test suite confirms a robust and complete migration capability.
During this stage, reference data is finalized for production use and
production users are identified and linked to their appropriate roles. The
final reference data (or links to reference data source files) and
production user list are compiled into the Production Initiation Plan.

The outputs of the integration and test stage include an integrated set
of software, an online help system, an implementation map, a production
initiation plan that describes reference data and production users, an
acceptance plan which contains the final suite of test cases, and an
updated project plan.

Installation & Acceptance Test:

During the installation and acceptance stage, the software artifacts,


online help, and initial production data are loaded onto the production
server. At this point, all test cases are run to verify the correctness and
completeness of the software. Successful execution of the test suite is a
prerequisite to acceptance of the software by the customer.

After customer personnel have verified that the initial production data
load is correct and the test suite has been executed with satisfactory
results, the customer formally accepts the delivery of the software.
The primary outputs of the installation and acceptance stage include
a production application, a completed acceptance test suite, and a
memorandum of customer acceptance of the software. Finally, the PDR
enters the last of the actual labor data into the project schedule and locks
the project as a permanent project record. At this point the PDR "locks"
the project by archiving all software items, the implementation map, the
source code, and the documentation for future reference.

Maintenance:

The outer rectangle represents the maintenance of a project. The maintenance team will start with a requirement study and an understanding of the documentation; later, employees will be assigned work and will undergo training on their particular assigned category. For this life cycle there is no end; it continues on like an umbrella (there is no ending point to the umbrella sticks).
5. SYSTEM DESIGN

5.1 INTRODUCTION

Systems design is the process or art of defining the architecture,


components,

modules, interfaces, and data for a system to satisfy specified


requirements. One could see it as the application of systems theory to
product development.

There is some overlap and synergy with the disciplines of systems


analysis, systems architecture and systems engineering.
UML DIAGRAMS

1. Use Case
2. Sequence
3. Activity
4. Class

6. IMPLEMENTATION AND SAMPLE CODE


Enabling cloud storage auditing with key-exposure resistance

Cloud storage auditing is viewed as an important service to verify the


integrity of the data in public cloud. Current auditing protocols are all
based on the assumption that the client's secret key for auditing is
absolutely secure. However, such an assumption may not always hold, due to the possibly weak sense of security and/or low security settings at
the client. If such a secret key for auditing is exposed, most of the
current auditing protocols would inevitably become unable to work. In
this paper, we focus on this new aspect of cloud storage auditing. We
investigate how to reduce the damage of the client's key exposure in
cloud storage auditing, and give the first practical solution for this new
problem setting. We formalize the definition and the security model of
auditing protocol with key-exposure resilience and propose such a
protocol. In our design, we employ the binary tree structure and the
preorder traversal technique to update the secret keys for the client. We
also develop a novel authenticator construction to support the forward
security and the property of blockless verifiability. The security proof
and the performance analysis show that our proposed protocol is secure
and efficient.

New publicly verifiable databases with efficient updates


The notion of verifiable database (VDB) enables a resource-constrained
client to securely outsource a very large database to an untrusted server
so that it could later retrieve a database record and update it by assigning
a new value. Also, any attempt by the server to tamper with the data will
be detected by the client. Very recently, Catalano and Fiore [17]
proposed an elegant framework to build efficient VDB that supports
public verifiability from a new primitive named vector commitment. In
this paper, we point out that Catalano-Fiore's VDB framework from vector commitment is vulnerable to the so-called forward automatic update (FAU) attack. Besides, we propose a new VDB framework from vector commitment based on the idea of commitment binding. The construction is not only publicly verifiable but also secure under the FAU attack.
Furthermore, we prove that our construction can achieve the desired
security properties.

New algorithms for secure outsourcing of modular exponentiations


With the rapid development of cloud services, the techniques for
securely outsourcing the prohibitively expensive computations to
untrusted servers are getting more and more attention in the scientific
community. Exponentiations modulo a large prime have been considered
the most expensive operations in discrete-logarithm-based cryptographic
protocols, and they may be burdensome for the resource-limited devices
such as RFID tags or smartcards. Therefore, it is important to present an
efficient method to securely outsource such operations to (untrusted)
cloud servers. In this paper, we propose a new secure outsourcing
algorithm for (variable-exponent, variable-base) exponentiation modulo
a prime in the two untrusted program model. Compared with the state-
of-the-art algorithm, the proposed algorithm is superior in both
efficiency and checkability. Based on this algorithm, we show how to
achieve outsource-secure Cramer-Shoup encryptions and Schnorr
signatures. We then propose the first efficient outsource-secure
algorithm for simultaneous modular exponentiations. Finally, we
provide the experimental evaluation that demonstrates the efficiency and
effectiveness of the proposed outsourcing algorithms and schemes.

Secure attribute-based data sharing for resource-limited users in


cloud computing
Data sharing becomes an exceptionally attractive service supplied by
cloud computing platforms because of its convenience and economy. As
a potential technique for realizing fine-grained data sharing, attribute-
based encryption (ABE) has drawn wide attention. However, most of the existing ABE solutions suffer from the disadvantages of high computation overhead and weak data security, which has severely impeded resource-constrained mobile devices from customizing the service.
The problem of simultaneously achieving fine-grainedness, high-
efficiency on the data owner's side, and standard data confidentiality of
cloud data sharing actually still remains unresolved. This paper
addresses this challenging issue by proposing a new attribute-based data
sharing scheme suitable for resource-limited mobile users in cloud
computing. The proposed scheme eliminates a majority of the
computation task by adding system public parameters besides moving
partial encryption computation offline. In addition, a public ciphertext
test phase is performed before the decryption phase, which eliminates
most of computation overhead due to illegitimate ciphertexts. For the
sake of data security, a Chameleon hash function is used to generate an
immediate ciphertext, which will be blinded by the offline ciphertexts to
obtain the final online ciphertexts. The proposed scheme is proven
secure against adaptively chosen-ciphertext attacks, which is widely
recognized as a standard security notion. Extensive performance analysis
indicates that the proposed scheme is secure and efficient.

Block design-based key agreement for group data sharing in cloud


Computing

Data sharing in cloud computing enables multiple participants to freely


share the group data, which improves the efficiency of work in
cooperative environments. However, how to ensure the security of data
sharing within a group and how to efficiently share the outsourced data
in a group manner are formidable challenges. Note that key agreement
protocols have played a very important role in secure and efficient group
data sharing in cloud computing. In this paper, by taking advantage of
the symmetric balanced incomplete block design (SBIBD), we present a
novel block design-based key agreement protocol that supports multiple
participants, which can flexibly extend the number of participants in a
cloud environment according to the structure of block design. Based on
the proposed group data sharing model, we present general formulas for
generating the common conference key K for multiple participants. Note
that by benefiting from the (v, k+1, 1)-block design, the computational
complexity of the proposed protocol linearly increases with the number
of participants and the communication complexity is greatly reduced. In
addition, the fault tolerance property of our protocol enables the group
data sharing in cloud computing to withstand different key attacks,
which is similar to Yi's protocol.

SAMPLE CODE

CREATE DATABASE /*!32312 IF NOT


EXISTS*/`anonymoustraceable` /*!40100 DEFAULT CHARACTER
SET latin1 */;

USE `anonymoustraceable`;

/*Table structure for table `cloud` */

DROP TABLE IF EXISTS `cloud`;

CREATE TABLE `cloud` (


`username` varchar(50) DEFAULT NULL,
`password` varchar(50) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*Data for the table `cloud` */

insert into `cloud`(`username`,`password`) values ('cloud','cloud');

/*Table structure for table `grp` */

DROP TABLE IF EXISTS `grp`;

CREATE TABLE `grp` (


`id` int(11) NOT NULL AUTO_INCREMENT,
`groupname` varchar(100) DEFAULT NULL,
`groupkey` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT
CHARSET=latin1;

/*Data for the table `grp` */

insert into `grp`(`id`,`groupname`,`groupkey`) values


(7,'Student','8LmhUivcz1CkhFrgEwhJhA==');

/*Table structure for table `manager` */

DROP TABLE IF EXISTS `manager`;

CREATE TABLE `manager` (


`username` varchar(50) DEFAULT NULL,
`password` varchar(50) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

/*Data for the table `manager` */

insert into `manager`(`username`,`password`) values


('manager','manager');

/*Table structure for table `member` */


DROP TABLE IF EXISTS `member`;

CREATE TABLE `member` (


`username` varchar(100) DEFAULT NULL,
`password` varchar(100) DEFAULT NULL,
`email` varchar(100) NOT NULL,
`dob` varchar(100) DEFAULT NULL,
`gender` varchar(100) DEFAULT NULL,
`address` varchar(100) DEFAULT NULL,
`mobile` varchar(100) DEFAULT NULL,
PRIMARY KEY (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

/*Data for the table `member` */

insert into `member`(`username`,`password`,`email`,`dob`,`gender`,`address`,`mobile`) values
('nikith','123','mail.1000projects@gmail.com','1994-05-04','MALE','hyd','9052016340'),
('shiva','123','mail1.1000projects@gmail.com','1990-05-14','MALE','100projects','9052016340');

/*Table structure for table `memberfile` */

DROP TABLE IF EXISTS `memberfile`;

CREATE TABLE `memberfile` (


`id` int(11) NOT NULL AUTO_INCREMENT,
`user` varchar(50) DEFAULT NULL,
`groupname` varchar(50) DEFAULT NULL,
`filekey` varchar(50) DEFAULT NULL,
`status` varchar(50) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT
CHARSET=latin1;

/*Data for the table `memberfile` */


insert into `memberfile`(`id`,`user`,`groupname`,`filekey`,`status`) values
(4,'mail.1000projects@gmail.com','Student','8LmhUivcz1CkhFrgEwhJhA==','File Key sent'),
(5,'mail1.1000projects@gmail.com','Student','8LmhUivcz1CkhFrgEwhJhA==','File Key sent');

/*Table structure for table `membergroup` */

DROP TABLE IF EXISTS `membergroup`;

CREATE TABLE `membergroup` (


`id` int(11) NOT NULL AUTO_INCREMENT,
`email` varchar(50) DEFAULT NULL,
`groupname` varchar(50) DEFAULT NULL,
`loginkey` varchar(50) DEFAULT NULL,
`filekey` varchar(50) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT
CHARSET=latin1;

/*Data for the table `membergroup` */

insert into `membergroup`(`id`,`email`,`groupname`,`loginkey`,`filekey`) values
(4,'mail.1000projects@gmail.com','Student','Student7935','8LmhUivcz1CkhFrgEwhJhA=='),
(5,'mail1.1000projects@gmail.com','Student','Student7681','8LmhUivcz1CkhFrgEwhJhA=='),
(6,'mail.1000projects@gmail.com','Student','Pending','Pending');

/*Table structure for table `memberrequest` */

DROP TABLE IF EXISTS `memberrequest`;

CREATE TABLE `memberrequest` (


`id` int(11) DEFAULT NULL,
`user` varchar(50) DEFAULT NULL,
`groupname` varchar(50) DEFAULT NULL,
`username` varchar(50) DEFAULT NULL,
`email` varchar(50) DEFAULT NULL,
`loginkey` varchar(50) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

/*Data for the table `memberrequest` */

insert into `memberrequest`(`id`,`user`,`groupname`,`username`,`email`,`loginkey`) values
(5,'mail.1000projects@gmail.com','Student','Pending','mail1.1000projects@gmail.com','Student7681');

/*Table structure for table `messages` */

DROP TABLE IF EXISTS `messages`;
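To connect the schema above to the Java/J2EE layer described earlier, here is a hedged sketch of how the application might look up a group's stored key over JDBC. The table and column names follow the dump above; the DAO-style method is an illustrative assumption, not the delivered implementation.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class GroupDao {

    // Returns the stored key for a group name, or null if the group
    // does not exist in the `grp` table.
    public static String findGroupKey(Connection con, String groupName) throws Exception {
        String sql = "SELECT groupkey FROM grp WHERE groupname = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, groupName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("groupkey") : null;
            }
        }
    }
}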


7. SCREEN SHOTS

Fig: Home Page


Fig: Member Registration
Fig: Member Login
Fig: Select Group and Send Request
Fig: Select Group and Login
Fig: Verify Group Key
Fig: Member Home
Fig: File Request
Fig: File Upload
Fig: Download Files
Fig: Member File Request
Fig: Group Chat
Fig: Group Manager Login
Fig: Manager Home
Fig: Add Group
Fig: View and Delete Groups
Fig: Member Login Requests
Fig: Member File Upload and Download Requests
Fig: Member Details Request
Fig: Cloud Login
Fig: Cloud Home
Fig: View and Delete Groups
Fig: View Files
Fig: Member Details
8. TESTING AND VALIDATION

The purpose of testing is to discover errors. Testing is the


process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test. Each test type addresses a specific testing requirement.

TYPES OF TESTS
Unit testing:
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
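For instance, a unit test for this project could exercise the field-validation helper sketched in the input design section. JUnit is assumed as the test framework; neither the framework choice nor the InputValidator class is mandated by the report.

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class InputValidatorTest {

    @Test
    public void acceptsWellFormedEmail() {
        assertTrue(InputValidator.isValidEmail("mail.1000projects@gmail.com"));
    }

    @Test
    public void rejectsMalformedEmail() {
        assertFalse(InputValidator.isValidEmail("not-an-email"));
    }

    @Test
    public void rejectsShortMobileNumber() {
        assertFalse(InputValidator.isValidMobile("12345"));
    }
}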
Integration testing:
Integration tests are designed to test integrated software
components to determine if they actually run as one program. Testing is
event driven and is more concerned with the basic outcome of screens or
fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing
is specifically aimed at exposing the problems that arise from the
combination of components.

Functional testing:
Functional tests provide systematic demonstrations that functions
tested are available as specified by the business and technical
requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be


rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be


exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.


Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are
identified and the effective value of current tests is determined.

LEVELS OF TESTING:
Code testing:
This examines the logic of the program. For example, the logic for
updating various sample data and with the sample files and directories
were tested and verified.
Specification Testing:
This executes the specification stating what the program should do and how it should perform under various conditions. Test cases for various situations and combinations of conditions in all the modules are tested.
Unit testing:
In the unit testing we test each module individually and integrate
with the overall system. Unit testing focuses verification efforts on the
smallest unit of software design in the module. This is also known as
module testing. The module of the system is tested separately. This
testing is carried out during the programming stage itself. In this testing step, each module is found to work satisfactorily as regards the expected output from the module. There are also some validation checks for fields. For example, a validation check is performed on the user input to verify the validity of the data entered, which makes it easy to find errors and debug the system.
Each Module can be tested using the following two Strategies:
• Black Box Testing
• White Box Testing
BLACK BOX TESTING
What is Black Box Testing?

Black box testing is a software testing technique in which the functionality of the software under test (SUT) is tested without looking at the internal code structure, implementation details or knowledge of internal paths of the software. This type of testing is based entirely on the software requirements and specifications.
In black box testing we just focus on the inputs and outputs of the software system without bothering about the internal workings of the software program.

                        
The black box here can be any software system you want to test, for example an operating system like Windows, a website like Google, a database like Oracle, or even your own custom application. Under black box testing, you can test these applications by just focusing on the inputs and outputs without knowing their internal code implementation.
Black box testing - Steps
Here are the generic steps followed to carry out any type of Black Box
Testing.
• Initially requirements and specifications of the system are
examined.
• Tester chooses valid inputs (positive test scenario) to check
whether SUT processes them correctly. Also some invalid inputs
(negative test scenario) are chosen to verify that the SUT is able to
detect them.
• Tester determines expected outputs for all those inputs.
• Software tester constructs test cases with the selected inputs.
• The test cases are executed.
• Software tester compares the actual outputs with the expected
outputs.
• Defects if any are fixed and re-tested.
Types of Black Box Testing
There are many types of Black Box Testing but following are the
prominent ones -
• Functional testing – This black box testing type is related to
functional requirements of a system; it is done by software testers.
• Non-functional testing – This type of black box testing is not related to testing of a specific functionality, but to non-functional requirements such as performance, scalability and usability.
• Regression testing – Regression testing is done after code fixes, upgrades or any other system maintenance to check that the new code has not affected the existing code.
WHITE BOX TESTING
White Box Testing is the testing of a software solution's internal
coding and infrastructure. It focuses primarily on strengthening security,
the flow of inputs and outputs through the application, and improving
design and usability. White box testing is also known as clear, open,
structural, and glass box testing.
It is one of two parts of the "box testing" approach of software
testing. Its counter-part, blackbox testing, involves testing from an
external or end-user type perspective. On the other hand, Whitebox
testing is based on the inner workings of an application and revolves
around internal testing. The term "whitebox" was used because of the
see-through box concept. The clear box or whitebox name symbolizes
the ability to see through the software's outer shell (or "box") into its
inner workings. Likewise, the "black box" in "black box testing"
symbolizes not being able to see the inner workings of the software so
that only the end-user experience can be tested.
What do you verify in White Box Testing?
White box testing involves the testing of the software code for the
following:
• Internal security holes
• Broken or poorly structured paths in the coding processes
• The flow of specific inputs through the code
• Expected output
• The functionality of conditional loops
• Testing of each statement, object and function on an individual
basis
How do you perform White Box Testing?
To give you a simplified explanation of white box testing, we have divided it into two basic steps. This is what testers do when testing an application using the white box testing technique:
Step 1) UNDERSTAND THE SOURCE CODE
The first thing a tester will often do is learn and understand the
source code of the application. Since white box testing involves the
testing of the inner workings of an application, the tester must be very
knowledgeable in the programming languages used in the applications
they are testing. Also, the testing person must be highly aware of secure
coding practices. Security is often one of the primary objectives of
testing software. The tester should be able to find security issues and
prevent attacks from hackers and naive users who might inject malicious
code into the application either knowingly or unknowingly.
 Step 2) CREATE TEST CASES AND EXECUTE
The second basic step to white box testing involves testing the
application’s source code for proper flow and structure. One way is by
writing more code to test the application’s source code. The tester will
develop little tests for each process or series of processes in the
application. This method requires that the tester must have intimate
knowledge of the code and is often done by the developer. Other
methods include manual testing, trial and error testing and the use of
testing tools as we will explain further on in this article.
9.CONCLUSION

In this paper, we present a secure and fault-tolerant key agreement scheme for group data sharing in cloud storage. Based on the SBIBD and the group signature technique, the proposed approach can generate a common conference key efficiently, which can be used to protect the security of the outsourced data and support secure group data sharing in the cloud at the same time. Note that algorithms to construct the SBIBD and mathematical descriptions of the SBIBD are presented in this paper.
Moreover, authentication services and efficient access control are
achieved with respect to the group signature technique. In addition, our
scheme can support the traceability of user identity in an anonymous
environment.

In terms of dynamic changes of the group member, taking advantage of


the key agreement and efficient access control, the computational
complexity and communication complexity for updating the common
conference key and the encrypted data are relatively low.
10. BIBLIOGRAPHY AND REFERENCES

[1] J. Yu, K. Ren, C. Wang, and V. Varadharajan, “Enabling cloud


storage auditing with key-exposure resistance,” IEEE Trans. Inf.
Forensics Security, vol. 10, no. 6, pp. 1167–1179, Jun. 2015.

[2] X. Chen, J. Li, X. Huang, J. Ma, and W. Lou, “New publicly


verifiable databases with efficient updates,” IEEE Trans. Depend. Sec.
Comput., vol. 12, no. 5, pp. 546–556, Sep. 2015.

[3] X. Chen, J. Li, J. Ma, Q. Tang, and W. Lou, “New algorithms for
secure outsourcing of modular exponentiations,” IEEE Trans. Parallel
Distrib. Syst., vol. 25, no. 9, pp. 2386–2396, Sep. 2014.

[4] J. Li, Y. Zhang, X. Chen, and Y. Xiang, “Secure attribute-based data


sharing for resource-limited users in cloud computing,” Comput. Secur.,
vol. 72, pp. 1–12, Jan. 2018, doi: 10.1016/j.cose.2017.08.007.

[5] J. Shen, T. Zhou, D. He, Y. Zhang, X. Sun, and Y. Xiang, “Block


design-based key agreement for group data sharing in cloud computing,”
IEEE Trans. Depend. Sec. Comput., to be published, doi:
10.1109/TDSC.2017.2725953.
[6] H. Wang, Q. Wu, B. Qin, and J. Domingo-Ferrer, “FRR: Fair remote
retrieval of outsourced private medical records in electronic health
networks,” J. Biomed. Inform., vol. 50, pp. 226–233, Aug. 2014.

[7] J. Shen, S. Chang, J. Shen, Q. Liu, and X. Sun, “A lightweight multi-


layer authentication protocol for wireless body area networks,” Future
Generat. Comput. Syst., vol. 78, pp. 956–963, Jan. 2016, doi:
10.1016/j.future.2016.11.033.

[8] Q. Liu, G. Wang, and J. Wu, “Time-based proxy re-encryption


scheme for secure data sharing in a cloud environment,” Inf. Sci., vol.
258, pp. 355–370, Feb. 2014.

[9] X. Chen, J. Li, J. Weng, J. Ma, and W. Lou, “Verifiable computation


over large database with incremental updates,” IEEE Trans. Comput.,
vol. 65, no. 10, pp. 3184–3195, Oct. 2016.

[10] J. Shen, J. Shen, X. Chen, X. Huang, and W. Susilo, “An efficient


public auditing protocol with novel dynamic structure for cloud data,”
IEEE Trans. Inf. Forensics Security, vol. 12, no. 10, pp. 2402–2415,
Oct. 2017.

[11] S. Yu, C. Wang, K. Ren, and W. Lou, “Achieving secure, scalable,


and fine-grained data access control in cloud computing,” in Proc. Conf.
Inf. Commun., 2010, pp. 1–9.
[12] X. Yi, “Identity-based fault-tolerant conference key agreement,”
IEEE Trans. Depend. Sec. Comput., vol. 1, no. 3, pp. 170–178, Jul.
2004.

[13] G. Ateniese, K. Fu, M. Green, and S. Hohenberger, “Improved


proxy re-encryption schemes with applications to secure distributed
storage,” ACM Trans. Inf. Syst. Secur., vol. 9, no. 1, pp. 1–30, 2006.

[14] M. Blaze, G. Bleumer, and M. Strauss, “Divertible protocols and


atomic proxy cryptography,” Eurocrypt, vol. 1403, pp. 127–144, May
1998.

[15] P. Li, J. Li, Z. Huang, C.-Z. Gao, W.-B. Chen, and K. Chen,
“Privacy-preserving outsourced classification in cloud computing,” in
Cluster Computing, 2017, doi: 10.1007/s10586-017-0849-9.

[16] H. Wang, B. Qin, Q. Wu, and L. Xu, “TPP: Traceable privacy-


preserving communication and precise reward for vehicle-to-grid
networks in smart grids,” IEEE Trans. Inf. Forensics Security, vol. 10,
no. 11, pp. 2340–2351, Nov. 2015.
