Dual Access Control For Cloud Based Data Storage and Sharing
Cloud-based data storage services have drawn increasing interest from both academia
and industry in recent years due to their efficient and low-cost management. Since such
a service is provided over an open network, it is urgent for service providers to employ
secure data storage and sharing mechanisms that ensure data confidentiality and service
user privacy. To protect sensitive data from being compromised, the most widely used
method is encryption. However, simply encrypting data (e.g., via AES) cannot fully
address the practical needs of data management. Besides, effective access control
over download requests also needs to be considered, so that Economic Denial of
Sustainability (EDoS) attacks cannot be launched to hinder users from enjoying the
service. In this paper, we consider dual access control, in the context of cloud-
based storage, in the sense that we design a control mechanism over both data access
and download requests without loss of security and efficiency. Two dual access control
systems are designed in this paper, each for a distinct setting. The security and
experimental analysis of the systems are also presented.
TABLE OF CONTENTS
1 INTRODUCTION
2 LITERATURE SURVEY
3 SYSTEM ANALYSIS
3.3 FEASIBILITY STUDY
3.4 MODULES
4.1 HARDWARE REQUIREMENTS
4.2 SOFTWARE REQUIREMENTS
5 SYSTEM DESIGN
5.1 INTRODUCTION
5.3.3 SEQUENCE DIAGRAMS
6 IMPLEMENTATION
6.1 TECHNOLOGY USED
7 SOURCE CODE
8 TESTING
8.1 INTRODUCTION
9 SCREENS
10 CONCLUSION
11 FUTURE ENHANCEMENT
12 BIBLIOGRAPHY
1. INTRODUCTION
1.1. INTRODUCTION
In recent decades, cloud-based storage services have attracted considerable
attention from both academia and industry. They are widely used in many Internet-based
commercial applications (e.g., Apple iCloud) due to a long list of benefits, including
access flexibility and freedom from local data management. An increasing number of
individuals and companies nowadays prefer to outsource their data to a remote cloud in
such a way that they may reduce the cost of upgrading their local data management
facilities/devices. However, the worry of security breaches over outsourced data may be
one of the main obstacles hindering Internet users from widely adopting cloud-based
storage services. In many practical applications, outsourced data may need to be further
shared with others. For example, a Dropbox user Alice may share photos with her friends.
Without using data encryption, prior to sharing the photos, Alice needs to generate a
sharing link and further share the link with friends. Although this guarantees some level
of access control over unauthorized users (e.g., those who are not Alice’s friends), the
sharing link may be visible at the Dropbox administration level (e.g., an administrator
could reach the link). Since the cloud (which is deployed in an open network) cannot be
fully trusted, it is generally recommended to encrypt the data prior to uploading it to the
cloud to ensure data security and privacy. One of the corresponding solutions is to
directly employ an encryption technique (e.g., AES) on the outsourced data.
To prevent shared photos from being accessed by the “insiders” of the system, a
straightforward way is to designate the group of authorized data users prior to encrypting
the data. In some cases, nonetheless, Alice may have no idea who the photo
receivers/users are going to be. It is possible that Alice only has knowledge of attributes
w.r.t. the photo receivers. In this case, traditional public key encryption (e.g., Paillier
encryption), which requires the encryptor to know in advance who the data receiver is,
cannot be leveraged. What is needed is a one-to-many mechanism, such as attribute-based
encryption, so that Alice can define an access policy over the encrypted photos and
guarantee that only a group of authorized users is able to access the photos.
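The attribute-matching idea above can be illustrated with a toy sketch. The class and attribute names below are hypothetical, and real CP-ABE enforces the policy cryptographically during decryption rather than by a server-side membership check; this sketch only shows the access-control semantics.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy model of attribute-based access control: a policy is satisfied
// when the user's attribute set contains every required attribute.
// In real CP-ABE this check is enforced by the cryptography itself.
public class PolicySketch {
    public static boolean satisfies(Set<String> userAttributes, Set<String> policy) {
        return userAttributes.containsAll(policy);
    }

    public static void main(String[] args) {
        Set<String> policy = new HashSet<>(Arrays.asList("friend-of-alice", "photography-club"));
        Set<String> bob = new HashSet<>(Arrays.asList("friend-of-alice", "photography-club", "student"));
        Set<String> eve = new HashSet<>(Arrays.asList("student"));
        System.out.println(satisfies(bob, policy)); // Bob's attributes satisfy the policy
        System.out.println(satisfies(eve, policy)); // Eve's do not
    }
}
```

In a real scheme the policy would be an expressive access structure (AND/OR gates over attributes), not a flat set, but the matching intuition is the same.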
In a cloud-based storage service, there exists a common attack well known
as the resource-exhaustion attack. Since a (public) cloud may not have any control over
download requests (namely, a service user may send an unlimited number of download
requests to the cloud server), a malicious service user may launch denial-of-service
(DoS)/distributed denial-of-service (DDoS) attacks to consume the resources of the cloud
storage server, so that the cloud service is unable to respond to honest users’
service requests. As a result, in the “pay-as-you-go” model, economic aspects could be
disrupted due to higher resource usage. The costs for cloud service users will rise
dramatically as the attacks scale up. This has become known as the Economic Denial of
Sustainability (EDoS) attack, which targets the cloud adopter’s economic resources.
Apart from economic loss, unlimited download itself could open a window for network
attackers to observe the encrypted downloaded data, which may lead to some potential
information leakage (e.g., file size). Therefore, effective control over download
requests for outsourced (encrypted) data is also needed.
In this project, we propose a new mechanism, dubbed dual access control, to
tackle the two problems mentioned above. To secure data in a cloud-based storage
service, attribute-based encryption (ABE) is one of the promising candidates that enables
both confidentiality of outsourced data and fine-grained control over the outsourced
data. In particular, Ciphertext-Policy ABE (CP-ABE) provides an effective way of data
encryption such that access policies, defining the access privileges of potential data
receivers, can be specified over encrypted data. Note that we consider the use of CP-ABE
in our mechanism in this paper. Nevertheless, simply employing the CP-ABE technique is
not sufficient to design an elegant mechanism guaranteeing control of both data
access and download requests.
A strawman solution to the control of download requests is to leverage dummy
ciphertexts to verify a data receiver’s decryption rights. Concretely, it requires the data
owner, say Alice, to upload multiple “testing” ciphertexts along with the “real”
encryption of the data to the cloud, where the “testing” ciphertexts are encryptions of
dummy messages under the same access policy as that of the “real” data. After receiving
a download request from a user, say Bob, the cloud asks Bob to randomly decrypt one of
the “testing” ciphertexts. If a correct result/decryption is returned, Bob is regarded as an
authorized user and the cloud responds to his download request.
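The strawman flow can be sketched as follows. This is a plain Java mock with hypothetical class and method names; "encryption" is faked with a reversible string tag purely to show the message flow between owner, cloud, and requester, whereas the real scheme would use actual CP-ABE ciphertexts.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Mock of the dummy-ciphertext ("testing") challenge in the strawman
// solution: the owner uploads encryptions of dummy messages, the cloud
// challenges a requester with a random one, and the download is granted
// only if the returned decryption is correct.
public class StrawmanChallenge {
    private final List<String> testingCiphertexts = new ArrayList<>();
    private final List<String> dummyPlaintexts = new ArrayList<>();
    private final Random rng = new Random();

    // Data owner uploads an encryption of a dummy message (mocked here).
    public void uploadTesting(String dummyMessage) {
        dummyPlaintexts.add(dummyMessage);
        testingCiphertexts.add("enc(" + dummyMessage + ")");
    }

    // Cloud picks one testing ciphertext at random as a challenge.
    public int issueChallenge() {
        return rng.nextInt(testingCiphertexts.size());
    }

    public String challengeCiphertext(int index) {
        return testingCiphertexts.get(index);
    }

    // Cloud grants the download only if the decryption matches the dummy.
    public boolean verifyResponse(int index, String decrypted) {
        return dummyPlaintexts.get(index).equals(decrypted);
    }

    // Stands in for decryption by a user whose attributes satisfy the policy.
    static String authorizedDecrypt(String ciphertext) {
        return ciphertext.substring(4, ciphertext.length() - 1);
    }

    public static void main(String[] args) {
        StrawmanChallenge cloud = new StrawmanChallenge();
        cloud.uploadTesting("dummy-1");
        cloud.uploadTesting("dummy-2");
        int idx = cloud.issueChallenge();
        String answer = authorizedDecrypt(cloud.challengeCiphertext(idx));
        System.out.println(cloud.verifyResponse(idx, answer)); // authorized user passes
    }
}
```

The mock also makes the strawman's drawbacks visible: the owner must prepare the extra testing ciphertexts, and every download request costs the user one additional (in reality, pairing-heavy) decryption.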
2. LITERATURE SURVEY
1) John Bethencourt, Amit Sahai, and Brent Waters. Ciphertext-policy attribute-
based encryption. In S&P 2007, pages 321–334. IEEE, 2007.
In several distributed systems a user should only be able to access data if the user
possesses a certain set of credentials or attributes. Currently, the only method for enforcing
such policies is to employ a trusted server to store the data and mediate access control.
However, if any server storing the data is compromised, then the confidentiality of the
data will be compromised. In this paper we present a system for realizing complex access
control on encrypted data that we call ciphertext-policy attribute-based encryption. By
using our techniques encrypted data can be kept confidential even if the storage server is
untrusted; moreover, our methods are secure against collusion attacks. Previous attribute-
based encryption systems used attributes to describe the encrypted data and built policies
into user's keys; while in our system attributes are used to describe a user's credentials,
and a party encrypting data determines a policy for who can decrypt. Thus, our methods
are conceptually closer to traditional access control methods such as role-based access
control (RBAC). In addition, we provide an implementation of our system and give
performance measurements.
2) Jinguang Han, Willy Susilo, Yi Mu, Jianying Zhou, and Man Ho Allen Au.
Improving privacy and security in decentralized ciphertext-policy attribute-based
encryption. IEEE transactions on information forensics and security, 10(3):665–678,
2015.
In previous privacy-preserving multiauthority attribute-based encryption (PPMA-
ABE) schemes, a user can acquire secret keys from multiple authorities with them
knowing his/her attributes and furthermore, a central authority is required. Notably, a
user's identity information can be extracted from his/her some sensitive attributes. Hence,
existing PPMA-ABE schemes cannot fully protect users' privacy as multiple authorities
can collaborate to identify a user by collecting and analyzing his attributes. Moreover,
ciphertext-policy ABE (CP-ABE) is a more efficient public-key encryption, where the
encryptor can select flexible access structures to encrypt messages. Therefore, a
challenging and important work is to construct a PPMA-ABE scheme where there is no
necessity of having the central authority and, furthermore, both the identifiers and the
attributes are protected.
Attribute-based encryption (ABE) can solve the security issues in the cloud
setting. However, the computation cost and ciphertext
size in most ABE schemes grow with the complexity of the access policy. Outsourced
ABE (OABE) with fine-grained access control system can largely reduce the computation
cost for users who want to access encrypted data stored in cloud by outsourcing the heavy
computation to the cloud service provider (CSP). However, the amount of encrypted files
stored in the cloud is becoming very large, which hinders efficient query processing. To
deal with the above problem, we present a new cryptographic primitive called attribute-based
encryption scheme with outsourcing key-issuing and outsourcing decryption, which can
implement keyword search function (KSF-OABE). The proposed KSF-OABE scheme is
proved secure against chosen-plaintext attack (CPA). CSP performs partial decryption
task delegated by data user without knowing anything about the plaintext. Moreover, the
CSP can perform encrypted keyword search without knowing anything about the
keywords embedded in trapdoor.
5) Jiguo Li, Yao Wang, Yichen Zhang, and Jinguang Han. Full verifiability for
outsourced decryption in attribute based encryption. IEEE Transactions on
Services Computing, DOI: 10.1109/TSC.2017.2710190, 2017.
Attribute based encryption (ABE) is a popular cryptographic technology to
protect the security of users' data. However, the decryption cost and ciphertext size
restrict the application of ABE in practice. For most existing ABE schemes, the
decryption cost and ciphertext size grow linearly with the complexity of access structure.
This is undesirable to the devices with limited computing capability and storage space.
Outsourced decryption is considered as a feasible method to reduce the user's decryption
overhead, which enables a user to outsource a large number of decryption operations to
the cloud service provider (CSP). However, outsourced decryption cannot guarantee the
correctness of transformation done by the cloud, so it is necessary to check the
correctness of outsourced decryption to ensure security for users' data. Current research
mainly focuses on the verifiability of outsourced decryption for authorized users. It
remains a challenging issue how to guarantee the correctness of outsourced
decryption for unauthorized users. In this paper, we propose an ABE scheme with
verifiable outsourced decryption (called full verifiability for outsourced decryption),
which guarantees the correctness of the transformation for both authorized and
unauthorized users.
3. SYSTEM ANALYSIS
EXISTING SYSTEM
Although it is able to support fine-grained data access, CP-ABE, acting as a single
solution, is far from practical and effective in holding against the EDoS attack, which is
the case of DDoS in the cloud setting. Several countermeasures to the attack have been
proposed in the literature. But Xue et al. [38] stated that the previous works could not
fully defend against the EDoS attack at the algorithmic (or protocol) level, and they
further proposed a solution to secure cloud data sharing from the attack.
However, [38] suffers from two disadvantages. First, the data owner is required to
generate a set of challenge ciphertexts in order to resist the attack, which increases its
computational burden. Second, a data user is required to decrypt one of the challenge
ciphertexts as a test, which costs plenty of expensive operations (e.g., pairings). Here the
computational complexity of both parties is inevitably increased and, meanwhile, high
network bandwidth is required for the delivery of the ciphertexts. The considerable
computational power of the cloud is not fully exploited in that work. In this paper, we
present a new solution that requires less computation and communication cost to
withstand the EDoS attack.
Disadvantages
1. The data owner must generate a set of challenge ciphertexts, which increases its
computational burden.
2. A data user must decrypt a challenge ciphertext as a test, which involves expensive
operations (e.g., pairings) and requires high network bandwidth for ciphertext delivery.
PROPOSED SYSTEM
In this paper, we propose a new mechanism, dubbed dual access control, to tackle
the aforementioned two problems. To secure data in a cloud-based storage service,
attribute-based encryption (ABE) is one of the promising candidates that enables the
confidentiality of outsourced data as well as fine-grained control over the outsourced
data.
Advantages
(1) Confidentiality of outsourced data. In our proposed systems, the outsourced data is
encrypted prior to being uploaded to cloud. No one can access them without valid access
rights.
(2) Anonymity of data sharing. Given an outsourced data, cloud server cannot identify
data owner, so that the anonymity of owner can be guaranteed in data storage and
sharing.
(3) Fine-grained access control over outsourced (encrypted) data. Data owner keeps
controlling his encrypted data via access policy after uploading the data to cloud. In
particular, a data owner can encrypt his outsourced data under a specified access policy
such that only a group of authorized data users, matching the access policy, can access
the data.
(4) Control over anonymous download request and EDoS attacks resistance. A cloud
server is able to control the download request issued by any system user, where the
download request can be set to be anonymous. With this control over download requests, we
state that our systems are resistant to EDoS attacks.
(5) High efficiency. Our proposed systems are built on top of the CP-ABE system.
Compared with [36], they do not incur significant additional computation and
communication overhead. This makes the systems feasible for real-world applications.
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase and a business proposal is put
forth with a very general plan for the project and some cost estimates. During system
analysis the feasibility study of the proposed system is to be carried out. This is to ensure
that the proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential. Three key
considerations involved in the feasibility analysis are
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have
on the organization. The amount of funds that the company can pour into the research and
development of the system is limited. The expenditures must be justified. Thus the
developed system is well within the budget, and this was achieved because most of the
technologies used are freely available. Only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on the
available technical resources, as this would lead to high demands being placed on the
client. The developed system has modest requirements, as only minimal or no changes
are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user.
This includes the process of training the user to use the system efficiently. The user must
not feel threatened by the system, but must instead accept it as a necessity. The level of
acceptance by the users solely depends on the methods that are employed to educate the
user about the system and to make him familiar with it. His level of confidence must be
raised so that he is also able to make some constructive criticism, which is welcomed, as
he is the final user of the system.
MODULES
End User
In this module, a receiver logs in using his/her user name and password. After
login, the receiver searches for files, requests the secret key of a particular file
from the authority, and obtains the secret key. After getting the secret key, he
downloads the file by entering the file name and secret key at the cloud server.
Authority
In this module, the authority checks the file transactions and verifies whether a
receiver exists, along with the receiver's profile. The authority also views the
requests from receivers, generates the secret keys, and sends them to the
requesting data receivers.
Cloud Server
The cloud service provider manages a cloud to provide data storage service. Data
owners encrypt their data files and store them in the cloud for sharing with
Remote User. To access the shared data files, data consumers download encrypted
data files of their interest from the cloud and then decrypt them.
Data Encryption and Decryption
All the legal receivers in the system can freely query any encrypted data of
interest. Upon receiving the data from the server, the receiver runs the
decryption algorithm Decrypt to decrypt the ciphertext using its secret keys.
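Since the report names AES as the underlying symmetric cipher, a minimal encrypt/decrypt round trip with the standard javax.crypto API is sketched below. Key handling is deliberately simplified (the real module would combine this with CP-ABE-managed keys), and ECB mode is used only to keep the sketch short.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Minimal AES round trip: generate a key, encrypt a file's bytes, then
// decrypt them back. Production code should prefer an authenticated
// mode such as AES/GCM over ECB.
public class AesSketch {
    public static SecretKey newKey() {
        try {
            return KeyGenerator.getInstance("AES").generateKey();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static byte[] crypt(int mode, SecretKey key, byte[] data) {
        try {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(mode, key);
            return cipher.doFinal(data);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        SecretKey key = newKey();
        byte[] plain = "shared photo bytes".getBytes(StandardCharsets.UTF_8);
        byte[] ct = crypt(Cipher.ENCRYPT_MODE, key, plain);
        byte[] back = crypt(Cipher.DECRYPT_MODE, key, ct);
        System.out.println(Arrays.equals(plain, back)); // round trip recovers the plaintext
    }
}
```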
Attacker Module
In the Data Receiver module, if a receiver enters the wrong secret key for a
particular file while downloading, the cloud server treats him as an attacker and
moves him to the attacker list.
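The behaviour described above can be mocked as follows. Class and method names are hypothetical, and a real server would verify the key cryptographically rather than by string comparison; the sketch only shows the attacker-list bookkeeping.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Mock of the server-side download check: a wrong secret key for a file
// moves the requesting user onto the attacker list, after which further
// download requests from that user are refused.
public class DownloadGate {
    private final Map<String, String> fileKeys = new HashMap<>(); // file -> expected key
    private final Set<String> attackers = new HashSet<>();

    public void store(String fileName, String secretKey) {
        fileKeys.put(fileName, secretKey);
    }

    public boolean download(String user, String fileName, String secretKey) {
        if (attackers.contains(user)) {
            return false; // already flagged: request refused
        }
        if (!secretKey.equals(fileKeys.get(fileName))) {
            attackers.add(user); // wrong key: treat as attacker
            return false;
        }
        return true;
    }

    public boolean isAttacker(String user) {
        return attackers.contains(user);
    }

    public static void main(String[] args) {
        DownloadGate gate = new DownloadGate();
        gate.store("photo.jpg", "key-123");
        System.out.println(gate.download("bob", "photo.jpg", "key-123")); // granted
        System.out.println(gate.download("eve", "photo.jpg", "guess"));   // refused, eve flagged
        System.out.println(gate.isAttacker("eve"));                       // eve is on the list
    }
}
```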
SOFTWARE REQUIREMENTS
Operating System : Windows XP/7.
Coding Language : Java/J2EE
Web Server : Tomcat 7.x
Database : MySQL 5.5
5. SYSTEM DESIGN
INTRODUCTION
A data flow diagram (DFD) is a graphical tool used to describe and analyze the
movement of data through a system, manual or automated, including the processes,
stores of data, and delays in the
system. Data Flow Diagrams are the central tool and the basis from which other
components are developed. The transformation of data from input to output, through
processes, may be described logically and independently of the physical components
associated with the system. The DFD is also known as a data flow graph or a bubble
chart.
DFDs are the model of the proposed system. They clearly should show the
requirements on which the new system should be built. Later during design activity this is
taken as the basis for drawing the system’s structure charts. The basic notation used to
create DFDs is as follows:
1. Dataflow: Data move in a specific direction from an origin to a destination.
Mapping Constraints
An E-R diagram may define certain constraints to which the contents of a database must
conform.
Mapping Cardinalities
It expresses the number of entities to which another entity can be associated via a
relationship. For binary relationship sets between entity sets A and B, the mapping
cardinality must be one of the following:
One-to-One – An entity in A is associated with at most one entity in B, and an entity in
B is associated with at most one entity in A.
One-to-many – An entity in A is associated with any number of entities in B. An entity in
B is associated with at most one entity in A.
Many-to-many – An entity in A is associated with any number of entities in B, and an
entity in B is associated with any number of entities in A.
Cardinality: Indicates the type of relationship that a business rule follows.
Connectivity: Specifies on which side of a relationship the entities are connected, i.e.,
the one side or the many side.
DATA DICTIONARY
The data dictionary records the logical characteristics of the current system's data stores,
including name, description, aliases, contents, and organization. It identifies the
processes where the data are used and where immediate access to information is
required, and serves as the basis for identifying database requirements during system
design.
Uses of Data Dictionary
To manage the details in large systems.
To communicate a common meaning for all system elements.
To Document the features of the system.
Structural Things
Structural things are the nouns of the UML models. These are mostly static parts
of the model, representing elements that are either conceptual or physical. In all, there are
seven kinds of Structural things.
Use Case
A use case is a description of a set of sequences of actions that a system performs
that yields an observable result of value to a particular actor. Graphically, a
use case is rendered as an ellipse with solid lines, usually including only its name, as
shown below.
Collaboration
Collaboration defines an interaction and is a society of roles and other elements
that work together to provide some cooperative behaviour that’s bigger than the sum of
all the elements. Graphically, collaboration is rendered as an ellipse with dashed lines,
usually including only its name as shown below.
Component
Component is a physical and replaceable part of a system that conforms to and
provides the realization of a set of interfaces. Graphically, a component is rendered as a
rectangle with tabs, usually including only its name, as shown below.
Behavioral Things
Behavioral things are the dynamic parts of UML models. These are the verbs of a
model, representing behavior over time and space.
Interaction
An interaction is a behavior that comprises a set of messages exchanged among a
set of objects within a particular context to accomplish a specific purpose.
Fig-5.3.7 : Sample Interaction Diagram
State Machine
A state machine is a behavior that specifies the sequences of states an object or an
interaction goes through during its lifetime in response to events, together with its
responses to those events. Graphically, a state is rendered as a rounded rectangle,
usually including its name and its sub-states, if any, as shown below.
Fig-5.3.13 : Generalization
Fourth, a realization is a semantic relationship between classifiers, wherein one classifier
specifies a contract that another classifier guarantees to carry out. You’ll encounter
realization relationships in two places: between interfaces and the classes or components
that realize them, and between use cases and the collaborations that realize them.
Fig-5.3.14 : Realization
Each UML diagram is designed to let developers and customers view a software system
from a different perspective and in varying degrees of abstraction. Use Case Diagram
displays the relationship among actors and use cases.
Class Diagram models class structure and contents using design elements such as classes,
packages and objects. It also displays relationships such as containment, inheritance,
associations and others.
Interaction Diagrams
Sequence Diagram displays the time sequence of the objects participating in the
interaction. This consists of the vertical dimension (time) and horizontal dimension
(different objects).
Collaboration Diagram displays an interaction organized around the objects and
their links to one another. Numbers are used to show the sequence of messages.
State Diagram displays the sequences of states that an object of an interaction
goes through during its life in response to received stimuli, together with its
responses and actions.
Activity Diagram displays a special state diagram where most of the states are action
states and most of the transitions are triggered by completion of the actions in the source
states. This diagram focuses on flows driven by internal processing.
Physical Diagrams
Component Diagram displays the high level packaged structure of the code itself.
Dependencies among components are shown, including source code components,
binary code components, and executable components. Some components exist at
compile time, some at link time, and some at run time, as well as at more than one of
these times.
Deployment Diagram displays the configuration of run-time processing elements
and the software components, processes, and objects that live on them. Software
component instances represent run-time manifestations of code.
Fig-5.3.2.1:Class Diagram
TECHNOLOGY USED
Java technology is both a programming language and a platform.
The Java Programming Language
The Java programming language is a high-level language that can be
characterized by all of the following buzzwords:
Simple
Architecture neutral
Object oriented
Portable
Distributed
High performance
Interpreted
Multithreaded
Robust
Dynamic
Secure
Native code is code that, after compilation, runs on a specific hardware platform. As a
platform-independent environment, the Java platform can be a bit slower than native
code. However, smart compilers, well-tuned interpreters, and just-in-time bytecode
compilers can bring performance close to that of native code without threatening
portability.
What Can Java Technology Do?
The most common types of programs written in the Java programming language
are applets and applications. If you’ve surfed the Web, you’re probably already familiar
with applets. An applet is a program that adheres to certain conventions that allow it to
run within a Java-enabled browser.
However, the Java programming language is not just for writing cute, entertaining
applets for the Web. The general-purpose, high-level Java programming language is also
a powerful software platform. Using the generous API, you can write many types of
programs.
The designers of JDBC, working with early reviewer feedback, have finalized the JDBC
class library into a solid framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some insight as to
why certain classes and functionalities behave the way they do. The eight design goals
for JDBC are as follows:
IP datagrams
The IP layer provides a connectionless and unreliable delivery system. It
considers each datagram independently of the others. Any association between datagrams
must be supplied by the higher layers. The IP layer supplies a checksum that includes its
own header. The header includes the source and destination addresses. The IP layer
handles routing through an Internet. It is also responsible for breaking up large datagrams
into smaller ones for transmission and reassembling them at the other end.
UDP
UDP is also connectionless and unreliable. What it adds to IP is a checksum for
the contents of the datagram and port numbers. These are used to give a client/server
model - see later.
TCP
TCP supplies logic to give a reliable connection-oriented protocol above IP. It
provides a virtual circuit that two processes can use to communicate
Internet addresses
In order to use a service, you must be able to find it. The Internet uses an address
scheme for machines so that they can be located. The address is a 32 bit integer which
gives the IP address. This encodes a network ID and more addressing.
Network address
Class A uses 8 bits for the network address with 24 bits left over for other
addressing. Class B uses 16 bit network addressing. Class C uses 24 bit network
addressing and class D uses all 32.
Subnet address
Internally, the UNIX network is divided into sub networks. Building 11 is
currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.
Host address
8 bits are finally used for host addresses within our subnet. This places a limit of 256
machines that can be on the subnet.
Total Address
The 32 bit address is usually written as 4 integers separated by dots.
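For instance, the dotted form and the class-A network number described above can be computed directly from the 32-bit integer. This is a small illustrative helper, not code from the project itself.

```java
// Convert a 32-bit IPv4 address into the usual dotted-decimal form by
// extracting each byte, and read off a class-A network number from the
// top 8 bits.
public class Ipv4Sketch {
    public static String toDotted(long address) {
        return ((address >> 24) & 0xFF) + "." + ((address >> 16) & 0xFF) + "."
                + ((address >> 8) & 0xFF) + "." + (address & 0xFF);
    }

    public static long classANetwork(long address) {
        return (address >> 24) & 0xFF; // 8 network bits for class A
    }

    public static void main(String[] args) {
        long addr = 0x7F000001L; // the loopback address
        System.out.println(toDotted(addr));      // prints 127.0.0.1
        System.out.println(classANetwork(addr)); // prints 127
    }
}
```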
Port addresses
A service exists on a host, and is identified by its port. This is a 16 bit number. To
send a message to a server, you send it to the port for that service of the host that it is
running on. This is not location transparency! Certain of these ports are "well known".
Sockets
A socket is a data structure maintained by the system to handle network
connections. A socket is created using the call socket. It returns an integer that is like a
file descriptor. In fact, under Windows, this handle can be used with the ReadFile and
WriteFile functions.
#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);
Here "family" will be AF_INET for IP communications, protocol will be zero, and type
will depend on whether TCP or UDP is used. Two processes wishing to communicate
over a network create a socket each. These are similar to two ends of a pipe - but the
actual pipe does not yet exist.
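Since this report's implementation language is Java, the same two-endpoint idea is expressed with the java.net Socket classes rather than the C call shown above. The sketch below is illustrative only: one thread plays the client end of the "pipe", the main thread the server end, over the loopback interface.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal TCP round trip on the loopback interface: a server socket
// accepts one connection and returns the single line the client sends.
public class SocketSketch {
    public static String receiveOnce(String message) {
        try (ServerSocket server = new ServerSocket(0)) { // port 0: pick any free port
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(message); // client end of the "pipe"
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            client.start();
            try (Socket conn = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line = in.readLine(); // server end of the "pipe"
                client.join();
                return line;
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(receiveOnce("hello")); // prints hello
    }
}
```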
JFree Chart
JFreeChart is a free 100% Java chart library that makes it easy for developers to
display professional quality charts in their applications. JFreeChart's extensive feature set
includes:
A consistent and well-documented API, supporting a wide range of chart types;
A flexible design that is easy to extend, and targets both server-side and client-side
applications;
Support for many output types, including Swing components, image files
(including PNG and JPEG), and vector graphics file formats (including PDF, EPS and
SVG);
JFreeChart is "open source" or, more specifically, free software. It is distributed under the
terms of the GNU Lesser General Public License (LGPL), which permits use in
proprietary applications.
1. Map
Charts showing values that relate to geographical areas. Some examples include:
(a) population density in each state of the United States, (b) income per capita for each
country in Europe, (c) life expectancy in each country of the world. The tasks in this
project include:
Sourcing freely redistributable vector outlines for the countries of the world,
states/provinces in particular countries (USA in particular, but also other areas).
Creating an appropriate dataset interface (plus default implementation), a renderer,
and integrating this with the existing XYPlot class in JFreeChart;
Testing, documenting, testing some more, documenting some more.
2. Time Series Chart Interactivity
Implement a new (to JFreeChart) feature for interactive time series charts: display a
separate control that shows a small version of all the time series data, with a sliding
"view" rectangle that allows you to select the subset of the time series data to display in
the main chart.
3. Dashboards
There is currently a lot of interest in dashboard displays. Create a flexible dashboard
mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers,
bars, and lines/time series) that can be delivered easily via both Java Web Start and an
applet.
4. Property Editors
The property editor mechanism in JFreeChart only handles a small subset of the
properties that can be set for charts. Extend (or reimplement) this mechanism to provide
greater end-user control over the appearance of the charts.
J2ME (Java 2 Micro Edition)
Sun Microsystems defines J2ME as "a highly optimized Java run-time environment
targeting a wide range of consumer products, including pagers, cellular phones, screen-
phones, digital set-top boxes and car navigation systems." Announced in June 1999 at the
JavaOne Developer Conference, J2ME brings the cross-platform functionality of the Java
language to smaller devices, allowing mobile wireless devices to share applications. With
J2ME, Sun has adapted the Java platform for consumer products that incorporate or are
based on small computing devices.
DATABASE
SQL Level API
The designers felt that their main goal was to define a SQL interface for Java.
Although not the lowest database interface level possible, it is at a low enough level for
higher-level tools and APIs to be created. Conversely, it is at a high enough level for
application programmers to use it confidently. Attaining this goal allows for future tool
vendors to “generate” JDBC code and to hide many of JDBC’s complexities from the end
user.
1. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an effort
to support a wide variety of vendors, JDBC will allow any query statement to be passed
through it to the underlying database driver. This allows the connectivity module to
handle non-standard functionality in a manner that is suitable for its users.
2. JDBC must be implementable on top of common database interfaces
The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal
allows JDBC to use existing ODBC level drivers by the use of a software interface. This
interface would translate JDBC calls to ODBC and vice versa.
3. Provide a Java interface that is consistent with the rest of the Java system
Because of Java’s acceptance in the user community thus far, the designers feel
that they should not stray from the current design of the core Java system.
4. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no
exception. Sun felt that the design of JDBC should be very simple, allowing for only one
method of completing a task per mechanism. Allowing duplicate functionality only
serves to confuse the users of the API.
5. Use strong, static typing wherever possible
Strong typing allows for more error checking to be done at compile time; also,
fewer errors appear at runtime.
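The compile-time benefit of strong typing can be illustrated with Java generics. This is a minimal sketch (the class and variable names are illustrative, not part of JDBC itself): a typed list rejects a wrong element at compile time, while a raw list defers the same mistake to a runtime exception.

```java
import java.util.ArrayList;
import java.util.List;

public class TypingDemo {
    public static void main(String[] args) {
        // Strongly typed: the compiler checks element types for us.
        List<String> names = new ArrayList<>();
        names.add("alice");
        // names.add(42);  // rejected at compile time by the type checker

        // Raw type: no compile-time checking, so the error surfaces only at runtime.
        List raw = new ArrayList();
        raw.add(42);
        try {
            String s = (String) raw.get(0);  // fails here, at runtime
            System.out.println("unexpected: " + s);
        } catch (ClassCastException e) {
            System.out.println("runtime type error: " + e.getMessage());
        }
    }
}
```

The first error is caught before the program ever runs; the second one only appears once the faulty code path executes, which is exactly the distinction this design goal is about.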
1. Design View
2. Datasheet View
Design View: To build or modify the structure of a table we work in the table design
view. We can specify what kind of data will be held.
Datasheet View: To add, edit or analyze the data itself we work in the table's datasheet
view mode.
Query: A query is a question asked of the data. Access gathers the data that
answers the question from one or more tables. The data that makes up the answer is either a
dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the
latest information in the dynaset. Access either displays the dynaset or snapshot for us to
view or performs an action on it, such as deleting or updating.
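The dynaset/snapshot distinction maps loosely onto JDBC's result-set concurrency modes. The sketch below is illustrative only: the `users` table name and the `runQuery` helper are hypothetical, and `main` does not open a real connection; it simply prints the two standard `ResultSet` mode constants.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryModes {
    // An updatable result set behaves like a dynaset (edits flow back to the
    // database); a read-only one behaves like a snapshot (it cannot be edited).
    static ResultSet runQuery(Connection con, boolean editable) throws SQLException {
        Statement st = con.createStatement(
                ResultSet.TYPE_SCROLL_SENSITIVE,
                editable ? ResultSet.CONCUR_UPDATABLE : ResultSet.CONCUR_READ_ONLY);
        return st.executeQuery("SELECT * FROM users");  // hypothetical table
    }

    public static void main(String[] args) {
        // No live database here; just show the two concurrency modes.
        System.out.println("dynaset mode  = CONCUR_UPDATABLE (" + ResultSet.CONCUR_UPDATABLE + ")");
        System.out.println("snapshot mode = CONCUR_READ_ONLY (" + ResultSet.CONCUR_READ_ONLY + ")");
    }
}
```

Note that updatable result sets are only available when the underlying JDBC driver supports them; otherwise the driver silently downgrades to read-only.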
7. SAMPLE CODE
Register.jsp
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en">
<head>
<title>Register</title>
<style type="text/css">
<!--
.style1 {
font-size: 37px;
color: #CCCCCC;
}
-->
</style>
</head>
<body id="page1">
<div class="tail-top">
<div class="tail-bottom">
<div class="body-bg">
<div id="header">
<div class="row-1">
<div class="fleft"><span class="style1">Registration</span></div>
<div class="fright"></div>
</div>
<div class="row-2">
<ul>
</ul>
</div>
</div>
<div id="content">
<div class="tail-right">
<div class="wrapper">
<div class="col-1">
<div class="indent">
<div class="indent1">
<!-- The original listing was incomplete; the form fields below are a
reconstruction and the field names are illustrative. -->
<form action="Register" method="post">
<fieldset>
<p><label for="name">Name</label>
<input type="text" name="name" id="name" /></p>
<p><label for="password">Password</label>
<input type="password" name="password" id="password" /></p>
<p><label for="email">E-mail</label>
<input type="text" name="email" id="email" /></p>
<p><label for="address">Address</label>
<textarea name="address" id="address"></textarea></p>
<p><label for="gender">Gender</label>
<select name="gender" id="gender">
<option>-Select-</option>
<option>Male</option>
<option>Female</option>
</select></p>
<p><input type="submit" value="Register" /></p>
</fieldset>
</form>
</div>
</div>
</div>
<div class="col-2">
<ul>
<li><a href="TA_Login.jsp">T-Authority</a></li>
<li><a href="C_Login.jsp">Cloud</a></li>
</ul>
</div>
</div>
</div>
</div>
<div id="footer">
<div class="indent">
<div class="fleft"></div>
<div class="fright"></div>
</div>
</div>
</div>
</div>
</div>
</body>
</html>
8. TESTING
Testing is a process which reveals errors in the program. It is the major quality
measure employed during software development. During testing, the program is
executed with a set of test cases, and the output of the program for the test cases is
evaluated to determine whether the program is performing as expected.
Testing Methodologies
In order to make sure that the system does not have errors, different levels of
testing strategies are applied at differing phases of software development.
Unit Testing
Unit testing is done on individual modules as they are completed and become
executable. It is confined only to the designer's requirements. Each module can be tested
using the following two strategies:
Black Box Testing
In this strategy, test cases are generated as input conditions that fully exercise all
functional requirements of the program. This testing has been used to find errors in the
following categories:
Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Performance errors
Initialization and termination errors
In this testing only the output is checked for correctness; the logical flow of the data
is not checked.
White Box Testing
In this strategy, test cases are generated from the logic of each module by drawing flow
graphs of that module, and logical decisions are tested in all cases. It has been used to
generate test cases that:
Guarantee that all independent paths have been executed.
Execute all logical decisions on their true and false sides.
Execute all loops at their boundaries and within their operational bounds.
Exercise internal data structures to ensure their validity.
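A minimal sketch of this style of unit test is shown below. The `canDownload` module and its rule are hypothetical examples (not code from this project); the point is that the test cases exercise both the true and false sides of each logical decision, including the boundary value.

```java
public class UnitTestDemo {
    // Module under test: a hypothetical download-permission check.
    static boolean canDownload(boolean authenticated, int quota) {
        return authenticated && quota > 0;
    }

    public static void main(String[] args) {
        // White-box style: cover both sides of every logical decision.
        check(canDownload(true, 5),   "authenticated user with quota");
        check(!canDownload(true, 0),  "quota boundary (quota == 0)");
        check(!canDownload(false, 5), "unauthenticated user");
        System.out.println("all unit checks passed");
    }

    static void check(boolean cond, String name) {
        if (!cond) throw new AssertionError("failed: " + name);
    }
}
```

If any decision path is wrong, the corresponding check names the failing case, which makes it easy to trace the error back to the module's logic.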
Integration Testing
Integration testing ensures that software and subsystems work together as a whole. It
tests the interface of all the modules to make sure that the modules behave properly when
integrated together.
System Testing
Here the entire software system is tested. The reference document for this process
is the requirements document, and the goal is to see if software meets its requirements.
Here the entire ‘Dual Access Control’ application has been tested against the requirements
of the project, and it is checked whether all requirements of the project have been satisfied or not.
Acceptance Testing
Acceptance Test is performed with realistic data of the client to demonstrate that
the software is working satisfactorily. Testing here is focused on external behavior of the
system; the internal logic of the program is not emphasized. For this ‘Dual Access
Control’ project, some realistic data was collected and used to test whether the project works
correctly. Test cases should be selected so that the largest number of attributes of
an equivalence class is exercised at once. The testing phase is an important part of
software development. It is the process of finding errors and missing operations, and also
a complete verification to determine whether the objectives are met and the user
requirements are satisfied.
Test Approach
Testing can be done in two ways:
Bottom up approach
Top down approach
Bottom Up Approach
Testing can be performed starting from the smallest and lowest-level modules and
proceeding one at a time. For each module in bottom-up testing, a short program executes
the module and provides the needed data, so that the module is asked to perform the way
it will when embedded within the larger system.
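Such a short driver program can be sketched as follows. The `validateFileName` module is a hypothetical low-level routine (its name and rule are illustrative); the driver simply feeds it the kinds of data it would receive inside the larger system.

```java
public class UploadDriver {
    // Lowest-level module under test: a hypothetical file-name validator.
    static boolean validateFileName(String name) {
        return name != null && !name.isEmpty() && name.contains(".");
    }

    // Bottom-up driver: exercises the module in isolation with representative
    // inputs, including empty, malformed, and null cases.
    public static void main(String[] args) {
        String[] inputs = { "report.pdf", "", "noextension", null };
        for (String in : inputs) {
            System.out.println(in + " -> " + validateFileName(in));
        }
    }
}
```

Once the module passes in isolation, it can be integrated with the next-higher module and the driver is discarded or replaced by that module's own driver.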
The system has been tested and implemented successfully and thus ensured that all
the requirements as listed in the software requirements specification are completely
fulfilled. In case of erroneous input corresponding error messages are displayed.
TEST CASES
S.No. | Test Case      | Input                      | Expected Result             | Actual Result                 | Status
4     | Cloud Login    | Give username and password | Cloud page should be opened | Cloud page has been opened    | Pass
6     | Upload Details | Enter all the details      | Details should be uploaded  | Details uploaded successfully | Pass
9. SCREENS
Fig-8.1: Home Page
Fig-8.15: File Uploading
Fig-8.17: Request MSK
Fig-8.19: MSK Status
Fig-8.23: Request Trapdoor
Fig-8.24: Download File
10. CONCLUSION
We addressed an interesting and long-standing problem in cloud-based data
sharing, and presented two dual access control systems. The proposed systems are
resistant to DDoS/EDoS attacks. We note that the technique used to achieve control
over download requests is “transplantable” to other CP-ABE constructions. Our
experimental results show that the proposed systems do not impose any significant
computational or communication overhead (compared to the underlying CP-ABE
building block). In our enhanced system, we rely on the fact that the secret information
loaded into the enclave cannot be extracted. However, recent work shows that an enclave
may leak some of its secrets to a malicious host through memory access
patterns or other related side-channel attacks. The model of transparent enclave
execution has hence been introduced in the literature. Constructing a dual access control
system for cloud data sharing from a transparent enclave is an interesting open problem.
In our future work, we will consider a corresponding solution to this problem.
11. FUTURE ENHANCEMENT
It is not possible to develop a system that meets all the requirements of the user, and user
requirements keep changing as the system is being used. Some of the future enhancements that
can be made to this system are:
The encryption key can be sent to the user's e-mail.
OTP/biometric login authentication can be implemented to
strengthen security.
The project can be enhanced with additional cryptographic techniques.
12. BIBLIOGRAPHY
[1] Joseph A. Akinyele, Christina Garman, Ian Miers, Matthew W. Pagano, Michael
Rushanan, Matthew Green, and Aviel D. Rubin. Charm: a framework for rapidly
prototyping cryptosystems. Journal of Cryptographic Engineering, 3(2):111–128, 2013.
[2] Ittai Anati, Shay Gueron, Simon Johnson, and Vincent Scarlata. Innovative technology
for CPU based attestation and sealing. In Workshop on Hardware and Architectural
Support for Security and Privacy (HASP), volume 13, page 7. ACM New York, NY, USA, 2013.
[3] Alexandros Bakas and Antonis Michalas. Modern family: A revocable hybrid
encryption scheme based on attribute-based encryption, symmetric searchable encryption
and SGX. In SecureComm 2019, pages 472–486, 2019.
[4] Amos Beimel. Secure schemes for secret sharing and key distribution. PhD
thesis, Israel Institute of Technology, Technion, Haifa, Israel, 1996.
[5] John Bethencourt, Amit Sahai, and Brent Waters. Ciphertext-policy attribute-
based encryption. In S&P 2007, pages 321–334. IEEE, 2007.
[6] Victor Costan and Srinivas Devadas. Intel SGX explained. IACR Cryptology
ePrint Archive, 2016(086):1–118, 2016.
[7] Ben Fisch, Dhinakaran Vinayagamurthy, Dan Boneh, and Sergey Gorbunov. IRON:
functional encryption using Intel SGX. In Proceedings of the 2017 ACM SIGSAC
Conference on Computer and Communications Security, CCS 2017, pages 765–782, 2017.
[8] Eiichiro Fujisaki and Tatsuaki Okamoto. Secure integration of asymmetric and
symmetric encryption schemes. In Advances in Cryptology – CRYPTO 1999, pages 537–.
[9] Vipul Goyal, Omkant Pandey, Amit Sahai, and Brent Waters. Attribute-based
encryption for fine-grained access control of encrypted data. In ACM CCS 2006, pages
89–98. ACM, 2006.