
1. INTRODUCTION

1.1 PROJECT PROFILE


Cloud storage has emerged as a promising solution for providing ubiquitous, convenient, and on-demand access to large amounts of data shared over the
Internet. Today, millions of users are sharing personal data, such as photos and
videos, with their friends through social network applications based on cloud storage
on a daily basis. Business users are also being attracted by cloud storage due to its
numerous benefits, including lower cost, greater agility, and better resource
utilization.
However, while enjoying the convenience of sharing data via cloud storage,
users are also increasingly concerned about inadvertent data leaks in the cloud. Such
data leaks, caused by a malicious adversary or a misbehaving cloud operator, can
usually lead to serious breaches of personal privacy or business secrets (e.g., the recent high-profile incident of celebrity photos being leaked from iCloud). To address
users’ concerns over potential data leaks in cloud storage, a common approach is for
the data owner to encrypt all the data before uploading them to the cloud, such that
later the encrypted data may be retrieved and decrypted by those who have the
decryption keys. Such cloud storage is often called cryptographic cloud storage.
However, the encryption of data makes it challenging for users to search and then
selectively retrieve only the data containing given keywords. A common solution is
to employ a searchable encryption (SE) scheme in which the data owner is required
to encrypt potential keywords and upload them to the cloud together with encrypted
data, such that, for retrieving data matching a keyword, the user will send the
corresponding keyword trapdoor to the cloud for performing search over the
encrypted data.
Although combining a searchable encryption scheme with cryptographic
cloud storage can achieve the basic security requirements of cloud storage,
implementing such a system for large scale applications involving millions of users
and billions of files may still be hindered by practical issues involving the efficient
management of encryption keys, which, to the best of our knowledge, are largely
ignored in the literature. First of all, the need for selectively sharing encrypted data
with different users (e.g., sharing a photo with certain friends in a social network
application, or sharing a business document with certain colleagues on a cloud drive)
usually demands different encryption keys to be used for different files. However,
this implies the number of keys that need to be distributed to users, both for them to
search over the encrypted files and to decrypt the files, will be proportional to the
number of such files. Such a large number of keys must not only be distributed to
users via secure channels, but also be securely stored and managed
by the users in their devices. In addition, a large number of trapdoors must be
generated by users and submitted to the cloud in order to perform a keyword search
over many files. The implied need for secure communication, storage, and
computational complexity may render such a system inefficient and impractical.
In this project, we address this challenge by proposing the novel concept of key-aggregate searchable encryption (KASE), and instantiating the concept through a
concrete KASE scheme. The KASE framework is composed of seven algorithms.
Specifically, to set up the scheme, the cloud server would generate public parameters
of the system through the Setup algorithm, and these public parameters can be
reused by different data owners to share their files. Each data owner then produces a public/master-secret key pair through the Keygen algorithm.
Keywords of each document can be encrypted via the Encrypt algorithm with the
unique searchable encryption key. Then, the data owner can use the master-secret
key to generate an aggregate searchable encryption key for a group of selected
documents via the Extract algorithm. The aggregate key can be distributed securely
(e.g., via secure e-mails or secure devices) to authorized users who need to
access those documents. After that, an authorized user can produce a keyword
trapdoor via the Trapdoor algorithm using this aggregate key, and submit the
trapdoor to the cloud. After receiving the trapdoor, to perform the keyword search
over the specified set of documents, the cloud server will run the Adjust algorithm
to generate the right trapdoor for each document, and then run the Test algorithm to
test whether the document contains the keyword.
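To make the interplay of these seven algorithms concrete, the following is a minimal C# sketch of how their interfaces could be expressed; every type name here (Params, OwnerKeyPair, AggregateKey, and so on) is an illustrative assumption, not the project's actual code.

using System.Collections.Generic;

// Hypothetical C# view of the seven KASE algorithms described above.
// The placeholder types would wrap bilinear-group elements in a real scheme.
public sealed class Params { }
public sealed class PublicKey { }
public sealed class MasterSecretKey { }
public sealed class OwnerKeyPair
{
    public PublicKey Pk { get; set; }
    public MasterSecretKey Msk { get; set; }
}
public sealed class Ciphertext { }
public sealed class AggregateKey { }
public sealed class Trapdoor { }

public interface IKase
{
    Params Setup(int securityLevel);               // run once by the cloud server
    OwnerKeyPair Keygen(Params p);                 // run by each data owner: (pk, msk)
    Ciphertext Encrypt(Params p, PublicKey pk, int docIndex, string keyword);
    AggregateKey Extract(Params p, MasterSecretKey msk, ISet<int> docIndices);
    Trapdoor GenTrapdoor(Params p, AggregateKey kAgg, string keyword); // the Trapdoor algorithm, run by an authorized user
    Trapdoor Adjust(Params p, ISet<int> docIndices, int docIndex, Trapdoor tr); // run by the cloud
    bool Test(Params p, Trapdoor adjusted, int docIndex); // does document i contain the keyword?
}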
Enterprises can choose to deploy applications on Public, Private or Hybrid
clouds. Cloud Integrators can play a vital part in determining the right cloud path for
each organization.

(a) Public Cloud

Public clouds are owned and operated by third parties; they deliver superior
economies of scale to customers, as the infrastructure costs are spread among a mix
of users, giving each individual client an attractive low-cost, “Pay-as-you-go”
model. All customers share the same infrastructure pool with limited configuration,
security protections, and availability variances. These are managed and supported
by the cloud provider. One advantage of a public cloud is that it may be larger than an enterprise's cloud, thus providing the ability to scale seamlessly, on demand.

(b) Private Cloud

Private clouds are built exclusively for a single enterprise. They aim to address
concerns on data security and offer greater control, which is typically lacking in a
public cloud. There are two variations to a private cloud:
 On-premise Private Cloud: On-premise private clouds, also known as internal clouds, are hosted within one's own data centre. This model provides a
more standardized process and protection, but is limited in aspects of size and
scalability. IT departments would also need to incur the capital and
operational costs for the physical resources. This is best suited for
applications which require complete control and configurability of the
infrastructure and security.
 Externally hosted Private Cloud: This type of private cloud is hosted
externally with a cloud provider, where the provider facilitates an exclusive
cloud environment with full guarantee of privacy. This is best suited for
enterprises that don't prefer a public cloud due to sharing of physical
resources.

(c) Hybrid Cloud

Hybrid clouds combine both public and private cloud models. With a hybrid cloud, service providers can utilize third-party cloud providers in a full or partial manner, thus increasing the flexibility of computing. The hybrid cloud environment
is capable of providing on-demand, externally provisioned scale. The ability to
augment a private cloud with the resources of a public cloud can be used to manage
any unexpected surges in workload.

Scope of the project

The scope of the project is key-aggregate searchable encryption (KASE), instantiated through a concrete KASE scheme, in which a data owner
only needs to distribute a single key to a user for sharing a large number of
documents, and the user only needs to submit a single trapdoor to the cloud for
querying the shared documents.
Consider a scenario where two employees of a company would like to share
some confidential business data using a public cloud storage service. For instance,
Alice wants to upload a large collection of financial documents to the cloud storage,
which are meant for the directors of different departments to review. Suppose those
documents contain highly sensitive information that should only be accessed by
authorised users, and Bob is one of the directors and is thus authorized to view
documents related to his department. Due to concerns about potential data leakage
in the cloud, Alice encrypts these documents with different keys, and generates
keyword ciphertexts based on department names, before uploading to the cloud
storage. Alice then uploads and shares those documents with the directors using the
sharing functionality of the cloud storage. In order for Bob to view the documents
related to his department, Alice must delegate to Bob the rights both for keyword
search over those documents, and for decryption of documents related to Bob’s
department.
In KASE, Alice only needs to distribute a single aggregate key, instead of m keys, for sharing m documents with Bob, and Bob only needs to submit a single aggregate trapdoor to the cloud server. The cloud server can use this aggregate trapdoor and
some public information to perform keyword search and return the result to Bob.
Therefore, in KASE, the delegation of keyword search right can be achieved by
sharing the single aggregate key. We note that the delegation of decryption rights can be achieved using the key-aggregate encryption approach recently proposed in the literature, but it remains an open problem to delegate the keyword search rights together with the decryption rights, which is the subject of this paper.
The three issues of cloud computing security are: confidentiality, integrity and
availability.
A. Availability

Availability is the assurance that data will be available to the user in a perpetual manner irrespective of the location of the user. It is ensured by fault tolerance, network security and authentication.

B. Integrity

Integrity is the assurance that the data sent is the same as the data received and has not been altered in between. Integrity is infringed if the transmitted message is not the same as the received one. It is ensured by firewalls and intrusion detection.

C. Confidentiality

Confidentiality is the avoidance of unauthorized exposure of user data. It is ensured by security protocols, authentication services and data encryption services. Since cloud computing is a utility available on the Internet, various issues such as user privacy, data theft, data leakage and unauthenticated access arise. Cryptography is the
science of securely transmitting and retrieving information using an insecure
channel. It involves two processes: encryption and decryption. Encryption is a process in which the sender converts data into an unintelligible string, or cipher text, for transmission, so that an eavesdropper cannot learn the sent data.
Decryption is just the reverse of encryption. The receiver transforms sender’s cipher
text into a meaningful text known as plaintext.
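Purely to illustrate this encrypt/decrypt round trip (ordinary symmetric encryption, not the KASE scheme the project builds), a minimal C# sketch using the .NET built-in Aes class:

using System;
using System.Security.Cryptography;
using System.Text;

class CipherDemo
{
    static void Main()
    {
        using (Aes aes = Aes.Create())   // a random key and IV are generated automatically
        {
            byte[] plaintext = Encoding.UTF8.GetBytes("confidential report");

            // Sender: plaintext -> cipher text (unintelligible without the key)
            byte[] ciphertext = aes.CreateEncryptor()
                                   .TransformFinalBlock(plaintext, 0, plaintext.Length);

            // Receiver: cipher text -> plaintext, using the same key and IV
            byte[] recovered = aes.CreateDecryptor()
                                  .TransformFinalBlock(ciphertext, 0, ciphertext.Length);

            Console.WriteLine(Encoding.UTF8.GetString(recovered)); // "confidential report"
        }
    }
}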
1.2 OBJECTIVES OF THE PROJECT

Security and privacy are very important issues in cloud computing. The aim is to build a practical data sharing system based on public cloud storage that avoids inadvertent data leaks in the cloud and preserves data privacy.
Our main contributions are as follows.
1) We first define a general framework of key-aggregate searchable encryption (KASE),
composed of seven polynomial algorithms for security parameter setup, key
generation, encryption, key extraction, trapdoor generation, trapdoor adjustment,
and trapdoor testing. We then describe both functional and security requirements for
designing a valid KASE scheme.
2) We then instantiate the KASE framework by designing a concrete KASE scheme.
After providing detailed constructions for the seven algorithms, we analyze the
efficiency of the scheme, and establish its security through detailed analysis.
3) We discuss various practical issues in building an actual group data sharing system
based on the proposed KASE scheme, and evaluate its performance. The evaluation
confirms our system can meet the performance requirements of practical
applications.
The proposed KASE scheme applies to any cloud storage that supports the
searchable group data sharing functionality, which means any user may selectively
share a group of selected files with a group of selected users, while allowing the
latter to perform keyword search over the former. To support searchable group data sharing, the main requirements for efficient key management are twofold. First, a
data owner only needs to distribute a single aggregate key (instead of a group of
keys) to a user for sharing any number of files. Second, the user only needs to submit
a single aggregate trapdoor (instead of a group of trapdoors) to the cloud for
performing keyword search over any number of shared files. To the best of our
knowledge, the KASE scheme proposed in this paper is the first known scheme that
can satisfy both requirements (the key-aggregate cryptosystem, which has inspired
our work, can satisfy the first requirement but not the second).
The goal of KASE is to delegate the keyword search right to any user by distributing the aggregate key to him/her in a group data sharing system, whereas the goal of multi-key searchable encryption (MKSE) is to ensure that the cloud server can perform keyword search with one trapdoor over different documents owned by a user.
2. SYSTEM STUDY

2.1 EXISTING SYSTEM


Data sharing is an important functionality in cloud storage. For example,
bloggers can let their friends view a subset of their private pictures; an enterprise
may grant her employees access to a portion of sensitive data. The challenging
problem is how to effectively share encrypted data. Of course, users can download the encrypted data from the storage, decrypt them, and then send them to others for sharing, but doing so loses the value of cloud storage. Users should be able to delegate the
access rights of the sharing data to others so that they can access these data from the
server directly.
The existing system shows how to securely, efficiently, and flexibly share data with others in cloud storage. It describes new public-key cryptosystems that produce constant-size cipher texts such that efficient delegation of decryption rights for any set of cipher texts is possible. The novelty is that one can aggregate
any set of secret keys and make them as compact as a single key, but encompassing
the power of all the keys being aggregated. In other words, the secret key holder can
release a constant-size aggregate key for flexible choices of cipher text set in cloud
storage, but the other encrypted files outside the set remain confidential. This
compact aggregate key can be conveniently sent to others or be stored in a smart
card with very limited secure storage. Formal security analysis of these schemes is provided in the standard model, and they admit other applications as well; in particular, they give the first public-key patient-controlled encryption for flexible hierarchy, which was previously unknown.
Disadvantages

 More complex mathematical transformations are needed to support keyword cipher text encryption.
 Keys are prone to leakage.
 A large number of keyword trapdoors is required.
 High complexity.

2.2 PROPOSED SYSTEM


In this project, we address this challenge by proposing the novel concept of key-aggregate searchable encryption (KASE), and instantiating the concept through a
concrete KASE scheme. The proposed KASE scheme applies to any cloud storage
that supports the searchable group data sharing functionality, which means any user
may selectively share a group of selected files with a group of selected users, while
allowing the latter to perform keyword search over the former. To support searchable group data sharing, the main requirements for efficient key management are twofold.
First, a data owner only needs to distribute a single aggregate key (instead of a group
of keys) to a user for sharing any number of files. Second, the user only needs to
submit a single aggregate trapdoor (instead of a group of trapdoors) to the cloud for
performing keyword search over any number of shared files. To the best of our
knowledge, the KASE scheme proposed in this paper is the first known scheme that
can satisfy both requirements (the key-aggregate cryptosystem, which has inspired
our work, can satisfy the first requirement but not the second).
Advantages of Proposed System

 Effective solution for building a practical data sharing system based on public cloud storage.
 Multithreading can help improve performance.
 Highly secure and practically efficient.
 Avoids inadvertent data leaks in the cloud and preserves data privacy.
 Achieves the goals of controlled searching and query privacy.
 Reduced verification time.

2.3 FEASIBILITY STUDY


A feasibility study is made to see whether the project, on completion, will serve the purpose of the organization for the amount of work, effort and time spent on it. A feasibility study lets the developer foresee the future of the project and its usefulness. A feasibility study of a system proposal is made according to its workability, its impact on the organization, its ability to meet user needs and its effective use of resources. Thus, when a new application is proposed, it normally goes through a feasibility study before it is approved for development.

This document provides the feasibility of the project being designed and lists the various areas that were considered carefully during the feasibility study of this project, such as technical, economic and operational feasibility. The following are its features:

Technical Feasibility

The system must be evaluated from the technical point of view first. The assessment of this feasibility must be based on an outline design of the system requirements in terms of input, output, programs and procedures. Having identified an outline system, the investigation must go on to suggest the type of equipment required, the method of developing the system, and the method of running the system once it has been designed.

The project should be developed such that the necessary functions and performance are achieved within the constraints. The project is developed with the latest technology. Though the technology may become obsolete after some period of time, the system may still be used, since newer versions of the same software support older versions. So there are minimal constraints involved with this project. Since the system has been developed using ASP.NET, the project is technically feasible for development.

Economic Feasibility

The system being developed must be justified by cost and benefit, with criteria to ensure that effort is concentrated on a project which will give the best return at the earliest. One of the factors which affect the development of a new system is the cost it would require.

The following are some of the important financial questions asked during
preliminary investigation:

 The cost to conduct a full system investigation.

 The cost of the hardware and software.

 The benefits in the form of reduced costs or fewer costly errors.

Since the system is developed as part of project work, there is no manual cost to spend for the proposed system. Also, since all the resources are already available, this indicates that the system is economically feasible for development.
Front-end and back-end selection

An important issue for the development of a project is the selection of a suitable front-end and back-end. When we decided to develop the project, we went through an extensive study to determine the most suitable platform that suits the needs of the organization as well as helps in the development of the project.

The aspects of our study included the following factors.

Front-end selection

1. It must have a graphical user interface that assists employees who are not from an IT background.

2. Scalability and extensibility.

3. Flexibility.

4. Robustness.

5. According to the organization's requirements and culture.

6. Must provide excellent reporting features with good printing support.

7. Platform independent.

8. Easy to debug and maintain.

9. Event driven programming facility.

10. The front end must support some popular back end like MS SQL Server.

According to the above stated features we selected C#.net as the front-end for
developing our project.
Back-end Selection

1. Multiple user support.

2. Efficient data handling.

3. Provide inherent features for security.

4. Efficient data retrieval and maintenance.

5. Stored procedures.

6. Popularity.

7. Operating System compatible.

8. Easy to install.

9. Various drivers must be available.

10. Easy to integrate with the front-end.

According to the above stated features we selected MS SQL Server as the back-end.

Technical feasibility is frequently the most difficult area encountered at this stage. It is essential that the process of analysis and definition be conducted in parallel with an assessment of technical feasibility. It centers on the existing computer system (hardware, software, etc.) and to what extent it can support the proposed system.

Operational Feasibility

Operational feasibility determines whether the human resources are available to operate the system once it has been installed. The resources required to implement or install the system are already available with the organization. The persons of the organization need no prior exposure to computers but have to be trained to use this particular software; a few of them will be trained, and the further training required is minimal. The management is also convinced that the project is operationally feasible.
3. SYSTEM DESCRIPTION

3.1 HARDWARE DESCRIPTION

Processor : Intel Pentium D

Mother Board : Intel 945G Express Chipset

Bus Speed : 2.80 GHz

RAM : 2 GB

Hard disk : 20 GB or more

Monitor : 17 inch CRT (IBM)

Keyboard : 104 Keys

Mouse : Lenovo PS/2 3 buttons

CD-ROM : LITE-ON CD-ROM

3.2 SOFTWARE DESCRIPTION

Operating System : Windows 7

Front End : Asp.Net (Using C# Script)

Back End : Ms-Access 2008


3.3 PROJECT DESCRIPTION
Data sharing is an important functionality in cloud storage. For example,
bloggers can let their friends view a subset of their private pictures; an enterprise
may grant her employees access to a portion of sensitive data. The challenging
problem is how to effectively share encrypted data. Of course, users can download the encrypted data from the storage, decrypt them, and then send them to others for sharing, but doing so loses the value of cloud storage. Users should be able to delegate the
access rights of the sharing data to others so that they can access these data from the
server directly. The modules are
Group Sharing Portal
 Create New Users
 User Login
 View and Edit Profile
 Find Friends
 Friend List
 Share Comments and Photos
Key Aggregate Searchable Cryptosystem
 Cloud Storage
 Key Aggregate Cryptosystem
 Setup Phase
 Key Generation Phase
 Encryption Phase
 Extraction Phase
 Trapdoor Phase
 Adjust Phase
 Test and Decryption Phase
A) Group Sharing Portal

The online group sharing portal implements a web portal used to upload photos to the web for sharing with friends. This module is used to track the friend list relative to a user's profile and others. It also provides an 'Invite a friend' feature to allow users to send and track invitations to join the site, and requires users who are interested in semi-private sites to first request an invitation. When an invited friend joins the site, that person is automatically added as a friend of the member that invited them. The module excels at two roles. Its first role is that of a web gallery on presentation web pages maintained by the web master or site owner. The second role concerns community-oriented sites, where users can upload their images, comment on them and share them to social networks. The module shows the shared comments and photos from already invited friends.
Create New Users
To start group data sharing, the user must have an account in the system. For that, the user registers their name with a mail id. This module also collects basic information such as gender, date of birth and contact number.
User Login
To enter the system, the user enters the email id and the password associated with it. Only then can the user access the system. The password is kept secret. To make the password strong, it may contain a combination of capital letters, numbers and punctuation.
View and Edit Profile
This module provides support for editing the details of a user. We can add our personal details, educational details and also work details. If we want to add a photo, we can upload it here.
Find Friends
This module is used to track the friend list relative to a user's profile and others. It also provides an 'Invite a friend' feature to allow users to send and track invitations to join the site. It requires users who are interested in semi-private sites to first request an invitation.

Friend List
This module creates a relationship between the inviting and invited users and allows members to be added to the friends list. When an invited friend joins the site, that person is automatically added as a friend of the member that invited them.
Share Comments and Photos
The module excels at two roles. Its first role is that of a web gallery on presentation web pages maintained by the web master or site owner. The second role concerns community-oriented sites, where users can upload their images, comment on them and share them to social networks. The module also shows the shared comments and photos from already invited friends.
B) Key Aggregate Searchable Cryptosystem
Cloud Storage
Cloud storage is the storage of data online in the cloud, accessible from multiple connected resources. As outsourcing of data increases, the cloud computing service takes over the management of data. The main concern is how to share the data securely; the answer is cryptography. Cryptography helps the data owner share the data in a safe way. A user can upload a file and store it securely in the cloud. As a privacy-preserving mechanism, all photos are encrypted before being uploaded to the cloud server.
Key Aggregate Searchable Encryption
A key-aggregate searchable encryption scheme consists of seven polynomial-time algorithms: Setup, Keygen, Encrypt, Extract, Trapdoor, Adjust and Test. The main idea of data sharing in cloud storage using KASE is that a data owner wants to share her data on the server with selected users. The cipher-text, public key, master-secret key and aggregate key in our scheme are all of constant size. The key owner holds a master secret, called the master-secret key, which can be used to extract secret keys for different classes.
Setup Phase
The data owner establishes the public system parameters, i.e., randomly picks a bilinear group G of prime order p with a random generator Q. This phase is executed by the data owner to set up an account on an untrusted server. It takes a security level parameter as input and outputs the system parameters as param. The parameters are needed each time and can be fetched on demand from large cloud storage.
Key Generation Phase
Key generation is an important part of secure data sharing in the cloud. Public Key Cryptography (PKC) uses separate keys for encryption and decryption. This phase is executed by the data owner to randomly generate a public/master-secret key pair (pk, msk). The sender encrypts the message data with the public key and the receiver decrypts with the master-secret key.
Data Encryption
To provide secure data storage, the data needs to be encrypted. In KAC, users encrypt a message not only under a public key, but also under an identifier of the cipher-text called its class. This phase is executed by anyone who wants to send encrypted data. Encrypt(pk, m, i): the encryption algorithm takes as input the public key pk, a message m, and an index i denoting the cipher-text class; it encrypts the message m and produces a cipher text C that only users satisfying the access structure are able to decrypt. Input = public key pk, an index i, and message m. Output = cipher text C.
Extraction
This algorithm is run by the data owner to generate an aggregate searchable
encryption key for delegating the keyword search right for a certain set of documents
to other users. It takes as input the owner’s master-secret key msk and a set S which
contains the indices of documents, then outputs the aggregate key kagg.
Trapdoor
This algorithm is run by an authorized user to generate a keyword trapdoor using the aggregate key; the trapdoor is then submitted to the cloud, which uses it to perform the keyword search over the specified set of documents. It takes as input the aggregate searchable encryption key and a keyword w, then outputs a single trapdoor.
Adjust
This algorithm is run by the cloud server to adjust the aggregate trapdoor to
generate the right trapdoor for each different document. It takes as input the system
public parameters params, the set S of documents’ indices, the index i of target
document and the aggregate trapdoor Tr, then outputs each trapdoor Tri for the i-th
target document in S.
Test & Decryption
This algorithm is run by the cloud server to perform keyword search over an
encrypted document. It takes as input the trapdoor Tri and the document index i, then
outputs true or false to denote whether the document doci contains the keyword w.
Then a user with an aggregate key can decrypt any cipher-text (e.g., the encrypted photos) provided that the cipher-text's class is contained in the set covered by the key. The aggregate key is more powerful than an ordinary decryption key in the sense that it allows decryption of multiple cipher texts.
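Assuming the illustrative IKase interface sketched in Section 1.1 (with an implementation instance kase; all names remain hypothetical), the end-to-end call sequence of these phases would look roughly like this:

using System.Collections.Generic;

static class KaseFlow
{
    // Illustrative walkthrough of the seven phases; not the project's actual code.
    public static void ShareAndSearch(IKase kase)
    {
        Params p = kase.Setup(securityLevel: 128);   // Setup: public system parameters
        OwnerKeyPair owner = kase.Keygen(p);         // Keygen: (pk, msk) for the owner

        // Encrypt: one keyword ciphertext per document (index 3, keyword "finance")
        Ciphertext ct = kase.Encrypt(p, owner.Pk, docIndex: 3, keyword: "finance");

        // Extract: a single aggregate key covering the whole shared set S
        ISet<int> s = new HashSet<int> { 1, 2, 3 };
        AggregateKey kAgg = kase.Extract(p, owner.Msk, s);  // sent over a secure channel

        // Trapdoor: the authorized user submits one trapdoor for the keyword
        Trapdoor tr = kase.GenTrapdoor(p, kAgg, "finance");

        // Adjust + Test: the cloud derives a per-document trapdoor and checks each doc
        foreach (int i in s)
        {
            Trapdoor tri = kase.Adjust(p, s, i, tr);
            bool match = kase.Test(p, tri, i);       // true if document i contains it
        }
    }
}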
4. SYSTEM ANALYSIS AND SYSTEM DESIGN

4.1 SYSTEM ANALYSIS


Systems analysis is a process of collecting factual data, understanding the processes involved, identifying problems and recommending feasible suggestions for improving the system's functioning. This involves studying the business processes, gathering operational data, understanding the information flow, finding out bottlenecks and evolving solutions for overcoming the weaknesses of the system so as to achieve the organizational goals. System analysis also includes the subdivision of complex processes involving the entire system, and the identification of data stores and manual processes.

Assuming that a new system is to be developed, the next phase is system analysis. Analysis involves a detailed study of the current system, leading to specifications of a new system. Analysis is a detailed study of the various operations performed by a system and their relationships within and outside the system. During analysis, data are collected on the available files, decision points and transactions handled by the present system. Interviews, on-site observation and questionnaires are the tools used for system analysis.

4.2 SYSTEM DESIGN


The KASE framework is composed of seven algorithms. Specifically, to set
up the scheme, the cloud server would generate public parameters of the system
through the Setup algorithm, and these public parameters can be reused by different
data owners to share their files. Each data owner then produces a public/master-secret key pair through the Keygen algorithm. Keywords of each
document can be encrypted via the Encrypt algorithm with the unique searchable
encryption key. Then, the data owner can use the master-secret key to generate an
aggregate searchable encryption key for a group of selected documents via the
Extract algorithm. The aggregate key can be distributed securely (e.g., via secure e-
mails or secure devices) to authorized users who need to access those documents.
After that, an authorized user can produce a keyword trapdoor via the Trapdoor
algorithm using this aggregate key, and submit the trapdoor to the cloud. After
receiving the trapdoor, to perform the keyword search over the specified set of
documents, the cloud server will run the Adjust algorithm to generate the right
trapdoor for each document, and then run the Test algorithm to test whether the
document contains the keyword.
The KASE framework provides general guidance for designing a KASE
scheme. However, a valid KASE scheme must also satisfy several functional and
security requirements, as stated in the following.
A KASE scheme should satisfy three functional requirements as follows.
Compactness: This requirement demands that a KASE scheme ensure the size of the aggregate key is independent of the number of files to be shared. Formally, for a set of keys {ki : i ∈ S}, it requires that the aggregate key kagg = Extract(msk, S) be of constant size. How to aggregate the set of keys into a single key without invalidating later steps is a key challenge in designing KASE schemes.
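Restated in symbols (a transcription of the compactness requirement above, writing |S| for the number of shared files):

\[ k_{agg} = \mathrm{Extract}(msk, S), \qquad \lvert k_{agg} \rvert = O(1) \text{, independent of } \lvert S \rvert . \]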
4.2.1 SYSTEM ARCHITECTURE

Fig: System Architecture. The data owner runs Key Generation (producing the public and master-secret keys) and Data Encryption over the image documents, then uploads them to the cloud server. From the master-secret key and keyword, Key Extract produces an aggregate key (covering k1, k2, k3, ..., kn) which is sent to users via secure e-mails; the user forms an aggregate trapdoor for search and performs data decryption to view the documents.

Search-ability: This requirement is central to all KASE schemes since it enables users to generate desired trapdoors for any given keyword for searching encrypted documents. In other words, reducing the number of keys should preserve the search capability.
Delegation: The main goal of KASE is to delegate the keyword search right to a user through an aggregate key. To ensure any user with the delegated key can perform keyword search, this requirement demands that the inputs of the adjustment algorithm must be public, i.e., these inputs should not rely on any user's private information. This is the second key challenge in designing KASE schemes. In addition, any KASE scheme should also satisfy two security requirements as follows.
Controlled searching: Meaning that the attackers cannot search for an arbitrary
word without the data owner’s authorization. That is, the attacker cannot perform
keyword search over the documents which are not relevant to the known aggregate
key, and he/she cannot generate new aggregate searchable encryption keys for other sets of documents from the known keys.
Query privacy: Meaning that the attackers cannot determine the keyword used in a
query, apart from the information that can be acquired via observation and the
information derived from it. That is, the user may ask an untrusted cloud server to
search for a sensitive word without revealing the word to the server.
Input Design
 A data input specification is a detailed description of the individual fields (data
elements) on an input document together with their characteristics (i.e., type
and length).
 Be specific and precise, not general, ambiguous, or vague (BAD: Syntax error,
Invalid entry, General Failure)
 Be positive; Avoid condemnation. Possibly even to the point of avoiding
pejorative terms such as "invalid" "illegal" or "bad."
 Be user-centric and attempt to convey to the user that he or she is in control
by replacing imperatives such as "Enter date" with wording such as "Ready
for date."
Output Design
Output may be designed to aid future change by stressing formless reports,
defining field size for future growth, making field constants into variables, and
leaving room on review reports for added ratios and statistics. Output can now be
more easily tailored to the needs of individual users because inquiry-based systems
allow users themselves to generate ad hoc reports. An output intermediary can
restrict access to key information and avoid illegal access. An information
clearinghouse (or information centre) is a service centre that provides consultation,
assistance, and documentation to encourage end-user development and use of
applications.

Encrypt(PK, i, m) → CT
 Input = Public Key PK, File Index i and Message m
 Output = Ciphertext CT
Extract(msk, S) → Kagg
 Input = Master Key msk, S a subset of indices of documents
 Output = Aggregate key Kagg
Trapdoor(Kagg, w) → Tr
 Input = Aggregate key Kagg, keyword w
 Output = One trapdoor Tr
Test(Tri, i) → True/False, m
 Input = Adjusted trapdoor Tri, document index i
 Output = True/False, message m
4.2.2 DATA FLOW DIAGRAM

A Data Flow Diagram shows the flow of data or information. It can be partitioned into single processes or functions. The DFD is an excellent communication tool for analysts to model processes and functional requirements. Used effectively, it is a useful and easy to understand modeling tool. It has broad application and usability across most software development projects. It is easily integrated with data modeling and workflow modeling tools, and with textual specs. Together with these, it provides analysts and developers with solid models and specs; alone, however, it has limited usability. It is simple and easy for users to understand, and can be easily extended and refined with further specification into a physical version for the design and development teams. The symbols used to prepare a DFD do not imply a physical implementation; a DFD can be considered an abstraction of the logic of an information-oriented or process-oriented system flow-chart.

DFD LEVEL 0

The data owner sends encrypted data to the cloud server, which stores it in network storage; the user sends a request to the cloud server, which retrieves the data and replies.
DFD Level 1

The data owner initializes the system parameters (bilinear map P, random generator Q), runs key generation to obtain the public key (pk) and master key (msk), generates the cipher text (CT) from the message (m), and uploads the CT to the server.

Fig : Data Owner


DFD Level 2

The multi-key SE process retrieves the master key and the indices of the documents, runs the key extraction to generate the aggregate key, and sends it to the data user via secure mail.

Fig : Multi-Key SE
DFD Level 3

The data user requests access using the indices of the documents, retrieves the aggregate key via secure mail, generates the keyword trapdoor, and finally tests and decrypts the CT.

Fig : Data User


4.2.3 DATABASE DESIGN

A database is an organized mechanism that has the capability of storing information, through which a user can retrieve stored information in an effective and efficient manner. The data is the purpose of any database and must be protected.
The database design is a two-level process. In the first step, user requirements are gathered together and a database is designed which will meet these requirements as closely as possible. This step is called Information Level Design and it is carried out independently of any individual DBMS.
In the second step, this Information level design is transferred into a design
for the specific DBMS that will be used to implement the system in question. This
step is called Physical Level Design, concerned with the characteristics of the
specific DBMS that will be used. A database design runs parallel with the system
design. The organization of the data in the database is aimed to achieve the following
two major objectives.
 Data Integrity.
 Data independence.
Normalization is the process of decomposing the attributes in an application,
which results in a set of tables with very simple structure. The purpose of
normalization is to make tables as simple as possible. Normalization is carried out
in this system for the following reasons.
 To structure the data so that there is no repetition of data; this helps in saving storage.
 To permit simple retrieval of data in response to query and report request.
 To simplify the maintenance of the data through updates, insertions, deletions.
 To reduce the need to restructure or reorganize data when new application requirements arise.
Relational Database Management System (RDBMS)
A relational model represents the database as a collection of relations. Each
relation resembles a table of values or file of records. In formal relational model
terminology, a row is called a tuple, a column header is called an attribute and the
table is called a relation. A relational database consists of a collection of tables, each
of which is assigned a unique name. A row in a table represents a set of related values.
Relations, Domains & Attributes
A table is a relation. The rows in a table are called tuples. A tuple is an ordered
set of n elements. Columns are referred to as attributes. Relationships have been set
between every table in the database. This ensures both Referential and Entity
Relationship Integrity. A domain D is a set of atomic values. A common method of
specifying a domain is to specify a data type from which the data values forming the
domain are drawn. It is also useful to specify a name for the domain to help in
interpreting its values. Every value in a relation is atomic, that is not decomposable.
Relationships
Table relationships are established using keys. The two main keys of prime importance are the Primary Key and the Foreign Key. Entity Integrity and Referential Integrity relationships can be established with these keys. Entity Integrity enforces that no Primary Key can have null values. Referential Integrity enforces that, for each distinct Foreign Key value, there must exist a matching Primary Key value in the same domain. Other keys are Super Keys and Candidate Keys. Relationships have been set between every table in the database. This ensures both Referential and Entity Relationship Integrity.
Normalization
As the name implies, normalization denotes putting things into a normal form. Via normalization, the application developer tries to achieve a sensible organization of data into proper tables and columns, where names can be easily correlated to the data by the user. Normalization eliminates repeating groups of data and thereby avoids data redundancy, which would prove to be a great burden on the computer resources. This includes:
 Normalize the data.
 Choose proper names for the tables and columns.
 Choose the proper name for the data.
First Normal Form
The First Normal Form states that the domain of an attribute must include
only atomic values and that the value of any attribute in a tuple must be a single
value from the domain of that attribute. In other words 1NF disallows “relations
within relations” or “relations as attribute values within tuples”. The only attribute
values permitted by 1NF are single atomic or indivisible values.
The first step is to put the data into First Normal Form. This can be done by moving data into separate tables, where the data in each table is of a similar type. Each table is given a Primary Key or Foreign Key as per the requirements of the project. In this step we form new relations for each non-atomic attribute or nested relation. This eliminates repeating groups of data.
A relation is said to be in first normal form if and only if it satisfies the constraint that it contains atomic values only.
Second Normal Form
According to Second Normal Form, for relations where primary key contains
multiple attributes, no non key attribute should be functionally dependent on a part
of the primary key.
In this we decompose and setup a new relation for each partial key with its
dependent attributes. Make sure to keep a relation with the original primary key and
any attributes that are fully functionally dependent on it. This step helps in taking
out data that is only dependent on a part of the key.
A relation is said to be in second normal form if and only if it satisfies all the first normal form conditions and every non-primary-key attribute of the relation is fully functionally dependent on its primary key alone.
Third Normal Form
According to Third Normal Form, Relation should not have a non-key
attribute functionally determined by another non key attribute or by a set of non-key
attributes. That is, there should be no transitive dependency on the primary key.
In this step we decompose and set up a relation that includes the non-key attributes that functionally determine other non-key attributes. This step is taken to get rid of anything that does not depend entirely on the Primary Key.

Login
This table is used to save the security information, from which it can be retrieved for future reference and updated whenever required.

Field Name Data Type Description


uname Text Username
pass Text Password
Register
This table is used to save the registration information, from which it can be retrieved for future reference and updated whenever required.

Field Name Data Type Description


name1 Text Name
gender Text Gender
country Text Country
email_id Text Email Id
uname Text Username
pass1 Text Password
date1 Text Date

Profile

This table is used to save the profile information, from which it can be retrieved for future reference and updated whenever required.

Field Name Data Type Description


uname Text Username
name1 Text Name
qual Text Qualification
desg Text Designation
address1 Text Address
phno1 Text Phone Number
image1 Text Image
status1 Text Status
status2 Text Status

Request
This table is used to save the request information, from which it can be retrieved for future reference and updated whenever required.

Field Name Data Type Description


uname Text Username
image1 Text Image
status Text Status
name1 Text Name
country1 Text Country
uname2 Text Username
image2 Text Image
status2 Text Status
name2 Text Name
country2 Text Country
AddPhoto
This table is used to save the added photo information, from which it can be retrieved for future reference and updated whenever required.

Field Name Data Type Description


uname Text Username
image1 Text Image
content1 Text Content
idno Text ID Number
key1 Text Key
key2 Text Key

Friends
This table is used to save the friends information, from which it can be retrieved for future reference and updated whenever required.

Field Name Data Type Description


uname Text Username
name1 Text Name
image1 Text Image
uname2 Text Username
name2 Text Name
image2 Text Image
5. SYSTEM TESTING

System testing is the process of executing software in a controlled manner. Software testing is often used in association with the terms verification and validation. Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification, which also uses techniques such as reviews, analysis, inspections, and walkthroughs. Validation is the process of checking that what has been specified is what the user actually wanted.

Software testing should not be confused with debugging. Debugging is the process of analyzing and localizing bugs when software does not behave as expected. Although the identification of some bugs will be obvious from playing
expected. Although the identification of some bugs will be obvious from playing
with the software, a methodical approach to software testing is a much more
thorough means for identifying bugs. Debugging is therefore an activity which
supports testing, but cannot replace testing. Other activities which are often
associated with software testing are static analysis and dynamic analysis. Static
analysis investigates the source code of software, looking for problems and gathering
metrics without actually executing the code. Dynamic analysis looks at the behavior of software while it is executing, to provide information such as execution traces, timing profiles, and test coverage information. Testing is a set of activities that can be planned in advance and conducted systematically. Testing begins at the module level and works towards the integration of the entire computer-based system. Nothing is complete without testing, as it is vital to the success of the system. Regarding testing objectives, there are several rules that can serve as testing objectives. They are:
Testing is a process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding an undiscovered error. A successful test is one that uncovers an undiscovered error.

If testing is conducted successfully according to the objectives stated above, it will uncover errors in the software; testing also demonstrates that the software functions appear to be working according to the specification, and that the performance requirements appear to have been met.

There are three ways to test program.

 For correctness

 For implementation efficiency

 For computational complexity

Tests for correctness are supposed to verify that a program does exactly what it was designed to do. This is much more difficult than it may at first appear, especially for large programs.

Unconventional Testing
Unconventional testing is done by the Quality Assurance team with reference to each and every document, starting from the initial stage of the SDLC. This process involves verification through walkthroughs and inspections to check whether the development happened as per the company's process guidelines or not.
Conventional Testing
Unit Testing

Unit testing focuses verification effort on the smallest unit of software design
– the software component or module. Using the component level design description
as a guide, important control paths are tested to uncover errors within the boundary
of the module. The relative complexity of tests, and the errors they uncover, is limited by the constrained scope established for unit testing. Unit testing is white-box oriented, and the steps can be conducted in parallel for multiple components. The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Independent paths are exercised to ensure that all statements in a module have been executed at least once, and boundary conditions are tested to ensure that the module operates properly at its boundaries. Finally, all error handling paths are tested.

Tests of data flow across a module interface are required before any other test
is initiated. If data do not enter and exit properly, all other tests are moot. Selective
testing of execution paths is an essential task during the unit test. Good design
dictates that error conditions be anticipated and error handling paths set up to reroute
or cleanly terminate processing when an error does occur. Boundary testing is the last task of the unit testing step. Software often fails at its boundaries.

Unit testing was done in the Sell-Soft System by treating each module as a separate entity and testing each one of them with a wide spectrum of test inputs. Some flaws in the internal logic of the modules were found and were rectified.

Integration Testing

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design. The entire program is tested as a whole. Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
After unit testing in the Sell-Soft System, all the modules were integrated to test for any inconsistencies in the interfaces. Moreover, differences in program structures were removed and a unique program structure was evolved.

Validation Testing

This is the final step in testing. In this the entire system was tested as a whole
with all forms, code, modules and class modules. This form of testing is popularly
known as Black Box testing or System testing.

The Black Box testing method focuses on the functional requirements of the software. That is, Black Box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program.

Black Box testing attempts to find errors in the following categories: incorrect or missing functions, interface errors, errors in data structures or external data access, performance errors, and initialization and termination errors.

Acceptance Testing
Acceptance testing generally involves running a suite of tests on the completed system. Each individual test, known as a case, exercises a particular operating condition of the user's environment or feature of the system, and will result in a pass or fail outcome. There is generally no degree of success or failure. The test environment is usually designed to be identical, or as close as possible, to the anticipated user's environment, including extremes of such. These test cases must each be accompanied by test case input data or a formal description of the operational activities (or both) to be performed, intended to thoroughly exercise the specific case, and a formal description of the expected results.
6. SOURCE CODE

1. Create Account

using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Data.OleDb;
public partial class frmreg : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
DateTime dt;
dt = DateTime.Now;
Label2.Text = dt.ToString("dd/MM/yyyy");

}
protected void Button1_Click(object sender, EventArgs e)
{
OleDbConnection con;
string strConnString =
ConfigurationManager.ConnectionStrings["LocalSqlServer1"].ConnectionString;
con = new OleDbConnection(strConnString);
con.Open();
// Check whether the chosen username already exists
OleDbCommand cmd = new OleDbCommand("select uname from reg where uname='" + TextBox6.Text + "'", con);
OleDbDataAdapter da = new OleDbDataAdapter(cmd);
DataTable dt = new DataTable();
da.Fill(dt);
if (dt.Rows.Count != 0)
{

MessageBox("Username Already Exists!..");


}
else
{
indata();

}
}
protected void Button2_Click(object sender, EventArgs e)
{
DropDownList1.ClearSelection();
DropDownList2.ClearSelection();
TextBox2.Text = "";
TextBox3.Text = "";
TextBox6.Text = "";
TextBox7.Text = "";
TextBox1.Text = "";
}
private void indata()
{
OleDbConnection con;
string strConnString =
ConfigurationManager.ConnectionStrings["LocalSqlServer1"].ConnectionString;
con = new OleDbConnection(strConnString);
con.Open();
// Insert the registration details and create a default profile row
OleDbCommand cmd = new OleDbCommand("insert into reg(Name1,Gender,Country,Email_ID,Uname,Pass1,date1) values('" + TextBox2.Text + "','" + DropDownList1.SelectedValue + "','" + DropDownList2.SelectedValue + "','" + TextBox3.Text + "','" + TextBox6.Text + "','" + TextBox7.Text + "','" + Label2.Text + "')", con);
OleDbCommand cmd1 = new OleDbCommand("insert into profile(uname,name1,image1) values('" + TextBox6.Text + "','" + TextBox2.Text + "','Photo.jpg')", con);
cmd.ExecuteNonQuery();
cmd1.ExecuteNonQuery();
DropDownList1.ClearSelection();
DropDownList2.ClearSelection();
TextBox2.Text = "";
TextBox3.Text = "";
TextBox6.Text = "";
TextBox7.Text = "";
TextBox1.Text = "";
Server.Transfer("Default.aspx");
}
private void MessageBox(string msg)
{
Label lbl = new Label();
lbl.Text = "<script language='javascript'>" + Environment.NewLine +
"window.alert('" + msg + "')</script>";
Page.Controls.Add(lbl);
}
}

2. Login page

using System;
using System.Data;
using System.Configuration;
using System.Collections.Generic;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data.OleDb;
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
}
protected void Button2_Click(object sender, EventArgs e)
{
OleDbConnection con;
string strConnString =
ConfigurationManager.ConnectionStrings["LocalSqlServer1"].ConnectionString;
con = new OleDbConnection(strConnString);
con.Open();
// Validate the supplied username and password against the reg table
OleDbCommand cmd = new OleDbCommand("select Uname,Pass1 from reg where Uname='" + TextBox1.Text + "' and Pass1='" + TextBox2.Text + "'", con);
OleDbDataAdapter da = new OleDbDataAdapter(cmd);
DataTable dt = new DataTable();
da.Fill(dt);
if (dt.Rows.Count != 0)
{
Session["user"] = TextBox1.Text;
Server.Transfer("frmhome.aspx");
}
else
{
MessageBox("Invalid UserName or Password");
}
}
private void MessageBox(string msg)
{
Label lbl = new Label();
lbl.Text = "<script language='javascript'>" + Environment.NewLine +
"window.alert('" + msg + "')</script>";
Page.Controls.Add(lbl);
}
}
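The login query above assembles SQL by string concatenation, which is vulnerable to SQL injection. A safer sketch of the same check, using OleDb positional parameters (same table, column and control names assumed), might look like this:

// Parameterized variant of the login check; OleDb uses positional '?' markers.
OleDbCommand cmd = new OleDbCommand(
    "select Uname, Pass1 from reg where Uname = ? and Pass1 = ?", con);
cmd.Parameters.AddWithValue("?", TextBox1.Text);
cmd.Parameters.AddWithValue("?", TextBox2.Text);
using (OleDbDataReader dr = cmd.ExecuteReader())
{
    if (dr.Read())
    {
        Session["user"] = TextBox1.Text;
        Server.Transfer("frmhome.aspx");
    }
    else
    {
        MessageBox("Invalid UserName or Password");
    }
}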

3. Profile Information

using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Data.OleDb;
public partial class frmprofile : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
if (Session["user"] == null)
{
Server.Transfer("Default.aspx");
}
else
{
if (!IsPostBack)
{
OleDbConnection con;
string strConnString =
ConfigurationManager.ConnectionStrings["LocalSqlServer1"].ConnectionString;
con = new OleDbConnection(strConnString);
con.Open();
OleDbCommand cmd = new OleDbCommand("Select qual,desg,address1,phno1,image1 from profile where uname='" + Session["user"].ToString() + "'", con);
// OleDbCommand cmd = new OleDbCommand("Select qual,desg,address1,phno1,image1 from profile", con);
OleDbDataReader dr;
dr = cmd.ExecuteReader();
if (dr.Read())
{
DropDownList1.SelectedValue = dr["qual"].ToString();
DropDownList2.SelectedValue = dr["desg"].ToString();
TextBox2.Text = dr["address1"].ToString();
TextBox1.Text = dr["phno1"].ToString();
FileUpload1.Equals(dr["image1"].ToString()); // note: Equals has no effect here; the stored image name is read but not displayed

}
dr.Close();
con.Close();
}
}
}
protected void Button1_Click(object sender, EventArgs e)
{
string dd = null;
string _fileExt = null;
dd = System.IO.Path.GetFileName(FileUpload1.PostedFile.FileName);
_fileExt = System.IO.Path.GetExtension(dd);
OleDbConnection con;
string strConnString =
ConfigurationManager.ConnectionStrings["LocalSqlServer1"].ConnectionString;
con = new OleDbConnection(strConnString);
con.Open();
OleDbCommand cmd = new OleDbCommand("update profile set qual='" + DropDownList1.SelectedValue + "',desg='" + DropDownList2.SelectedValue + "',image1='" + dd + "',status1='0',status2='0' where uname='" + Session["user"].ToString() + "'", con);
OleDbCommand cmd1 = new OleDbCommand("update profile set address1='" + TextBox2.Text.Trim() + "',phno1='" + TextBox1.Text.Trim() + "' where uname='" + Session["user"].ToString() + "'", con);
OleDbCommand cmd2 = new OleDbCommand("update friends set image2='" + dd + "' where uname2='" + Session["user"].ToString() + "'", con);
cmd.ExecuteNonQuery();
cmd1.ExecuteNonQuery();
cmd2.ExecuteNonQuery();
FileUpload1.PostedFile.SaveAs(Server.MapPath(".//Photo//") + dd);
con.Close();
Server.Transfer("frmprofile.aspx");
}
private void MessageBox(string msg)
{
Label lbl = new Label();
lbl.Text = "<script language='javascript'>" + Environment.NewLine +
"window.alert('" + msg + "')</script>";
Page.Controls.Add(lbl);
}
}
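Note: Button1_Click above saves the uploaded photo with Server.MapPath(".//Photo//") + dd. A small, hedged alternative using Path.Combine (same FileUpload1 control and Photo folder assumed) avoids brittle string concatenation and strips any directory components from the client-supplied name:

string safeName = System.IO.Path.GetFileName(FileUpload1.PostedFile.FileName);
string target = System.IO.Path.Combine(Server.MapPath("~/Photo"), safeName);
FileUpload1.PostedFile.SaveAs(target);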

4. Key Generation & Encryption

using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Drawing;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Data.OleDb;
using System.IO;
public partial class frmupload : System.Web.UI.Page
{
double myID;
string str;
// Note: static fields on an ASP.NET page are shared across all users and
// requests; per-user state such as these belongs in Session or ViewState.
static string loadImage = "";
static string f_encmsg = "";
int m_Ji, n_Ji;
protected void Page_Load(object sender, EventArgs e)
{
if (Session["user"] == null)
{
Server.Transfer("Default.aspx");
}
else
{
if (!IsPostBack)
{
fillgrid1();
txtfill();
}
}
}
protected void Button1_Click(object sender, EventArgs e)
{
Panel1.Visible = true;
string dd = null;
string _fileExt = null;
dd = System.IO.Path.GetFileName(FileUpload1.PostedFile.FileName);
_fileExt = System.IO.Path.GetExtension(dd);
if (_fileExt == ".jpg" || _fileExt == ".bmp" || _fileExt == ".png" || _fileExt ==
".jpeg")
{
FileInfo fi = new FileInfo(Server.MapPath(".//files//") + dd);
System.Drawing.Image UploadedImage =
System.Drawing.Image.FromStream(FileUpload1.PostedFile.InputStream);
//Determine width and height of uploaded image
Image1.ImageUrl = "~/files/" + dd;
float UploadedImageWidth = UploadedImage.PhysicalDimension.Width;
float UploadedImageHeight = UploadedImage.PhysicalDimension.Height;
Label2.Text = fi.Name;
double imageMB = ((fi.Length / 1024f) / 1024f);
Label3.Text = UploadedImageHeight + " X " + UploadedImageWidth;
Label4.Text = imageMB.ToString(".##") + "MB";
}
else
{
MessageBox("Invalid Image Format");
}

}
protected void DataList1_ItemCommand(object source,
DataListCommandEventArgs e)
{
}
private void encdec()
{
txtfill();
string dd = null;
dd = System.IO.Path.GetFileName(FileUpload1.PostedFile.FileName);
FileInfo fi = new FileInfo(Server.MapPath(".//Photo//") + dd);
System.Drawing.Image UploadedImage =
System.Drawing.Image.FromStream(FileUpload1.PostedFile.InputStream);
//Determine width and height of uploaded image
float UploadedImageWidth = UploadedImage.PhysicalDimension.Width;
float UploadedImageHeight = UploadedImage.PhysicalDimension.Height;
//TextBox4.Text = "File Name: " + fi.Name;
double imageMB = ((fi.Length / 1024f) / 1024f);
//TextBox4.Text = "Image Resolution: " + UploadedImageHeight + " X " +
UploadedImageWidth;
//TextBox4.Text = "Image Size: " + imageMB.ToString(".##") + "MB";
loadImage = BitConverter.ToString(ConvertImageToByte(UploadedImage));
File.WriteAllText("C:\\test.txt", loadImage);
//Encyption================
int P = Convert.ToInt32(TextBox3.Text);
int Q = Convert.ToInt32(TextBox5.Text);
m_Ji = Su_Ji(P, Q);
n_Ji = Phi_Ji(P, Q);
char[] Ming;
Ming = loadImage.ToCharArray();
foreach (char c in Ming)
{
string ming;
int Ming_Num = J_Mi((int)c, Get_e(n_Ji), m_Ji);
Char[] Ming_Char = Ming_Num.ToString().ToCharArray();

if (Ming_Char.GetLength(0) == 1)
{
ming = "000" + Ming_Char[0].ToString();
Ming_Char = ming.ToCharArray();
}

if (Ming_Char.GetLength(0) == 2)
{
ming = "00" + Ming_Char[0].ToString() + Ming_Char[1].ToString();
Ming_Char = ming.ToCharArray();
}

if (Ming_Char.GetLength(0) == 3)
{
ming = "0" + Ming_Char[0].ToString() + Ming_Char[1].ToString() +
Ming_Char[2].ToString();
Ming_Char = ming.ToCharArray();
}
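// Note: the three if-blocks above left-pad the encrypted value to four digits;
// Ming_Num.ToString("D4") would produce the same fixed-width form in one call.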

foreach (char m in Ming_Char)
{
TextBox7.Text += m;
}
}
File.WriteAllText("C:\\test1.txt", TextBox7.Text);
//Decryption
string linetotext = File.ReadAllText("C:\\test1.txt");
TextBox2.Text = "";

string[] Siyao_Num;
Siyao_Num = TextBox6.Text.Split(new Char[] { ',' }, 2);
int Siyao_d = Convert.ToInt32(Siyao_Num[0]);
int Siyao_n = Convert.ToInt32(Siyao_Num[1]);
for (int i = 0; i <= (linetotext.Length - 4); i += 4)
{
int Mi_Number = Convert.ToInt32(linetotext.Substring(i, 4));
string Ming_wen = J_Mi(Mi_Number, Siyao_d, Siyao_n).ToString();
char Ming_char = Convert.ToChar(Convert.ToInt32(Ming_wen));
TextBox2.Text += Ming_char.ToString();
}
File.WriteAllText("C:\\test2.txt", TextBox2.Text);
//Convert to image====
string linetotext1 = File.ReadAllText("C:\\test2.txt");
System.Drawing.Image UploadedImage1 =
ConvertByteToImage(DecodeHex(linetotext1));
UploadedImage1.Save("C:\\Img.jpg",
System.Drawing.Imaging.ImageFormat.Jpeg);
}
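// DecodeHex reverses the "AA-BB-CC" hex string produced by
// BitConverter.ToString back into the original byte array.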
public static byte[] DecodeHex(string hextext)
{
String[] arr = hextext.Split('-');
byte[] array = new byte[arr.Length];
for (int i = 0; i < arr.Length; i++)
array[i] = Convert.ToByte(arr[i], 16);
return array;
}
public static Bitmap ConvertByteToImage(byte[] bytes)
{
return (new Bitmap(System.Drawing.Image.FromStream(new
MemoryStream(bytes))));
}
public static byte[] ConvertImageToByte(System.Drawing.Image My_Image)
{
// Re-encode the image as JPEG in memory and return the raw bytes.
MemoryStream m1 = new MemoryStream();
new Bitmap(My_Image).Save(m1, System.Drawing.Imaging.ImageFormat.Jpeg);
return m1.ToArray();
}
public void fillgrid1()
{

}
private void txtfill()
{
OleDbConnection con;
string strConnString =
ConfigurationManager.ConnectionStrings["LocalSqlServer1"].ConnectionString;
con = new OleDbConnection(strConnString);
con.Open();
OleDbCommand cmd = new OleDbCommand("select max(idno) as idno from
adphoto where uname='" + Session["user"].ToString() + "'", con);
OleDbDataReader dr;
dr = cmd.ExecuteReader();
while (dr.Read())
{
string strid = dr["idno"].ToString();
if (strid == "")
{
str= "1";
TextBox1.Text = "1";

}
else
{
myID = Convert.ToDouble(dr["idno"]) + 1;
str = myID.ToString();
TextBox1.Text = myID.ToString();
}
}
dr.Close();
con.Close();
}
protected void Button2_Click(object sender, EventArgs e)
{
try
{
int P = Convert.ToInt32(TextBox3.Text);
int Q = Convert.ToInt32(TextBox5.Text);
if (!IsPrime(P))
{
MessageBox("Enter Prime Number:P");
}
else if (!IsPrime(Q))
{
MessageBox("Enter Prime Number:Q");
}
else
{
m_Ji = Su_Ji(P, Q);
n_Ji = Phi_Ji(P, Q);
TextBox4.Text = Get_e(n_Ji) + "," + m_Ji;
TextBox6.Text = Get_d(n_Ji, Get_e(n_Ji)) + "," + m_Ji;

}
}
catch (Exception)
{
MessageBox("Enter Number Only");
}
}
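// Worked example (values chosen here for illustration): with P = 11 and Q = 13,
// Su_Ji gives m_Ji = 143 and Phi_Ji gives n_Ji = 120. Get_e(120) examines 119,
// which is not prime, and returns its smallest factor, 7; Get_d(120, 7) returns
// (119 * 6) / 7 + 1 = 103, and indeed 7 * 103 = 721 = 6 * 120 + 1 ≡ 1 (mod 120).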
private int Su_Ji(int m, int n)
{
int t;
t = m * n;
return t;
}
private int Phi_Ji(int m, int n)
{
int t;
t = (m - 1) * (n - 1);
return t;
}
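// Note: despite its name, phi() below is a trial-division primality test,
// not Euler's totient function.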
private bool phi(int Num)
{
bool flag = true;
for (int start = 2; start < (Num / 2); start++)
{
if (Num % start == 0)
{
flag = false;
break;
}
}
return flag;
}
private void MessageBox(string msg)
{
Label lbl = new Label();
lbl.Text = "<script language='javascript'>" + Environment.NewLine +
"window.alert('" + msg + "')</script>";
Page.Controls.Add(lbl);
}
private int Get_e(int m)
{
m = m - 1;
int start = m;
if (phi(m))
{
start = m;
}
else
{
for (start = 2; start < m / 2; start++)
{
if (m % start == 0)
{
break;
}
}
}
return start;
}

private int Get_d(int n, int e)
{
int d;
d = (n - 1) * (e - 1) / e + 1;
return d;
}
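// Note: Get_d's closed form is exact only when n % e == 1 (as in the worked
// example above, where 120 % 7 == 1). The extended Euclidean algorithm below
// is a hedged, general alternative for computing d with (d * e) % phi == 1;
// it assumes e and phi are coprime.
private int ModInverse(int e, int phi)
{
int t = 0, newT = 1, r = phi, newR = e;
while (newR != 0)
{
int q = r / newR;
int tmp = t - q * newT; t = newT; newT = tmp;
tmp = r - q * newR; r = newR; newR = tmp;
}
// Normalize the coefficient into [0, phi).
return t < 0 ? t + phi : t;
}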
private int J_Mi(int m, int n, int x)
{
int d = m;
int Yu = d % x;
for (int i = 1; i < n; i++)
{
d = Yu * m;
Yu = d % x;
}
return Yu;
}
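// Note: J_Mi computes (m ^ n) mod x by repeated multiplication, so the
// intermediate product can overflow int for large inputs;
// System.Numerics.BigInteger.ModPow is the overflow-safe standard alternative.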
public static bool IsPrime(int number)
{
if (number < 2) return false;
if (number % 2 == 0) return (number == 2);
int root = (int)Math.Sqrt((double)number);
for (int i = 3; i <= root; i += 2)
{
if (number % i == 0)
return false;
}
return true;
}
protected void Button3_Click(object sender, EventArgs e)
{
txtfill();
string msgname = null;
msgname = Session["user"].ToString() + "_" + TextBox1.Text + ".txt";
OleDbConnection con = new
OleDbConnection(ConfigurationManager.ConnectionStrings["LocalSqlServer1"].
ConnectionString);
con.Open();
OleDbCommand cmd = new OleDbCommand("insert into
adphoto(uname,image1,content1,idno,key1)values('" + Session["user"].ToString()
+ "','" + msgname + "','" + TextBox8.Text + "','" + TextBox1.Text + "','" +
TextBox6.Text + "')", con);
cmd.ExecuteNonQuery();
File.WriteAllText(Server.MapPath(".//EncMsg//") + msgname, f_encmsg);
con.Close();
Server.Transfer("frmupload.aspx");
}

protected void Button4_Click(object sender, EventArgs e)
{
txtfill();
string dd = null;
string _fileExt = null;
dd = System.IO.Path.GetFileName(FileUpload1.PostedFile.FileName);
_fileExt = System.IO.Path.GetExtension(dd);
if (_fileExt == ".jpg" || _fileExt == ".bmp" || _fileExt == ".png" || _fileExt ==
".jpeg")
{
FileUpload1.PostedFile.SaveAs(Server.MapPath(".//files//") + dd);
FileInfo fi = new FileInfo(Server.MapPath(".//files//") + dd);
System.Drawing.Image UploadedImage =
System.Drawing.Image.FromStream(FileUpload1.PostedFile.InputStream);
//Determine width and height of uploaded image
float UploadedImageWidth = UploadedImage.PhysicalDimension.Width;
float UploadedImageHeight = UploadedImage.PhysicalDimension.Height;
//TextBox4.Text = "File Name: " + fi.Name;
double imageMB = ((fi.Length / 1024f) / 1024f);
//TextBox4.Text = "Image Resolution: " + UploadedImageHeight + " X " +
UploadedImageWidth;
//TextBox4.Text = "Image Size: " + imageMB.ToString(".##") + "MB";
loadImage = BitConverter.ToString(ConvertImageToByte(UploadedImage));
//File.WriteAllText("C:\\test.txt", loadImage);
TextBox7.Text = loadImage;
}
else
{
MessageBox("Invalid Image Format");
}

}
protected void Button5_Click(object sender, EventArgs e)
{
if (TextBox7.Text == "")
{
MessageBox("Enter Image to byte data");
}
else
{

//Encyption================
int P = Convert.ToInt32(TextBox3.Text);
int Q = Convert.ToInt32(TextBox5.Text);
m_Ji = Su_Ji(P, Q);
n_Ji = Phi_Ji(P, Q);
char[] Ming;
Ming = loadImage.ToCharArray();
string encmsg = "";
foreach (char c in Ming)
{
string ming;
int Ming_Num = J_Mi((int)c, Get_e(n_Ji), m_Ji);
Char[] Ming_Char = Ming_Num.ToString().ToCharArray();

if (Ming_Char.GetLength(0) == 1)
{
ming = "000" + Ming_Char[0].ToString();
Ming_Char = ming.ToCharArray();
}

if (Ming_Char.GetLength(0) == 2)
{
ming = "00" + Ming_Char[0].ToString() + Ming_Char[1].ToString();
Ming_Char = ming.ToCharArray();
}

if (Ming_Char.GetLength(0) == 3)
{
ming = "0" + Ming_Char[0].ToString() + Ming_Char[1].ToString() +
Ming_Char[2].ToString();
Ming_Char = ming.ToCharArray();
}
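// Same zero-padding as in encdec(); Ming_Num.ToString("D4") would be equivalent.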
foreach (char m in Ming_Char)
{
encmsg += m;
}
}
f_encmsg = encmsg + "-" + TextBox1.Text;
TextBox2.Text = f_encmsg;

MessageBox("Encryption done successfully....");


}

}
protected void Button6_Click(object sender, EventArgs e)
{
Panel1.Visible = false;
}
}
7. SCREENSHOTS

Member Registration

Home page
View Profile

Friends List
System Initialization and Key Generation

Data Encryption
Encrypted Files

Key Request
Extraction

Trapdoor Search & Test


8. CONCLUSION

Considering the practical problem of a privacy-preserving data sharing system based on public cloud storage, which requires a data owner to distribute a large number of keys to users so that they can access his or her documents, we propose for the first time the concept of key-aggregate searchable encryption (KASE) and construct a concrete KASE scheme. Both the analysis and the evaluation results confirm that our work provides an effective solution to building a practical data sharing system based on public cloud storage.
In a KASE scheme, the owner needs to distribute only a single key to a user when sharing many documents with that user, and the user needs to submit only a single trapdoor when querying over all documents shared by the same owner. However, if a user wants to query over documents shared by multiple owners, he must still generate and submit multiple trapdoors to the cloud.
9. FUTURE ENHANCEMENT

In future work, we plan to adopt signcryption, a public-key primitive that performs the functions of both digital signature and encryption in a single operation and provides a secure and efficient way of protecting communication between the sender and the receiver. The enhanced system would add authentication to the data secured with KASE by attaching a digital signature in cloud storage, following a generic approach of constructing the signature over a SHA-256 digest. This would prevent data stored on the cloud server from being modified without detection.
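As a hedged illustration of this enhancement, the sketch below signs a file's SHA-256 digest with RSA using the standard System.Security.Cryptography API; the SignatureSketch class, file path, and key handling are placeholders for illustration, not part of the implemented system.

using System.IO;
using System.Security.Cryptography;

public static class SignatureSketch
{
// SignData hashes the input with SHA-256 before applying the RSA signature.
public static byte[] SignFile(string path, RSA rsa)
{
byte[] data = File.ReadAllBytes(path);
return rsa.SignData(data, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
}
// The verifier recomputes the digest and checks it against the signature.
public static bool VerifyFile(string path, byte[] signature, RSA rsa)
{
byte[] data = File.ReadAllBytes(path);
return rsa.VerifyData(data, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
}
}

In a quick test, RSA.Create() can supply the key pair; in a deployed system the private key would remain with the data owner.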
