
1. INTRODUCTION

Cloud storage enables users to remotely store their data and enjoy on-demand,
high-quality cloud applications without the burden of local hardware and
software management. Though the benefits are clear, such a service also
relinquishes users' physical possession of their outsourced data, which inevitably
poses new security risks to the correctness of the data in the cloud. In order to
address this new problem and further achieve a secure and dependable cloud storage
service, this work provides a flexible distributed storage integrity
auditing mechanism, utilizing the homomorphic token and distributed erasure-coded
data.

The proposed design allows users to audit the cloud storage with very
lightweight communication and computation cost. The auditing result not only
ensures a strong cloud storage correctness guarantee, but also simultaneously achieves
fast data error localization, i.e., the identification of the misbehaving server(s). Considering
the cloud data are dynamic in nature, the proposed design further supports secure and
efficient dynamic operations on outsourced data, including block modification,
deletion, and append. Analysis shows the proposed scheme is highly efficient and
resilient against Byzantine failure, malicious data modification attack, and even
server colluding attacks.

2. SYSTEM SPECIFICATION

2.1 Hardware Requirements

Processor : Second-generation Intel Core i3 processor
Mother board : 61 chipset board
RAM : 2 GB DDR3
Hard disk : 250 GB
Monitor : 18.5" LED
Keyboard : USB mechanical keyboard
Mouse : USB optical mouse

2.2 Software Requirements

Operating system : Windows 7


IDE : Visual Studio 2010
Front end : ASP.Net with C#
Back end : SQL Server 2008

3. SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

In contrast to traditional solutions, where the IT services are under proper
physical, logical, and personnel controls, Cloud Computing moves the application
software and databases to the large data centers, where the management of the data
and services may not be fully trustworthy. This unique attribute, however, poses
many new security challenges which have not been well understood.

1. No user data privacy


2. Security risks towards the correctness of the data in the cloud

3.2 PROPOSED SYSTEM

The users focus on cloud data storage security, which has always been an
important aspect of quality of service. To ensure the correctness of users' data in the
cloud, the user proposes an effective and flexible distributed scheme with two salient
features, as opposed to its predecessors. By utilizing the homomorphic token with
distributed verification of erasure-coded data, our scheme achieves the integration of
storage correctness insurance and data error localization, i.e., the identification of the
misbehaving server(s). Unlike most prior works, the new scheme further supports
secure and efficient dynamic operations on data blocks, including data update, delete,
and append.

1. In cloud computing storage, the user proposes an effective and flexible
distributed scheme with explicit dynamic data support to ensure the
correctness of users' data in the cloud.

2. Cloud Computing is not just a third party data warehouse. The data stored
in the cloud may be frequently updated by the users, including insertion,
deletion, modification, appending, etc. To ensure storage correctness
under dynamic data update is hence of paramount importance.

3.3 SYSTEM STUDY

Feasibility study

The feasibility of the project is analyzed in this phase and business proposal is
put forth with a very general plan for the project and some cost estimates. For
feasibility analysis, some understanding of the major requirements for the system is
essential. Three key considerations involved in the feasibility analysis are,

 Economical feasibility
 Technical feasibility
 Social feasibility

Economical feasibility

This study is carried out to check the economic impact that the system will
have on the organization. The amount of funds that the company can pour into the
research and development of the system is limited. The expenditures must be
justified. Thus the developed system is well within the budget, and this was
achieved because most of the technologies used are freely available.

Technical feasibility

This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not have a high demand on
the available technical resources. The developed system must have a modest
requirement, as only minimal or null changes are required for implementing this
system.

Social Feasibility

This aspect of the study is to check the level of acceptance of the system by the
user. The user must not feel threatened by the system, but instead must accept it as a
necessity. The level of acceptance by the users solely depends on the methods that are
employed to educate the users about the system and to make them familiar with it.

4. SOFTWARE CONCEPT

4.1 .Net Technology

The Microsoft .NET Framework

The .NET Framework is the infrastructure for the Microsoft .NET platform.
The .NET Framework is an environment for building, deploying, and running web
applications and web Services. Microsoft's first server technology ASP was a
powerful and flexible "programming language". It was not an application framework
and not an enterprise development tool. The Microsoft .NET Framework was
developed to solve this problem.

.NET Framework keywords

 Easier and quicker programming
 Reduced amount of code
 Declarative programming model
 Richer server control hierarchy with events
 Better support for development tools

The .NET Framework consists of 3 main parts

Programming languages:
 C# (Pronounced C sharp)
 Visual Basic (VB .NET)
 J# (Pronounced J sharp)

Server technologies and client technologies:


 ASP .NET (Active Server Pages)
 Windows Forms (Windows desktop solutions)
 Compact Framework (PDA / Mobile solutions)

Development environments:
 Visual Studio .NET (VS .NET)
 Visual Web Developer

ACTIVE SERVER PAGES .NET (ASP.NET)

The .NET Framework is such a comprehensive platform that it can be a little
difficult to describe. The users have heard it described as a Development Platform, an
Execution Environment, and an Operating System among other things. In fact, in
some ways each of these descriptions is accurate, if not sufficiently precise.

The software industry has become much more complex since the introduction
of the Internet. Users have become both more sophisticated and less sophisticated at
the same time. (The user suspects that not many individual users have undergone both
metamorphoses, but as a body of users this has certainly happened.) Folks who had
never touched a computer less than five years ago are now comfortably including the
Internet in their daily lives. Meanwhile, the technophile or professional computer user
has become much more advanced, as have their expectations from software.

It is this collective expectation from software that drives our industry. Each
time a software developer creates a successful new idea, they raise user expectations
for the next new feature. In a way this has been true for years. But now software
developers face the added challenge of addressing the Internet and Internet-users in
many applications that in the past were largely unconnected. It is this new challenge
that the .NET Framework directly addresses.

Software that addresses the Internet must be able to communicate. However,
the Internet is not just about communication. This assumption has led the software
industry down the wrong path in the past. Communication is simply the base
requirement for software in an Inter-networked world.

The Web Development Environment

Many people are Web page "authors." Armed with a visual Web production
tool such as Microsoft FrontPage, Macromedia Dreamweaver, Adobe Go Live!, and
others, even a technical novice can throw together a Web site with decorated text and
sparkling graphics to present all manner of personal statements on the Web. We may
be one of these people, among the many spanning the generations from toddlers to
grandparents, who have their own Web sites or have produced sites for others. It is no
small trick to do so, but such users are just dipping a toe into an ocean of opportunity.

Fewer people are Web site "developers." A smaller cadre possesses the
technical know-how and system-level sensibilities to produce sites that are truly
useful, responsive, self-sustaining, and integral to the public lives they represent.
More importantly, fewer people are able to create commercial-grade Web sites, those
that are foundational to the operations and management of modern organizations.
Here is where the action is, and herein lie the know-how and skills to separate oneself
from the crowded pack of Web page authors. We are soon to join these latter
ranks of professional Web developers, possessing a toolkit to pursue the depths and
breadths of Web technologies.

Web Page Authoring

Authoring Web pages is not a particularly difficult task nowadays. Many
standard desktop software packages come equipped with built-in features to convert
word processing documents, spreadsheets, databases, and the like to coded documents
that are ready for access across the Web. These and other authoring packages permit
creation of Web pages with drag-and-drop ease. In most cases it is not even necessary
to know or even to be aware of the special HTML (Hyper Text Markup Language)
coding that takes place behind the scenes. It is all produced for the user in documents
that can be published on the Web for immediate access. If the users know the HTML
language -- more recently XHTML (Extensible Hyper Text Markup Language) --
then they can author pages with a simple text editor, usually gaining a great deal more
control over their structure and formatting than is possible with drag-and-drop
methods. In addition, the user has the ability to easily integrate HTML code,
browser scripting languages, Java applets, multimedia plug-ins, and other processing
features to bring a modicum of user interactivity to Web pages. Irrespective of the
substance or sizzle of such pages, however, their purpose tends to be limited to
presenting interesting or informative text and graphics for personal consumption. It is
unlikely we can tackle the task of producing a commercial Web site or Web-based
business system armed with HTML and a few plug-ins.

Web Development

Professional Web development goes well beyond the use of HTML markup
codes and a few plug-ins and scripting techniques to make attractive and informative
Web pages. It involves the use of special strategies to produce applications best
characterized as Web-based, three-tier, client/server, information processing
systems. These terms are considered in more detail below for understanding the
broader-ranging purposes for which Web pages and Web sites are developed.

Information Processing Systems

Web technologies can be used not just to produce simple personal or
promotional Web sites; they are becoming important means to support the foundational
business processes of modern organizations, their underlying operational and
management-support functions. The technical infrastructures to realize these functions
are roughly classified into three types of Web-based information processing systems,
termed intranets, internets, and extranets.

Intranets are private, internal systems that help carry out the day-to-day
information processing and management information functions of organizations. Web-based
intranets service these standard functions and in doing so impact basic organizational
systems such as accounting and financial reporting systems, marketing and sales
systems, distribution systems, and production systems. Web-based intranets will likely
become the primary technical means through which organizations internally carry
out their business and work-flow processes.

Internets are public information systems. They include public sites that
provide news, information, and entertainment; electronic commerce (e-commerce)
sites to market and sell products and services; government sites to inform or service
the general public; and educational sites to provide local and remote access to
education and training. In all sectors of society public internets are providing goods,
services, and information to the public through the World Wide Web (WWW) and
its associated networks and facilities.

Extranets are business-to-business (B2B) systems that manage electronic
data interchange (EDI) between business enterprises. These systems facilitate the
flow of information between organizations -- between a company and its suppliers
and between the company and its distributors -- to help coordinate the sequence of
purchasing, production, and distribution. Electronic data interchange helps eliminate
the paper flow accompanying business transactions by using Web technologies to
transfer electronic documents for processing between computers rather than between
people. As Web-based systems, EDI applications eliminate the difficulties of
transmitting information among different hardware and software platforms with
inherently different information formats and with different protocols for exchanging
information.

Web-Based

The term "Web-based" implies that information processing systems rely on


the technology of the Internet, particularly on that portion known as the World Wide
Web (WWW), for implementation. That is, Web-based systems operate within a
technical framework with the following characteristics. First, systems operate across
public, rather than private, data networks. They communicate over the Internet, the
world-wide interconnected networks of computers that are publicly accessible. This
means that Web-based information systems can literally span the globe with
processing activities shared among components located anywhere on Earth and
beyond where Web service is available.

Second, these communications networks are based on open and public
technical standards such as Ethernet architectures, TCP/IP transmission protocols,
and HTTP (HyperText Transfer Protocol), FTP (File Transfer Protocol), SMTP
(Simple Mail Transfer Protocol), and other common application protocols. These are not
private or proprietary standards but are fundamentally open and free to public use.

Third, Web-based systems use common, often-times free, software for
development and delivery. Even the most complex systems can be coded with nothing
more than a standard text editor and placed into production through common Web
server software. This software is usually built into the server computer's

operating system or can often be downloaded for free. Also, interaction with Web-
based systems takes place through standard Web browsers rather than specially
configured hardware and software. Of course, as Web-based systems grow and
become more crucial to operations they require additional components to increase
capacity, speed, and security. However, most hardware and software components are
off-the-shelf varieties rather than special-made.

Thus, common, non-specialized, non-proprietary hardware and software
systems provide the technical environment for developing information processing
systems and for operating and managing information processing activities.

Three-tier, Client/Server Architecture

The term "client/server" pertains to the use of server-based networks to


manage resource sharing and to distribute processing tasks among hardware and
software components. Within Web-based client/server networks the distribution of
processing tasks occurs in three tiers that correspond to the three primary
hardware/software components of the system.

Historically, the World Wide Web has functioned simply for information
delivery. People have used it to gather information about a world of topics for which
millions of Web sites have provided a portal. More recently, though, the Web has
become something more than an electronic library of information. It has become a
communications, information, and transactional platform through which many
economic, social, political, educational, and cultural activities are enjoined.

Information Delivery Model

When a web site functions as an information delivery system, web
development activities are fairly simple and straightforward. First, information
content is entered into a document that will eventually become a web page. This
content is surrounded by special layout and formatting codes written in the Hyper Text
Markup Language to control its structure and appearance in a web browser. Then the
document is saved to a web server computer to await public access. Users access the

document by entering its web address in their browsers. This address, known as the
URL or Uniform Resource Locator, specifies the site where the page is stored and
its directory location on the web server. The server, in turn, retrieves the page and
sends it to the browser, which interprets the HTML code and renders the document on
the computer screen.

There are certain consequences of basing web access on an information
delivery model and in following the traditional web authoring process. In the first
place, the information content of the web page tends to get fixed or frozen into place.
It becomes embedded in and closely linked with the HTML formatting codes that
surround it. Thus, it becomes difficult to make changes to the content without
rewriting and reediting its presentation formats. So, it becomes troublesome to keep
pages up to date, especially if the content changes routinely.

Information Processing Model

To overcome this static, passive use of the web the need is to begin viewing
the web not just as a simple information delivery system but as a full-featured
information processing system. This change of viewpoint means that the web itself
and its comprising sites and pages need to be viewed as mechanisms to perform the
full complement of input, processing, output, and storage activities required to
deliver dynamic, active content -- in short, to provide the basic functionalities of an
information processing system.

Introduction to ADO.NET

This section is an introduction to ADO.NET. It introduces primary ADO.NET
concepts and objects that users will learn about in later lessons. Here are the
objectives of this lesson:
objectives of this lesson:
 Learn what ADO.NET is.
 Understand what a data provider is.
 Understand what a connection object is.
 Understand what a command object is.
 Understand what a Data Reader object is.

 Understand what a DataSet object is.
 Understand what a DataAdapter object is.

Introduction

ADO.NET is an object-oriented set of libraries that allows the users to interact
with data sources. Commonly, the data source is a database, but it could also be a text
file, an Excel spreadsheet, or an XML file. For the purposes of this tutorial, the user
will look at ADO.NET as a way to interact with a database. As the users are probably
aware, there are many different types of databases available. For example, there is
Microsoft SQL Server, Microsoft Access, Oracle, Borland InterBase, and IBM DB2,
just to name a few. To further refine the scope of this tutorial, all of the examples will
use SQL Server. SQL Server is only one of the many data sources the user can
interact with by using ADO.NET. If the users need help with MSDE 2000, they should
refer to the Microsoft web site, where they can find pertinent information on
licensing and technical assistance.

Data Providers

The users know that ADO.NET allows us to interact with different types of
data sources and different types of databases. However, there isn't a single set of
classes that allows the user to accomplish this universally. Since different data sources
expose different protocols, the users need a way to communicate with the right data
source using the right protocol. Some older data sources use the ODBC protocol,
many newer data sources use the OLE DB protocol, and there are more data sources
every day that allow us to communicate with them directly through .NET ADO.NET
class libraries.

ADO.NET provides a relatively common way to interact with data sources,
but it comes in different sets of libraries for each way the user can talk to a data source.
These libraries are called Data Providers and are usually named for the protocol or
data source type they allow the user to interact with. Table 1 lists some well-known
data providers, the API prefix they use, and the type of data source they allow the
user to interact with.

An example may help us to understand the meaning of the API prefix. One of
the first ADO.NET objects the user will learn about is the connection object, which
allows the user to establish a connection to a data source. If the users are
using the OLE DB Data Provider to connect to a data source that exposes an OLE DB
interface, they would use a connection object named OleDbConnection.
Similarly, the connection object name would be prefixed with Odbc or Sql for an
OdbcConnection object on an ODBC data source or a SqlConnection object on
a SQL Server database, respectively. Since the users are using MSDE in this tutorial
(a scaled-down version of SQL Server), all the API objects will carry the Sql prefix,
such as SqlConnection.
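
The prefix convention is easiest to see side by side. The short sketch below is not taken from the project source; the connection strings, database name, and file name are placeholder values used only to show how the provider determines the class names:

// A minimal sketch of the provider prefix convention. The connection strings
// below are placeholders, not settings from this project.
using System;
using System.Data.OleDb;
using System.Data.SqlClient;

class ProviderPrefixDemo
{
    static void Main()
    {
        // SQL Server data source -> the Sql-prefixed provider classes.
        var sqlCon = new SqlConnection("Data Source=.;Initial Catalog=CloudStore;Integrated Security=True");

        // OLE DB data source (e.g., an Access file) -> the OleDb-prefixed classes.
        var oleCon = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=sample.mdb");

        Console.WriteLine(sqlCon.GetType().Name);   // SqlConnection
        Console.WriteLine(oleCon.GetType().Name);   // OleDbConnection
    }
}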

ADO.NET Objects

ADO.NET includes many objects the user can use to work with data. This
section introduces some of the primary objects the user will use. Over the course of
this tutorial, the user will be exposed to many more ADO.NET objects from the
perspective of how they are used in a particular lesson. The objects below are the
ones the user must know. Learning about them will give the user an idea of the types
of things the user can do with data when using ADO.NET.

The SQL Connection Object

To interact with a data base, the user must have a connection to it. The
connection helps identify the data base server, the data base name, user name,
password, and other parameters that are required for connecting to the data base. A
connection object is used by command objects so they will know which data base to
execute command on.

The SQL Command Object

The process of interacting with a data base means that the user must specify
the actions the users want to occur. This is done with a command object. The users
use a command object to send SQL statements to the data base. A command object
uses a connection object to figure out which data base to communicate with. The user

can use a command object alone, to execute a command directly, or assign a reference
to a command object to an SQL Data Adapter, which holds a set of commands that
work on a group of data as described below.
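
As a rough illustration of the connection/command pairing (the connection string is a placeholder, while the Registration table name mirrors the one used later in this project's pages):

using System;
using System.Data.SqlClient;

class CommandDemo
{
    static void Main()
    {
        // Placeholder connection string; a real one identifies the project's server and database.
        string connStr = "Data Source=.;Initial Catalog=CloudStore;Integrated Security=True";

        using (var con = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Registration", con))
        {
            con.Open();
            // ExecuteScalar returns the first column of the first row of the result.
            int userCount = (int)cmd.ExecuteScalar();
            Console.WriteLine("Registered users: " + userCount);
        }
    }
}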

The SQL Data Reader Object

Many data operations require that the user only get a stream of data for
reading. The data reader object allows the user to obtain the results of a SELECT
statement from a command object. For performance reasons, the data returned from a
data reader is a fast forward-only stream of data. This means that the user can only
pull the data from the stream in a sequential manner. This is good for speed, but if the
users need to manipulate data, then a Dataset is a better object to work with.
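
A minimal sketch of that forward-only reading pattern, again with a placeholder connection string and the project's Registration columns assumed:

using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        string connStr = "Data Source=.;Initial Catalog=CloudStore;Integrated Security=True"; // placeholder

        using (var con = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT userid, emailid FROM Registration", con))
        {
            con.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Forward-only: each Read() advances to the next row; there is no going back.
                while (reader.Read())
                {
                    Console.WriteLine(reader["userid"] + " - " + reader["emailid"]);
                }
            }
        }
    }
}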

The Dataset Object

Dataset objects are in-memory representations of data. They contain multiple
DataTable objects, which contain columns and rows, just like normal database tables.
The user can even define relations between tables to create parent-child relationships.
The Dataset is specifically designed to help manage data in memory and to support
disconnected operations on data, when such a scenario makes sense. The Dataset is an
object that is used by all of the Data Providers, which is why it does not have a Data
Provider specific prefix.
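
A small in-memory sketch of these ideas follows; the table names, columns, and relation are invented for illustration and are not the project's schema:

using System;
using System.Data;

class DataSetDemo
{
    static void Main()
    {
        // Two in-memory tables; names and columns are illustrative only.
        var ds = new DataSet("CloudStore");

        var users = ds.Tables.Add("Users");
        users.Columns.Add("Uid", typeof(int));
        users.Columns.Add("Username", typeof(string));

        var files = ds.Tables.Add("Files");
        files.Columns.Add("Fid", typeof(int));
        files.Columns.Add("OwnerUid", typeof(int));

        // Parent-child relationship between the two tables.
        ds.Relations.Add("UserFiles", users.Columns["Uid"], files.Columns["OwnerUid"]);

        users.Rows.Add(1, "Kingslin");
        files.Rows.Add(10, 1);

        // Navigate from a parent row to its children, entirely disconnected from any database.
        foreach (DataRow file in users.Rows[0].GetChildRows("UserFiles"))
            Console.WriteLine("File " + file["Fid"] + " belongs to " + users.Rows[0]["Username"]);
    }
}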

The SQL Data Adapter Object

Sometimes the data the user works with is primarily read-only and the user
rarely needs to make changes to the underlying data source. Some situations also call
for caching data in memory to minimize the number of database calls for data that
does not change. A data adapter contains a reference to the connection object and
opens and closes the connection automatically when reading from or writing to the
database. Additionally, the data adapter contains command object references for
SELECT, INSERT, UPDATE, and DELETE operations on the data. The user will
have a data adapter defined for each table in a Dataset, and it will take care of all
communication with the database for us.
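
This fill-and-cache pattern is essentially what the project's own AdminMain page does; the sketch below restates it in a self-contained form, with a placeholder connection string and a parameterized query substituted for the page's string concatenation:

using System;
using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static void Main()
    {
        string connStr = "Data Source=.;Initial Catalog=CloudStore;Integrated Security=True"; // placeholder

        using (var con = new SqlConnection(connStr))
        {
            // The adapter opens and closes the connection on its own during Fill().
            var adp = new SqlDataAdapter(
                "SELECT * FROM Filearchive WHERE fuploadstatus = @status", con);
            adp.SelectCommand.Parameters.AddWithValue("@status", "Finish");

            var ds = new DataSet();
            adp.Fill(ds);                     // executes the SELECT and caches the rows in memory

            Console.WriteLine(ds.Tables[0].Rows.Count + " uploaded file(s).");
        }
    }
}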

4.2 SQL

The OLAP Services feature available in SQL Server version 7.0 is now called
SQL Server 2000 Analysis Services. Analysis Services also includes a new data
mining component. The Repository component available in SQL Server version 7.0 is
now called Microsoft SQL Server 2000 Meta Data Services. References to the
component now use the term Meta Data Services. The term repository is used only in
reference to the repository engine within Meta Data Services.

The SQL Server database consists of the following types of objects.
They are:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO

TABLE

A database is a collection of data about a specific topic.

VIEWS OF TABLE

The user can work with a table in two views:

1. Design View
2. Datasheet View

Design View

To build or modify the structure of a table, the user works in the table design
view. The user can specify what kind of data the table will hold.

Datasheet View

To add, edit, or analyze the data itself, the user works in the table's datasheet view
mode.

QUERY

A query is a question that is asked of the data. Access gathers data that answers
the user's question from one or more tables. The data that makes up the answer is
either a dynaset (if it can be edited) or a snapshot (if it cannot be edited). Each time the
user runs the query, the user gets the latest information in the dynaset. Access either
displays the dynaset or snapshot for us to view, or performs an action on it, such as
deleting or updating.

SQL Connection

A connection object is used by command objects so they will know which


data base to execute command on.

SQL Command

The users use a command object to send SQL statements to the data base. A
command object uses a connection object to figure out which data base to
communicate with.

SQL Data Reader

Many data operations require that the user only get a stream of data for
reading. The data reader object allows the user to obtain the results of a SELECT
statement from a command object.

Dataset Object

Dataset objects are in-memory representations of data. They contain multiple


Data table objects, which contain columns and rows, just like normal data base tables.
The user can even define relations between tables to create parent-child relationships.

SQL Data Adapter

A data adapter contains a reference to the connection object and opens and
closes the connection automatically when reading from or writing to the data base.

5. PROJECT DETAILS

5.1 MODULES DESCRIPTION

1. File Retrieval and Error Recovery

The user can reconstruct the original file by downloading the data vectors
from the first m servers, assuming that they return the correct response values. Notice
that the verification scheme is based on random spot-checking, so the storage
correctness assurance is a probabilistic one. The user can guarantee successful file
retrieval with high probability. On the other hand, whenever data corruption is
detected, the comparison of pre-computed tokens and received response values can
guarantee the identification of the misbehaving server(s).
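
A highly simplified sketch of the spot-checking idea is given below. It is not the scheme's actual homomorphic token construction over erasure-coded vectors; an HMAC over a few sampled block indices merely stands in for the pre-computed token, and the key, block contents, and sampled positions are placeholders:

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class SpotCheckSketch
{
    // Pre-compute a token over a few chosen block indices (random in a real scheme).
    static byte[] ComputeToken(byte[][] blocks, int[] sampledIndices, byte[] key)
    {
        using (var hmac = new HMACSHA256(key))
        {
            byte[] concat = sampledIndices.SelectMany(i => blocks[i]).ToArray();
            return hmac.ComputeHash(concat);
        }
    }

    static void Main()
    {
        byte[] key = Encoding.UTF8.GetBytes("user-secret-key");          // placeholder key
        byte[][] blocks = Enumerable.Range(0, 8)
            .Select(i => Encoding.UTF8.GetBytes("block-" + i)).ToArray();

        int[] sample = { 1, 4, 6 };                                       // sampled block positions
        byte[] precomputed = ComputeToken(blocks, sample, key);           // stored before outsourcing

        // Later: challenge the server for the same positions and compare its response.
        byte[] response = ComputeToken(blocks, sample, key);              // server's answer over its copy
        bool serverHonest = precomputed.SequenceEqual(response);
        Console.WriteLine(serverHonest ? "blocks intact" : "misbehaving server detected");
    }
}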

2. Third Party Auditing

In cloud computing, in case the user does not have the time, feasibility, or
resources to perform the storage correctness verification, he can optionally delegate
this task to an independent third party auditor, making the cloud storage publicly
verifiable. However, as pointed out by the recent work, to securely introduce an
effective TPA, the auditing process should bring in no new vulnerabilities towards
user data privacy. Namely, TPA should not learn user’s data content through the
delegated data auditing.

3. Cloud Operations

(1) Update Operation

In cloud data storage, sometimes the user may need to modify some data
block(s) stored in the cloud; we refer to this operation as data update. In other
words, for all the unused tokens, the user needs to exclude every occurrence of the old
data block and replace it with the new one.

(2) Delete Operation

Sometimes, after being stored in the cloud, certain data blocks may need to be
deleted. The delete operation we are considering is a general one, in which user
replaces the data block with zero or some special reserved data symbol. From this
point of view, the delete operation is actually a special case of the data update
operation, where the original data blocks can be replaced with zeros or some
predetermined special blocks.

(3) Append Operation

In some cases, the user may want to increase the size of his stored data by adding
blocks at the end of the data file, which we refer to as data append. We anticipate that
the most frequent append operation in cloud data storage is bulk append, in which the
user needs to upload a large number of blocks (not a single block) at one time.
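
To make the relationship between the three operations concrete, the following toy sketch manages a file as an in-memory list of fixed-size blocks. It is only an illustration of the block-level semantics; it omits the token refresh and erasure-coding steps the real scheme performs:

using System;
using System.Collections.Generic;
using System.Linq;

class BlockStoreSketch
{
    const int BlockSize = 16;
    static readonly byte[] ZeroBlock = new byte[BlockSize];    // reserved "deleted" symbol

    List<byte[]> blocks = new List<byte[]>();

    // Update: replace the block at a given position with new content.
    public void Update(int index, byte[] newBlock)
    {
        blocks[index] = newBlock;
    }

    // Delete: a special case of update where the new content is the zero block.
    public void Delete(int index)
    {
        Update(index, ZeroBlock);
    }

    // Append: add blocks at the end of the file; bulk append passes many blocks at once.
    public void Append(IEnumerable<byte[]> newBlocks)
    {
        blocks.AddRange(newBlocks);
    }

    static void Main()
    {
        var store = new BlockStoreSketch();
        store.Append(Enumerable.Range(0, 4).Select(i => new byte[BlockSize]));  // initial file of 4 blocks
        store.Update(1, new byte[BlockSize]);   // modify block 1
        store.Delete(2);                        // zero out block 2
        Console.WriteLine("file now holds " + store.blocks.Count + " blocks");
    }
}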

5.2 DATAFLOW DIAGRAM

Activity diagram

(Activity diagram not reproduced in text. Its nodes are: Client Home, Request to CSP, Validate User, Deny Access, Manipulation, Acknowledgement, Monitoring, and Client Database.)
Class Diagram

(Class diagram not reproduced in text.)
5.3 SYSTEM DESIGN

Architectural Design

Representative network architecture for the cloud storage service is
illustrated above. Three different network entities can be identified as follows:

USER: An entity, who has data to be stored in the cloud and relies on the
cloud for data storage and computation, can be either enterprise or individual
customers.

CLOUD SERVER (CS): an entity, which is managed by the cloud service
provider (CSP) to provide data storage service and has significant storage space and
computation resources (the user will not differentiate CS and CSP hereafter).

THIRD PARTY AUDITOR (TPA): an optional entity who has expertise and
capabilities that users may not have, and is trusted to assess and expose risk of cloud
storage services on behalf of the users upon request.

Input/ Output design

1. Input design is the process of converting a user-oriented description of the
input into a computer-based system. This design is important to avoid errors in the
data input process and show the correct direction to the management for getting
correct information from the computerized system.

2. It is achieved by creating user-friendly screens for the data entry to handle
large volume of data. The goal of designing input is to make data entry easier and to
be free from errors. The data entry screen is designed in such a way that all the data
manipulations can be performed. It also provides record viewing facilities.

3. When the data is entered it will check for its validity. Data can be entered
with the help of screens. Appropriate messages are provided as and when needed so that
the user will not be left in a maze at any instant. Thus the objective of input design is to create
an input layout that is easy to follow.

4. Designing computer output should proceed in an organized, well-thought-out
manner; the right output must be developed while ensuring that each output
element is designed so that people will find the system easy and effective to use.
When analysts design computer output, they should identify the specific output that is
needed to meet the requirements.

5. Select methods for presenting information.

6. Create document, report, or other formats that contain information


produced by the system.

Database Design

Towards Secure Registration Database Design

Uid  Username  Userpwd   Mobile no    Email-ID                Date       Security key
1    Kingslin  Kingslin  9768768768   dudekings@gmail.com     3/24/2012  56378
2    Vicky     Vicky     9676876876   nuvivicky@gmail.com     3/24/2012  18348
3    Malar     Malar     9898989990   astutemalar@gmail.com   12/5/2016  40473

Towards Secure File Archive

Fdate      Fuser     Fencryptokey               Fverify  Fmodify  Frequest  Fresponse  Fuploadstatus
3/27/2016  Kingslin  fXQ1JhBRq/YjzoZFwwHPYQ==   YES      NO       NO        NO         Finish
3/28/2016  Kingslin  nyc9InyogMkm2zzZMLciPw==   YES      NO       NO        NO         Finish
3/28/2016  Kingslin  TeYm+98S5j/5ZZMOJWJmLg==   YES      NO       NO        NO         Finish
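
The two tables can be created with statements along the following lines. This is a sketch inferred from the column headings above; the exact data types, key choices, and column names in the project's database may differ (the application code, for instance, refers to columns such as oid and userid):

using System.Data.SqlClient;

class SchemaSketch
{
    static void Main()
    {
        string connStr = "Data Source=.;Initial Catalog=CloudStore;Integrated Security=True"; // placeholder

        string ddl = @"
            CREATE TABLE Registration (
                Uid         INT PRIMARY KEY,
                Username    VARCHAR(50),
                Userpwd     VARCHAR(50),
                MobileNo    VARCHAR(15),
                EmailID     VARCHAR(100),
                [Date]      VARCHAR(20),
                SecurityKey VARCHAR(20));

            CREATE TABLE Filearchive (
                Fdate         VARCHAR(20),
                Fuser         VARCHAR(50),
                Fencryptokey  VARCHAR(100),
                Fverify       VARCHAR(5),
                Fmodify       VARCHAR(5),
                Frequest      VARCHAR(5),
                Fresponse     VARCHAR(5),
                Fuploadstatus VARCHAR(20));";

        using (var con = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(ddl, con))
        {
            con.Open();
            cmd.ExecuteNonQuery();   // create both tables in one batch
        }
    }
}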

6. SOURCE CODING

Admin master page

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Xml.Linq;
public partial class AdminMasterPage : System.Web.UI.MasterPage
{
string link1, link2, link3, link4;
string seslink1, seslink2, seslink3, seslink4;
protected void Page_Load(object sender, EventArgs e)
{
Label3.Text = "Welcome " + (string)Session["adminid"] + " !";
LinkButton1.BackColor = System.Drawing.Color.Red; LinkButton1.ForeColor
= System.Drawing.Color.Wheat; seslink1 = (string)Session["alink1"];
seslink2 = (string)Session["alink2"]; seslink3 =
(string)Session["alink3"]; seslink4 =
(string)Session["alink4"]; if (seslink1 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Red;
LinkButton1.ForeColor = System.Drawing.Color.Wheat;
LinkButton2.BackColor = System.Drawing.Color.Transparent;

LinkButton3.BackColor = System.Drawing.Color.Transparent;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton2.ForeColor = System.Drawing.Color.Black;
LinkButton4.ForeColor = System.Drawing.Color.Black;
}
if (seslink2 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton2.BackColor = System.Drawing.Color.Red;
LinkButton2.ForeColor = System.Drawing.Color.Wheat;
LinkButton3.BackColor = System.Drawing.Color.Transparent;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton4.ForeColor = System.Drawing.Color.Black;
}
if (seslink3 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton2.BackColor = System.Drawing.Color.Transparent;
LinkButton2.ForeColor = System.Drawing.Color.Black;
LinkButton3.ForeColor = System.Drawing.Color.Wheat;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton4.ForeColor = System.Drawing.Color.Black;
}
if (seslink4 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton2.BackColor = System.Drawing.Color.Transparent;
LinkButton2.ForeColor = System.Drawing.Color.Black;
LinkButton3.BackColor = System.Drawing.Color.Transparent;
LinkButton4.BackColor = System.Drawing.Color.Red;
LinkButton4.ForeColor = System.Drawing.Color.Wheat;
}
}

protected void LinkButton1_Click(object sender, EventArgs e)
{
link1 = "yes"; Session["alink1"] = link1;
Session.Remove("alink2");
Session.Remove("alink3");
Session.Remove("alink4");
Response.Redirect("AdminMain.aspx");
}
protected void LinkButton2_Click(object sender, EventArgs e)
{
link2 = "yes"; Session["alink2"] = link2;
Session.Remove("alink1");
Session.Remove("alink3");
Session.Remove("alink4");
Response.Redirect("AdminViewUser.aspx");
}
protected void LinkButton3_Click(object sender, EventArgs e)
{
link3 = "yes"; Session["alink3"] = link3;
Session.Remove("alink1");
Session.Remove("alink2");
Session.Remove("alink4");
Response.Redirect("AdminWAP.aspx");
}
protected void LinkButton4_Click(object sender, EventArgs e)
{
Session.Abandon();
Response.Redirect("Default.aspx");
}
}

Admin main page
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;
using System.IO;
public partial class AdminMain : System.Web.UI.Page
{
// The connection string is read from the application's configuration; only finished uploads are listed.
SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);
string fuploadstatus = "Finish";
protected void Page_Load(object sender, EventArgs e)
{
SqlDataAdapter adp = new SqlDataAdapter ("select * from Filearchive where
fuploadstatus='" + fuploadstatus + "'", con);
DataSet ds = new DataSet();
adp.Fill(ds);
if (ds.Tables[0].Rows.Count == 0)
{
Label1.Visible = true;
GridView1.Visible = false;
}
else
{
Label1.Visible = false;

28
GridView1.Visible = true;
GridView1.DataSource = ds;
GridView1.DataBind();
}
}
public void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
GridView1.PageIndex = e.NewPageIndex;
bindgrid();
}
public void bindgrid()
{
SqlDataAdapter adp = new SqlDataAdapter("select * from filearchive where
fuploadstatus='" + fuploadstatus + "'", con);
DataSet ds = new DataSet();
adp.Fill(ds);
GridView1.DataSource = ds;
GridView1.DataBind();
}
}

User master page

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

using System.Web.UI.HtmlControls;
using System.Xml.Linq;
public partial class MasterPage : System.Web.UI.MasterPage
{
string link1, link2, link3, link4, link5, link6;
string seslink1, seslink2, seslink3, seslink4, seslink5, seslink6;
protected void Page_Load(object sender, EventArgs e)
{
Label3.Text = "Welcome, " + "&nbsp&nbsp" + Session["UserID"] + " !"; if (!
IsPostBack)
{
LinkButton1.BackColor = System.Drawing.Color.Red;
LinkButton1.ForeColor = System.Drawing.Color.Wheat;
}
seslink1 = (string)Session["link1"];
seslink2 = (string)Session["link2"];
seslink3 = (string)Session["link3"];
seslink4 = (string)Session["link4"];
seslink5 = (string)Session["link5"];
seslink6 = (string)Session["link6"];
if (seslink1 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Red;
LinkButton1.ForeColor = System.Drawing.Color.Wheat;
LinkButton2.BackColor = System.Drawing.Color.Transparent;
LinkButton3.BackColor = System.Drawing.Color.Transparent;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton5.BackColor = System.Drawing.Color.Transparent;
LinkButton6.BackColor = System.Drawing.Color.Transparent;
LinkButton2.ForeColor = System.Drawing.Color.Black;
LinkButton3.ForeColor = System.Drawing.Color.Black;
LinkButton4.ForeColor = System.Drawing.Color.Black;

LinkButton5.ForeColor = System.Drawing.Color.Black;
LinkButton6.ForeColor = System.Drawing.Color.Black;
}
if (seslink2 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton2.BackColor = System.Drawing.Color.Red;
LinkButton2.ForeColor = System.Drawing.Color.Wheat;
LinkButton3.BackColor = System.Drawing.Color.Transparent;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton5.BackColor = System.Drawing.Color.Transparent;
LinkButton6.BackColor = System.Drawing.Color.Transparent;
LinkButton3.ForeColor = System.Drawing.Color.Black;
LinkButton4.ForeColor = System.Drawing.Color.Black;
LinkButton5.ForeColor = System.Drawing.Color.Black;
LinkButton6.ForeColor = System.Drawing.Color.Black;
}
if (seslink3 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton2.BackColor = System.Drawing.Color.Transparent;
LinkButton2.ForeColor = System.Drawing.Color.Black;
LinkButton3.BackColor = System.Drawing.Color.Red;
LinkButton3.ForeColor = System.Drawing.Color.Wheat;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton5.BackColor = System.Drawing.Color.Transparent;
LinkButton6.BackColor = System.Drawing.Color.Transparent;
LinkButton4.ForeColor = System.Drawing.Color.Black;
LinkButton5.ForeColor = System.Drawing.Color.Black;
LinkButton6.ForeColor = System.Drawing.Color.Black;
}

if (seslink4 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton2.BackColor = System.Drawing.Color.Transparent;
LinkButton2.ForeColor = System.Drawing.Color.Black;
LinkButton3.BackColor = System.Drawing.Color.Transparent;
LinkButton3.ForeColor = System.Drawing.Color.Black;
LinkButton4.BackColor = System.Drawing.Color.Red;
LinkButton4.ForeColor = System.Drawing.Color.Wheat;
LinkButton5.BackColor = System.Drawing.Color.Transparent;
LinkButton6.BackColor = System.Drawing.Color.Transparent;
}
if (seslink5 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton2.BackColor = System.Drawing.Color.Transparent;
LinkButton2.ForeColor = System.Drawing.Color.Black;
LinkButton3.BackColor = System.Drawing.Color.Transparent;
LinkButton3.ForeColor = System.Drawing.Color.Black;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton4.ForeColor = System.Drawing.Color.Black;
LinkButton5.BackColor = System.Drawing.Color.Red;
LinkButton5.ForeColor = System.Drawing.Color.Wheat;
LinkButton6.BackColor = System.Drawing.Color.Transparent;
LinkButton6.ForeColor = System.Drawing.Color.Black;
}
if (seslink6 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton2.BackColor = System.Drawing.Color.Transparent;
LinkButton2.ForeColor = System.Drawing.Color.Black;

LinkButton3.BackColor = System.Drawing.Color.Transparent;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton4.ForeColor = System.Drawing.Color.Black;
LinkButton5.BackColor = System.Drawing.Color.Transparent;
LinkButton6.BackColor = System.Drawing.Color.Red;
LinkButton6.ForeColor = System.Drawing.Color.Wheat;
}}
protected void LinkButton1_Click(object sender, EventArgs e)
{
link1 = "yes";
Session["link1"] = link1;
Session.Remove("link2");
Session.Remove("link3");
Session.Remove("link4");
Session.Remove("link5");
Session.Remove("link6");
Response.Redirect("Default.aspx");
}
protected void LinkButton2_Click(object sender, EventArgs e)
{
link2 = "yes";
Session["link2"] = link2;
Session.Remove("link1");
Session.Remove("link3");
Session.Remove("link4");
Session.Remove("link5");
Session.Remove("link6");
Response.Redirect("SignUP.aspx");
}
protected void LinkButton3_Click(object sender, EventArgs e)
{
link3 = "yes";
Session["link3"] = link3;

Session.Remove("link1");
Session.Remove("link2");
Session.Remove("link4");
Session.Remove("link5");
Session.Remove("link6");
Response.Redirect("UserLogin.aspx");
}
protected void LinkButton6_Click(object sender, EventArgs e)
{
link6 = "yes";
Session["link6"] = link6;
Session.Remove("link1");
Session.Remove("link2");
Session.Remove("link3");
Session.Remove("link4");
Session.Remove("link5");
Response.Redirect("AboutUs.aspx");
}
}

TPA master page

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using
System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Xml.Linq;

public partial class MasterPage : System.Web.UI.MasterPage
{
string link1, link2, link3, link4, link5, link6;
string seslink1, seslink2, seslink3, seslink4, seslink5, seslink6;
protected void Page_Load(object sender, EventArgs e)
{
// Default highlight for the first menu link.
LinkButton1.BackColor = System.Drawing.Color.Red;
LinkButton1.ForeColor = System.Drawing.Color.Wheat;
// Re-highlight the link recorded in the session, if any.
seslink1 = (string)Session["link1"];
seslink2 = (string)Session["link2"];
seslink3 = (string)Session["link3"];
seslink4 = (string)Session["link4"];
seslink5 = (string)Session["link5"];
seslink6 = (string)Session["link6"];
if (seslink1 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Red;
LinkButton1.ForeColor = System.Drawing.Color.Wheat;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton5.BackColor = System.Drawing.Color.Transparent;
LinkButton6.BackColor = System.Drawing.Color.Transparent;
LinkButton2.ForeColor = System.Drawing.Color.Black;
LinkButton3.ForeColor = System.Drawing.Color.Black;
LinkButton4.ForeColor = System.Drawing.Color.Black;
LinkButton5.ForeColor = System.Drawing.Color.Black;
LinkButton6.ForeColor = System.Drawing.Color.Black;
}
if (seslink2 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton2.BackColor = System.Drawing.Color.Red;
LinkButton2.ForeColor = System.Drawing.Color.Wheat;

LinkButton3.BackColor = System.Drawing.Color.Transparent;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton5.BackColor = System.Drawing.Color.Transparent;
LinkButton6.BackColor = System.Drawing.Color.Transparent;
LinkButton4.ForeColor = System.Drawing.Color.Black;
LinkButton6.ForeColor = System.Drawing.Color.Black;
}
if (seslink3 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton2.BackColor = System.Drawing.Color.Transparent;
LinkButton2.ForeColor = System.Drawing.Color.Black;
LinkButton3.BackColor = System.Drawing.Color.Red;
LinkButton3.ForeColor = System.Drawing.Color.Wheat;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton5.BackColor = System.Drawing.Color.Transparent;
LinkButton6.BackColor = System.Drawing.Color.Transparent;
LinkButton4.ForeColor = System.Drawing.Color.Black;
LinkButton5.ForeColor = System.Drawing.Color.Black;
LinkButton6.ForeColor = System.Drawing.Color.Black;
}
if (seslink4 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton4.BackColor = System.Drawing.Color.Red;
LinkButton4.ForeColor = System.Drawing.Color.Wheat;
LinkButton6.BackColor = System.Drawing.Color.Transparent;
LinkButton6.ForeColor = System.Drawing.Color.Black;
}
if (seslink5 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;

LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton4.ForeColor = System.Drawing.Color.Black;
LinkButton6.BackColor = System.Drawing.Color.Transparent;
LinkButton6.ForeColor = System.Drawing.Color.Black;
}
if (seslink6 != null)
{
LinkButton1.BackColor = System.Drawing.Color.Transparent;
LinkButton1.ForeColor = System.Drawing.Color.Black;
LinkButton4.BackColor = System.Drawing.Color.Transparent;
LinkButton4.ForeColor = System.Drawing.Color.Black;
LinkButton6.BackColor = System.Drawing.Color.Red;
LinkButton6.ForeColor = System.Drawing.Color.Wheat;
}
}
protected void LinkButton1_Click(object sender, EventArgs e)
{
link1 = "yes"; Session["link1"] = link1;

Session.Remove("link2");
Session.Remove("link3");
Session.Remove("link4");
Session.Remove("link5");
Session.Remove("link6");
Response.Redirect("Default.aspx");
}
protected void LinkButton4_Click(object sender, EventArgs e)
{
link4 = "yes"; Session["link4"] = link4;
Session.Remove("link1");
Session.Remove("link2");
Session.Remove("link3");

Session.Remove("link5");
Session.Remove("link6");
Response.Redirect("TPALogin.aspx");
}
protected void LinkButton6_Click(object sender, EventArgs e)
{
link6 = "yes"; Session["link6"] = link6;
Session.Remove("link1");
Session.Remove("link2");
Session.Remove("link3");
Session.Remove("link4");
Session.Remove("link5");
Response.Redirect("AboutUs.aspx");
}
}

Admin view the user

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;

public partial class AdminViewUser : System.Web.UI.Page
{
SqlConnection con = new
SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
SqlDataAdapter adp = new SqlDataAdapter("Select userid from Registration",
con); DataSet ds = new DataSet();
adp.Fill(ds);
for (int i = 0; i < ds.Tables[0].Rows.Count; i++)
{
DropDownList1.Items.Add(ds.Tables[0].Rows[i]["userid"].ToString());
}
protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
{
if (DropDownList1.SelectedItem.Text == "--Select--")
{
Panel2.Visible = false;
}
else
{
Panel2.Visible = true;
SqlDataAdapter adp = new SqlDataAdapter("Select * from Registration where
userid='" + DropDownList1.SelectedItem.Text + "'", con);
DataSet ds = new DataSet();
adp.Fill(ds);
Label5.Text = ds.Tables[0].Rows[0]["oid"].ToString();
Label8.Text = ds.Tables[0].Rows[0]["userid"].ToString();
Label11.Text = ds.Tables[0].Rows[0]["mobile"].ToString();
Label14.Text = ds.Tables[0].Rows[0]["emailid"].ToString();
Label17.Text = ds.Tables[0].Rows[0]["date"].ToString();
}
}
}

7. SCREEN LAYOUTS

The following screens are captured in the report (screenshots not reproduced in this text):

User Login
User security key
Submit security key
View TPA Verified Blocks
User Allow/Block in document blocks
Text and key will be encrypted
Mail notification
TPA login
Verification Status
TPA view verify
Admin login
Admin views the User details
8. TESTING AND IMPLEMENTATION

SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way to
check the functionality of components, sub-assemblies, assemblies, and/or a finished
product. It is the process of exercising software with the intent of ensuring that the
software system meets its requirements and user expectations and does not fail in an
unacceptable manner. There are various types of test. Each test type addresses a
specific testing requirement.

Types Of Tests

Unit testing

Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid outputs.
All decision branches and internal code flow should be validated. It is the testing of
individual software units of the application. It is done after the completion of an
individual unit before integration. This is a structural testing that relies on knowledge
of its construction and is invasive. Unit tests perform basic tests at component level
and test a specific business process, application, and/or system configuration. Unit
tests ensure that each unique path of a business process performs accurately to the
documented specifications and contains clearly defined inputs and expected results.
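
As a minimal illustration of the idea, the sketch below tests a single hypothetical helper in isolation (FormatSecurityKey is invented for this example and is not part of the project's code) and checks that known inputs yield the documented outputs:

using System;

static class KeyHelper
{
    // Hypothetical unit under test: pads a numeric security key to six digits.
    public static string FormatSecurityKey(int key)
    {
        if (key < 0) throw new ArgumentOutOfRangeException("key");
        return key.ToString("D6");
    }
}

class KeyHelperTests
{
    static void Main()
    {
        // One unique path: a normal key is zero-padded to the documented width.
        AssertEqual("056378", KeyHelper.FormatSecurityKey(56378));

        // Another path: invalid input is rejected rather than formatted.
        bool threw = false;
        try { KeyHelper.FormatSecurityKey(-1); } catch (ArgumentOutOfRangeException) { threw = true; }
        AssertEqual(true, threw);

        Console.WriteLine("All unit tests passed.");
    }

    static void AssertEqual<T>(T expected, T actual)
    {
        if (!Equals(expected, actual))
            throw new Exception("Expected " + expected + " but got " + actual);
    }
}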

Integration Testing

Integration tests are designed to test integrated software components to
determine if they actually run as one program. Testing is event driven and is more
concerned with the basic outcome of screens or fields. Integration tests demonstrate
that although the components were individually satisfactory, as shown by
successful unit testing, the combination of components is correct and consistent.

Integration testing is specifically aimed at exposing the problems that arise from the
combination of components.

Functional testing

Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.

Functional testing is centered on the following items:


Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements,
key functions, or special test cases. In addition, systematic coverage pertaining to
identified business process flows, data fields, predefined processes, and successive
processes must be considered for testing. Before functional testing is complete,
additional tests are identified and the effective value of current tests is determined.

System testing

System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

White box testing

White Box Testing is a form of testing in which the software tester has
knowledge of the inner workings, structure, and language of the software, or at least
its purpose. It is used to test areas that cannot be reached from a black
box level.

Black box testing

Black Box Testing is testing the software without any knowledge of the inner
workings, structure, or language of the module being tested. Black box tests, as most
other kinds of tests, must be written from a definitive source document, such as a
specification or requirements document. It is a form of testing in which the software
under test is treated as a black box: the user cannot "see" into it. The test provides
inputs and responds to outputs without considering how the software works.

IMPLEMENTATION

Implementation is the stage of the project when the theoretical design is
turned into a working system. Thus it can be considered to be the most critical
stage in achieving a successful new system and in giving the user, confidence that the
new system will work and be effective.

The implementation stage involves careful planning, investigation of the
existing system and its constraints on implementation, designing of methods to
achieve change over and evaluation of change over methods.

Once the system has been designed, the next step is to convert the designed
one into actual code, so as to satisfy the user requirements as expected. If the system
is approved to be error free it can be implemented. The implemented software should
be maintained for prolonged running of the software.

Coding and debugging

Debugging is not testing but occurs as a consequence of testing; that is, when a
test case uncovers an error, debugging is the process that results in removal of the
error.

Normally three categories for debugging approaches are proposed:
• Brute force
• Back tracking
• Cause-Elimination

Brute Force

This is the most common and least efficient method for isolating the cause of
a software error. Brute-force debugging is usually applied when all else
fails. Using a "let the computer find the error" philosophy, memory dumps are taken,
run-time traces are invoked, and the program is loaded with WRITE (in this case,
message box) statements. In the information that is produced, a clue is found leading
to the cause of the error.

Back Tracking

This is a fairly common debugging approach that can be used successfully in small
programs. Beginning at the site where the symptom has been uncovered, the source
code is traced backward (manually) until the site of the cause is found. In this project,
backtracking is used in the fetch operation.

Cause-Elimination

A 'cause hypothesis' is devised, and the error-related data are used to prove or
disprove the hypothesis. Alternatively, a list of all possible causes is developed, and tests
are conducted to eliminate each. If an initial test indicates that a particular cause
hypothesis shows promise, the data are refined in an attempt to isolate the bug.

Each of the debugging approaches can be supplemented with debugging tools.
A wide variety of debugging compilers, dynamic debugging aids (traces), automatic
test case generators, memory dumps and cross-reference maps can be applied.
However, tools are not a substitute for careful evaluation, based on the complete
software design document and clear source code.

9. CONCLUSION

The users investigate the problem of data security in cloud data storage, which
is essentially a distributed storage system. To achieve the assurances of cloud data
integrity and availability and enforce the quality of dependable cloud storage service
for users, the user proposes an effective and flexible distributed scheme with explicit
dynamic data support, including block update, delete, and append. The users rely on
erasure-correcting code in the file distribution preparation to provide redundancy
parity vectors and guarantee the data dependability.

By utilizing the homomorphic token with distributed verification of erasure-coded
data, our scheme achieves the integration of storage correctness insurance and
data error localization, i.e., whenever data corruption has been detected during the
storage correctness verification across the distributed servers, the user can almost
guarantee the simultaneous identification of the misbehaving server(s). Considering
the time, computation resources, and even the related online burden of users, the user
also provides the extension of the proposed main scheme to support third-party
auditing, where users can safely delegate the integrity checking tasks to third-party
auditors and be worry-free to use the cloud storage services.

Through detailed security analysis and extensive experiment results, the user shows that
our scheme is highly efficient and resilient to Byzantine failure, malicious data
modification attack, and even server colluding attacks.

10. BIBLIOGRAPHY

References Made From:

1. C. Wang, Q. Wang, K. Ren, and W. Lou, “Ensuring data storage security in cloud
computing,” in Proc. of IWQoS’09, July 2009, pp. 1–9.

2. Sun Microsystems, Inc., "Building customer trust in cloud computing with
transparent security," Online at https://www.sun.com/offers/details/sun
transparency.xml, November 2009.

3. M. Arrington, "Gmail disaster: Reports of mass email deletions," Online at
http://www.techcrunch.com/2006/12/28/gmail-disasterreports-of-mass-email-
deletions/, December 2006.

4. G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and Efficient
Provable Data Possession," Proc. of SecureComm '08, pp. 1–10, 2008.

Sites Referred:

http://www.asp.net.com
http://www.dotnetspider.com/
http://www.dotnetspark.com
http://www.almaden.ibm.com/software/quest/Resources/
http://www.computer.org/publications/dlib
http://www.developerfusion.com/

