
CHAPTER – 1 INTRODUCTION

1.1 OVERVIEW

Cloud computing has been envisioned as the next generation information technology (IT) architecture for enterprises, due to its long list of unprecedented advantages in IT history: on-demand self-service, ubiquitous network access, location independent resource pooling, rapid resource elasticity, usage-based pricing and transference of risk. As a disruptive technology with profound implications, cloud computing is transforming the very nature of how businesses use information technology.

One fundamental aspect of this paradigm shift is that data is being centralized or outsourced to the cloud. From the users' perspective, including both individuals and IT enterprises, storing data remotely in the cloud in a flexible on-demand manner brings appealing benefits: relief from the burden of storage management, universal data access with location independence, and avoidance of capital expenditure on hardware, software, and personnel maintenance. While cloud computing makes these advantages more appealing than ever, it also brings new and challenging security threats towards users' outsourced data.

Since cloud service providers (CSP) are separate administrative entities, data outsourcing actually relinquishes the user's ultimate control over the fate of their data. As a result, the correctness of the data in the cloud is being put at risk for the following reasons. First of all, although the infrastructures under the cloud are much more powerful and reliable than personal computing devices, they still face a broad range of both internal and external threats to data integrity. Examples of outages and security breaches of noteworthy cloud services appear from time to time. Secondly, there do exist various motivations for a CSP to behave unfaithfully towards the cloud users regarding their outsourced data status. For example, a CSP might reclaim storage for monetary reasons by discarding data that has not been or is rarely accessed, or even hide data loss incidents to maintain a reputation. In short, although outsourcing data to the cloud is economically attractive for long-term large-scale storage, it does not immediately offer any guarantee on data integrity and availability. This problem, if not properly addressed, may impede the success of cloud architecture.

Security audit is an important solution enabling tracking and analysis of any activities, including data accesses, security breaches, application activities, and so on. Data security tracking is crucial for all organizations that must comply with a range of federal laws including the Sarbanes-Oxley Act, Basel II, HIPAA and other regulations.

1.2 OBJECTIVE

The main objective of our project is to create an efficient audit system that benefits from an interactive zero-knowledge proof system. We address the construction of an interactive PDP protocol to prevent fraud by the prover (the soundness property) and leakage of the verified data (the zero-knowledge property). Another major concern addressed by our project is how to improve the performance of audit services. Audit performance concerns not only the costs of computation, communication and storage for audit activities, but also the scheduling of audit activities. Improper scheduling, whether too frequent or too infrequent, causes poor audit performance, whereas efficient scheduling can help provide a better quality, more cost-effective service. Hence, it is critical to investigate an efficient schedule for cloud audit services.

1.3 AIM

Through our project, we aim to:
• design an efficient architecture for the audit system to reduce the storage and network overheads and enhance the security of audit activities;
• provide efficient audit scheduling to help deliver a more cost-effective audit service; and
• optimize the parameters of the audit system to minimize the computation overheads of audit services.


CHAPTER – 2 BACKGROUND WORK
2.1 EXISTING SYSTEM

2.1.1 PROBLEM DEFINITION

We consider a cloud data storage service involving three different entities, as illustrated in Fig. 1: the cloud user (U), who has a large amount of data files to be stored in the cloud; the cloud server (CS), which is managed by the cloud service provider (CSP) to provide data storage service and has significant storage space and computation resources (we will not differentiate CS and CSP hereafter); and the third party auditor (TPA), who has expertise and capabilities that cloud users do not have and is trusted to assess the cloud storage service security on behalf of the user upon request. Users rely on the CS for cloud data storage and maintenance. They may also dynamically interact with the CS to access and update their stored data for various application purposes. The users may resort to the TPA for ensuring the storage security of their outsourced data, while hoping to keep their data private from the TPA.

We consider the existence of a semi-trusted CS, in the sense that most of the time it behaves properly and does not deviate from the prescribed protocol execution. While providing cloud data storage based services, for its own benefit the CS might neglect to keep, or might deliberately delete, rarely accessed data files belonging to ordinary cloud users. Moreover, the CS may decide to hide data corruptions caused by server hacks or Byzantine failures in order to maintain its reputation. We assume the TPA, who is in the business of auditing, is reliable and independent, and thus has no incentive to collude with either the CS or the users during the auditing process. The TPA should be able to efficiently audit the cloud data storage without a local copy of the data and without bringing additional online burden to cloud users. However, any possible leakage of the user's outsourced data towards the TPA through the auditing protocol should be prohibited.


FIG 1 ARCHITECTURE OF CLOUD DATA STORAGE SERVICE

2.2 RELATED WORK

The traditional cryptographic technologies for data integrity and availability, based on hash functions and signature schemes, cannot work on the outsourced data without a local copy of the data. In addition, downloading the data in order to validate it is not a practical solution due to the expensive communication cost, especially for large files. Moreover, the ability to audit the correctness of the data in a cloud environment can be formidable and expensive for the cloud users. Therefore, it is crucial to realize public auditability for CSS, so that data owners may resort to a third party auditor (TPA), who has expertise and capabilities that a common user does not have, for periodically auditing the outsourced data. This audit service is significantly important for digital forensics and credibility in clouds.

To implement public auditability, the notions of proof of retrievability (POR) and provable data possession (PDP) have been proposed by some researchers. Their approach is based on a probabilistic proof technique for a storage provider to prove that clients' data remain intact. For ease of use, some POR/PDP schemes work in a publicly verifiable way, so that anyone can use the verification protocol to prove the availability of the stored data. Hence, this provides an effective approach to accommodate the requirements of public auditability. POR/PDP schemes built around untrusted storage offer a publicly accessible remote interface to check large amounts of data.
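To make the idea of a probabilistic possession proof concrete, the sketch below shows a minimal precomputed challenge-response check in C#. It only illustrates the general principle of spot-checking randomly chosen blocks; the cited POR/PDP constructions instead use homomorphic verification tags, and all class, method and parameter names here are assumptions.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;

// Minimal illustration of probabilistic possession checking: before
// outsourcing the file, the owner precomputes a few challenge/answer pairs
// over randomly chosen blocks; later a verifier replays a challenge and
// compares the server's answer, without keeping the file itself.
class PossessionCheckSketch
{
    const int BlockSize = 4096;

    record Challenge(byte[] Nonce, int BlockIndex, byte[] ExpectedDigest);

    static byte[] GetBlock(byte[] file, int index)
    {
        int offset = index * BlockSize;
        int length = Math.Min(BlockSize, file.Length - offset);
        return file.Skip(offset).Take(length).ToArray();
    }

    // The server proves it still holds the block by hashing nonce || block.
    static byte[] Respond(byte[] storedFile, byte[] nonce, int blockIndex)
    {
        using var sha = SHA256.Create();
        return sha.ComputeHash(nonce.Concat(GetBlock(storedFile, blockIndex)).ToArray());
    }

    static void Main()
    {
        byte[] file = new byte[1 << 20];               // 1 MB of sample data
        new Random(42).NextBytes(file);
        int blockCount = (file.Length + BlockSize - 1) / BlockSize;

        // Owner side: precompute challenges while the file is still local.
        var challenges = new List<Challenge>();
        var picker = new Random(7);
        for (int i = 0; i < 10; i++)
        {
            byte[] nonce = new byte[16];
            RandomNumberGenerator.Fill(nonce);
            int index = picker.Next(blockCount);
            challenges.Add(new Challenge(nonce, index, Respond(file, nonce, index)));
        }

        // Later: the verifier replays one challenge; the server answers from its copy.
        Challenge c = challenges[0];
        byte[] answer = Respond(file, c.Nonce, c.BlockIndex);   // server side
        bool intact = answer.SequenceEqual(c.ExpectedDigest);   // verifier side
        Console.WriteLine($"Block {c.BlockIndex} intact: {intact}");
    }
}

Each precomputed challenge can only be used once, which is one reason the published schemes move to homomorphic tags that support an unbounded number of challenges.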

There exist some solutions for audit services on outsourced data. For example, Xie et al. proposed an efficient method based on content comparability for outsourced databases, but it was not suited to irregular data. Wang et al. also provided a similar architecture for public audit services. To support their architecture, a public audit scheme was proposed with a privacy-preserving property. However, the lack of a rigorous performance analysis for the constructed audit system greatly affects the practical application of this scheme. For instance, in this scheme an outsourced file is directly split into n blocks, and each block then generates a verification tag.

In order to maintain security, the length of a block must be equal to the size of the cryptosystem, that is, 160 bits = 20 bytes. This means that a 1 MB file is split into 50,000 blocks and generates 50,000 tags, and the storage of tags is at least 1 MB. It is clearly inefficient to build an audit system based on this scheme. To address such a problem, a fragment technique is introduced in this paper to improve performance and reduce extra storage. Another major concern is the security of dynamic data operations for public audit services.
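The storage overhead argument above can be checked with a short back-of-the-envelope calculation. The following C# sketch reproduces the 50,000-block figure and shows how aggregating several 20-byte sectors under one tag, as a fragment technique does, shrinks the tag storage; the particular sector counts are illustrative assumptions, not the parameters of the scheme itself.

using System;

// Tag-storage estimate: one 20-byte tag per block, where a block is either a
// single 20-byte sector (the plain scheme) or s sectors grouped together.
class TagOverheadEstimate
{
    static void Report(long fileBytes, int sectorBytes, int sectorsPerBlock, int tagBytes)
    {
        long blockBytes = (long)sectorBytes * sectorsPerBlock;
        long blocks = (fileBytes + blockBytes - 1) / blockBytes;
        long tagStorage = blocks * tagBytes;
        Console.WriteLine($"s = {sectorsPerBlock,3}: {blocks,6} blocks, {tagStorage,8} bytes of tags " +
                          $"({100.0 * tagStorage / fileBytes:F1}% overhead)");
    }

    static void Main()
    {
        long oneMegabyte = 1_000_000;      // the 1M-byte file from the text
        Report(oneMegabyte, 20, 1, 20);    // plain scheme: 50,000 blocks and 50,000 tags
        Report(oneMegabyte, 20, 100, 20);  // fragment-style grouping: 100 sectors per tag
        Report(oneMegabyte, 20, 250, 20);
    }
}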

In clouds, one of the core design principles is to provide dynamic scalability for various applications. This means that remotely stored data might be not only accessed but also dynamically updated by the clients, for instance, through block operations such as modification, deletion and insertion. However, these operations may raise security issues in most of existing schemes, e.g., the forgery of the verification metadata (called as tags) generated by data owners and the leakage of the user’s secret key. Hence, it is crucial to develop a more efficient and secure mechanism for dynamic audit services, in which possible adversary’s advantage through dynamic data operations should be prohibited.
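One way to see the metadata-forgery risk is that a verification tag must bind not only the block content but also the block's position and version; otherwise, after an update, a misbehaving server could answer challenges with a stale or relocated block. The fragment below is only a hedged illustration of that binding using an HMAC (hypothetical names and fields), not the tag construction of any published dynamic-audit scheme.

using System;
using System.Security.Cryptography;
using System.Text;

// Illustrative verification tag that binds file id, block index, block
// version and block content. After a dynamic modification the version is
// bumped and the tag recomputed, so replaying the old block fails to verify.
static class BlockTag
{
    public static byte[] Compute(byte[] ownerKey, string fileId, long index, long version, byte[] block)
    {
        using var hmac = new HMACSHA256(ownerKey);
        byte[] header = Encoding.UTF8.GetBytes($"{fileId}|{index}|{version}|");
        hmac.TransformBlock(header, 0, header.Length, null, 0);
        hmac.TransformFinalBlock(block, 0, block.Length);
        return hmac.Hash;
    }
}

class DynamicUpdateDemo
{
    static void Main()
    {
        byte[] key = new byte[32];
        RandomNumberGenerator.Fill(key);

        byte[] oldBlock = Encoding.UTF8.GetBytes("old contents");
        byte[] newBlock = Encoding.UTF8.GetBytes("new contents");

        // Owner modifies block 7 of "file-42": version 1 -> version 2, new tag.
        byte[] currentTag = BlockTag.Compute(key, "file-42", 7, 2, newBlock);

        // A server replaying the old block against the current version fails verification.
        byte[] replayedTag = BlockTag.Compute(key, "file-42", 7, 2, oldBlock);
        bool accepted = CryptographicOperations.FixedTimeEquals(replayedTag, currentTag);
        Console.WriteLine($"Replayed old block accepted: {accepted}");   // False
    }
}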


CHAPTER – 3 PROPOSED SYSTEM

3.1 PROPOSED SYSTEM

In this method we provide an efficient and secure cryptographic interactive audit scheme for public auditability. This scheme retains the soundness property and zero-knowledge property of proof systems. These two properties ensure that our scheme can not only prevent the deception and forgery of cloud storage providers, but also prevent the leakage of outsourced data in the process of verification.

3.2 SYSTEM DESCRIPTION

The audit system architecture for outsourced data in clouds is shown in Fig. 2. In this architecture, we consider a data storage service containing four entities:
• Data owner (DO): who has a large amount of data to be stored in the cloud;
• Cloud service provider (CSP): who provides data storage service and has enough storage space and computation resources;
• Third party auditor (TPA): who has the capabilities to manage or monitor the outsourced data under the delegation of the data owner;
• Granted applications (GA): who have the right to access and manipulate the stored data. These applications can be either inside or outside clouds according to the specific requirements.

FIG 2 PROPOSED SYSTEM ARCHITECTURE

This architecture is known as audit service outsourcing because data integrity verification can be implemented by the TPA without the help of the data owner. In this architecture, the data owner and granted clients need to dynamically interact with the CSP to access or update their data for various application purposes. However, we neither assume that the CSP is trusted to guarantee the security of the stored data, nor assume that the data owner has the ability to collect evidence of the CSP's faults after errors occur. Hence, the TPA, as a trusted third party (TTP), is used to ensure the storage security of their outsourced data. We assume the TPA is reliable and independent, and thus has no incentive to collude with either the CSP or the clients during the auditing process.

This also provides a background for the description of our audit service outsourcing as follows:
• First, we describe a flowchart for the audit service based on TPA. The client (data owner) uses the secret key sk to preprocess the file, which consists of a collection of n blocks, generates a set of public verification information that is stored in the TPA, transmits the file and some verification tags to the CSP, and may delete its local copy.
• At a later time, using a protocol of proof of retrievability, the TPA (as an audit agent of clients) issues a challenge to audit (or check) the integrity and availability of the outsourced data in terms of the public verification information.

The TPA should be able to make regular checks on the integrity and availability of the delegated data at appropriate intervals, and give an alarm for abnormal events. The TPA should also be able to take evidence for disputes about the inconsistency of data in terms of authentic records of all data operations.

In this audit architecture, our core idea is to maintain the security of the TPA in order to guarantee the credibility of cloud storage. This is because it is easier and more feasible to ensure the security of one TTP than to maintain the credibility of the whole cloud. Hence, the TPA could be considered as the root of trust in clouds. To enable privacy-preserving public auditing for cloud data storage under this architecture, our protocol design should achieve the following security and performance guarantees:
• Audit-without-downloading: to allow the TPA (or other clients with the help of the TPA) to verify the correctness of cloud data on demand without retrieving a copy of the whole data or introducing additional online burden to the cloud users;
• Verification-correctness: to ensure that no cheating CSP can pass the audit from the TPA without indeed storing users' data intact;
• Privacy-preserving: to ensure that there exists no way for the TPA to derive users' data from the information collected during the auditing process; and
• High-performance: to allow the TPA to perform auditing with minimum overheads in storage, communication and computation, and to support statistical audit sampling and an optimized audit schedule over a long enough period of time.
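A highly simplified rendering of this flow in C# is sketched below: the data owner splits the file into n blocks and derives per-block tags with a secret key, the CSP stores the file and tags, and the TPA later spot-checks a random sample of blocks. All names are hypothetical, and the keyed-hash check is a stand-in for the interactive zero-knowledge PDP protocol; in particular, this naive version exposes the challenged blocks to the verifier, which is exactly the leakage the zero-knowledge (privacy-preserving) property of the real scheme removes.

using System;
using System.Linq;
using System.Security.Cryptography;

// Simplified audit flow for the four-entity architecture (DO, CSP, TPA, GA).
// Tags are HMACs over (index || block); the real scheme uses homomorphic tags.
class AuditFlowSketch
{
    const int BlockSize = 4096;

    static byte[] Tag(byte[] key, int index, byte[] block)
    {
        using var hmac = new HMACSHA256(key);
        return hmac.ComputeHash(BitConverter.GetBytes(index).Concat(block).ToArray());
    }

    static void Main()
    {
        // Data owner: preprocess the file into n blocks and generate tags with sk.
        byte[] sk = new byte[32];
        RandomNumberGenerator.Fill(sk);
        byte[] file = new byte[200 * 1024];
        new Random(1).NextBytes(file);

        int n = (file.Length + BlockSize - 1) / BlockSize;
        byte[][] blocks = Enumerable.Range(0, n)
            .Select(i => file.Skip(i * BlockSize).Take(BlockSize).ToArray())
            .ToArray();
        byte[][] tags = blocks.Select((b, i) => Tag(sk, i, b)).ToArray();

        // The CSP stores blocks and tags; the TPA receives the verification
        // information; the owner may now delete its local copy of the file.

        // TPA: challenge a random sample of blocks instead of downloading everything.
        var rnd = new Random(2);
        int[] challenged = Enumerable.Range(0, n).OrderBy(_ => rnd.Next()).Take(10).ToArray();

        // CSP responds with the requested blocks and tags; the verifier checks them.
        bool auditPassed = challenged.All(i => Tag(sk, i, blocks[i]).SequenceEqual(tags[i]));
        Console.WriteLine($"Audit passed: {auditPassed}");
    }
}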

CHAPTER – 4 SYSTEM ANALYSIS

4.1 INTRODUCTION

After analyzing the requirements of the task to be performed, the next step is to analyze the problem and understand its context. The first activity in this phase is studying the existing system, and the other is understanding the requirements and the domain of the new system. Both activities are equally important, but the first activity serves as a basis for giving the functional specifications and then the successful design of the proposed system. Understanding the requirements of the new system is more difficult and requires creative thinking, and understanding the existing running system is also difficult; an improper understanding of the present system can lead to diversion from the solution.

4.2 ANALYSIS MODEL

This document plays a vital role in the software development life cycle (SDLC) as it describes the complete requirements of the system. It is meant for use by the developers and will be the basis during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.

WATERFALL MODEL

The waterfall model is a sequential development process, in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirement analysis, design, implementation, testing (validation), integration, and maintenance. The first formal description of the waterfall model is often cited to be an article published by Winston W. Royce in 1970, although Royce did not use the term "waterfall" in that article.

Basic principles of the waterfall model are:
• The project is divided into sequential phases, with some overlap and splash back acceptable between phases.
• Emphasis is on planning, time schedules, target dates, budgets and implementation of the entire system at one time.
• Tight control is maintained over the life of the project through the use of extensive written documentation, as well as through formal reviews and approval/sign-off by the user and information technology management occurring at the end of most phases before beginning the next phase.

FIG 3 ANALYSIS MODEL

To follow the waterfall model, one proceeds from one phase to the next in a sequential manner. For example, one first completes the requirements specification, which after sign-off is considered to be "set in stone". When the requirements are fully completed, one proceeds to design. The software in question is designed and a blueprint is drawn for the implementers to follow; this design should be a plan for implementing the requirements given. When the design is fully completed, an implementation of that design is made by the coders. Towards the later stages of this implementation phase, separate software components produced are combined to introduce new functionality and reduce the risk of errors through the removal of errors. Thus the waterfall model maintains that one should move to a phase only if its preceding phase is completed and perfected.

CHAPTER – 5 SYSTEM DESIGN

5.1 SYSTEM REQUIREMENTS

HARDWARE REQUIREMENTS:
• Pentium 4 processor
• 1 GB RAM
• 80 GB hard disk space

SOFTWARE REQUIREMENTS:
• Operating system: Windows XP
• Coding language: ASP.Net with C#
• Database: SQL Server 2005

5.2 FLOW CHART

A flowchart is a type of diagram that represents an algorithm or process, showing the steps as boxes of various kinds and their order by connecting them with arrows. Process operations are represented in these boxes, and the arrows connecting them represent the flow of control. This diagrammatic representation can give a step-by-step solution to a given problem. Data flows are not typically represented in a flowchart, in contrast with data flow diagrams; rather, they are implied by the sequencing of operations. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.

FIG 4 FLOWCHART OF THE AUDIT SYSTEM

5.3 UNIFIED MODELING LANGUAGE

The Unified Modeling Language (UML) is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects. Using the UML helps project teams communicate, explore potential designs, and validate the architectural design of the software. Each UML diagram is designed to let developers and customers view a software system from a different perspective and in varying degrees of abstraction. UML diagrams commonly created in visual modeling tools include:

5.3.1 Use Case Diagram

This displays the relationship among actors and use cases. A use case is a description of a system's behavior from a user's standpoint. For system developers, this is a valuable tool: it is a tried-and-true technique for gathering system requirements from a user's point of view. That is important if the goal is to build a system that real people can use. Use cases are a relatively easy UML diagram to draw: start by listing the sequence of steps the user might take to complete an action. In graphical representations of use cases, a symbol for the actor is used.

For example, in our audit system a user will:
• create his/her account and generate a secret key;
• login using their user-id and secret key;
• upload a file.

The third party auditor and server will:
• validate the user;
• verify the uploaded file details;
• maintain the client details.

Use cases are collections of scenarios about system use. Each scenario describes a sequence of events. Each sequence is initiated by a person, another system, a piece of hardware, or by the passage of time. Entities that initiate sequences are called actors. The result of the sequence has to be something of use either to the actor who initiated it or to another actor. It is possible to reuse use cases. One way, inclusion, is to use the steps from one use case as part of the sequence of steps in another use case. Another way, extension, is to create a new use case by adding steps to an existing use case.

FIG 5 USE CASE DIAGRAM FOR AUDIT SYSTEM

3. classes are represented with boxes which contain three parts:    The upper part holds the name of the class The middle part contains the attributes of the class The bottom part gives the methods or operations the class can take or undertake In the design of a system. and the relationships among the classes. With detailed modeling. the classes of the conceptual design are often split into a number of subclasses. The class diagram is the main building block of object oriented modelling. 17 . a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes.5.2 Class Diagrams In software engineering. a number of classes are identified and grouped together in a class diagram which helps to determine the static relations between those objects. It is used both for general conceptual modelling of the systematics of the application. operations (or methods). their attributes. In the diagram. interactions in the application and the classes to be programmed. Class diagrams can also be used for data modeling. The classes in a class diagram represent both the main objects. and for detailed modelling translating the models into programming code.

The diagram contains three classes: Registration (attributes ID, OwnerID, Password, Gender, Mobile, EMail, Date; operations CreateAccount(), Loginidgenration()); File Archive (attributes FileID, FileName, FileSize, FilePath, FileOwner, MetaData, KeyRequest, DownloadStatus, ModifyStatus, VerifyStatus; operations metadatagenration(), fileupload()); and File Archive Modify (attributes FileID, FileName, FileSize, FilePath, FileOwner, MetaData, KeyRequest, VerifyStatus; operations comparemetadata(), fileupload()).

FIG 6 CLASS DIAGRAM FOR AUDIT SYSTEM
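For reference, the three classes in FIG 6 can be written out as rough C# skeletons. The property types and the placeholder method bodies below are assumptions for illustration; the diagram specifies only the attribute and operation names.

using System;

// Rough skeletons of the classes shown in FIG 6 (types and bodies assumed).
public class Registration
{
    public int ID { get; set; }
    public string OwnerID { get; set; }
    public string Password { get; set; }
    public string Gender { get; set; }
    public string Mobile { get; set; }
    public string EMail { get; set; }
    public DateTime Date { get; set; }

    public string Loginidgenration() => Guid.NewGuid().ToString("N");  // assumed: generate a login id
    public void CreateAccount() { /* persist the data owner's registration record */ }
}

public class FileArchive
{
    public int FileID { get; set; }
    public string FileName { get; set; }
    public long FileSize { get; set; }
    public string FilePath { get; set; }
    public string FileOwner { get; set; }
    public string MetaData { get; set; }
    public string KeyRequest { get; set; }
    public string DownloadStatus { get; set; }
    public string ModifyStatus { get; set; }
    public string VerifyStatus { get; set; }

    public void metadatagenration() { /* derive verification metadata for the uploaded file */ }
    public void fileupload() { /* store the file and its metadata with the cloud server */ }
}

public class FileArchiveModify
{
    public int FileID { get; set; }
    public string FileName { get; set; }
    public long FileSize { get; set; }
    public string FilePath { get; set; }
    public string FileOwner { get; set; }
    public string MetaData { get; set; }
    public string KeyRequest { get; set; }
    public string VerifyStatus { get; set; }

    public bool comparemetadata() => true;  // assumed: compare stored vs. recomputed metadata
    public void fileupload() { /* re-upload the modified file */ }
}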

5.3.3 Sequence Diagram

A sequence diagram is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. A sequence diagram shows object interactions arranged in time sequence. It depicts the objects and classes involved in the scenario and the sequence of messages exchanged between the objects needed to carry out the functionality of the scenario. Sequence diagrams are typically associated with use case realizations in the Logical View of the system under development. A sequence diagram shows, as parallel vertical lines (lifelines), different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur. This allows the specification of simple runtime scenarios in a graphical manner. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing diagrams.

FIG 7 SEQUENCE DIAGRAM FOR AUDIT SYSTEM

5.3.4 Activity Diagram

Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control, so activity diagrams can be regarded as a form of flowchart. Typical flowchart techniques lack constructs for expressing concurrency; however, the join and split symbols in activity diagrams only resolve this for simple cases, and the meaning of the model is not clear when they are arbitrarily combined with decisions or loops.

Activity diagrams are constructed from a limited number of shapes, connected with arrows. The most important shape types are:
• rounded rectangles represent actions;
• diamonds represent decisions;
• bars represent the start (split) or end (join) of concurrent activities;
• a black circle represents the start (initial state) of the workflow;
• an encircled black circle represents the end (final state).

Arrows run from the start towards the end and represent the order in which activities happen.

FIG 8 ACTIVITY DIAGRAM FOR AUDIT SYSTEM

CHAPTER – 6 RESULTS AND PERFORMANCE ANALYSIS

6.1 INPUT DESIGN

The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, and the steps necessary to put transaction data into a usable form for processing; this can be achieved by instructing the computer to read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
• What data should be given as input?
• How should the data be arranged or coded?
• The dialog to guide the operating personnel in providing input.
• Methods for preparing input validations and the steps to follow when errors occur.

OBJECTIVES

Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all the data manipulations can be performed. It also provides record viewing facilities.

Data can be entered with the help of screens. Appropriate messages are provided as and when needed so that the user is never left in a maze. When the data is entered it is checked for validity. Thus the objective of input design is to create an input layout that is easy to follow.

6.2 OUTPUT DESIGN

A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, as well as the hard copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and helps in decision-making.

Designing computer output should proceed in an organized, well thought out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy to use effectively. When analysts design computer output, they should identify the specific output that is needed to meet the requirements and select methods for presenting the information. The output form of an information system should accomplish one or more of the following objectives:
• Convey information about past activities, current status or projections of the future.
• Signal important events, opportunities, problems, or warnings.
• Trigger an action.
• Confirm an action.
• Create documents, reports, or other formats that contain information produced by the system.

CHAPTER – 7 SOFTWARE TESTING

7.1 INTRODUCTION TO SOFTWARE TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.

7.2 TYPES OF TESTING

In order to make sure that the system does not have any errors, the different levels of testing strategies applied at different phases of software development are:

7.2.1 Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

Test objectives:
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
• All links should take the user to the correct page.

Features to be tested:
• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.

7.2.2 Black box testing

In this strategy some test cases are generated as input conditions that fully execute all functional requirements for the program. This testing is used to find errors in the following categories:
• Incorrect or missing functions
• Interface errors
• Errors in data structure or external database access
• Performance errors
• Initialization and termination errors
In this testing only the output is checked for correctness; the logical flow of data is not checked. Testing is event driven and is more concerned with the basic outcome of screens or fields.

7.2.3 White box testing

In this the test cases are generated on the logic of each module by drawing flow graphs of that module, and logical decisions are tested on all the cases. It has been used to generate test cases so as to:
• guarantee that all independent paths have been executed;
• execute all logical decisions on their true and false sides;
• execute all the loops at their boundaries and within their operational bounds;
• execute internal data structures to ensure their validity.

7.2.4 Integration testing

Integration tests are designed to test integrated software components to determine if they actually run as one program. Integration testing is specifically aimed at exposing the problems that arise from the combination of components. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent.

7.2.5 System testing

System testing involves in-house testing of the entire system before delivery to the user. Its aim is to satisfy the user that the system meets all the requirements of the client's specifications.

Test approach: Testing can be done in two ways:
• Bottom up approach
• Top down approach

Bottom up approach: Testing can be performed starting from the smallest and lowest level modules and proceeding one at a time. All the bottom or low-level modules, procedures or functions are tested and then integrated. After the integration testing of the lower level integrated modules, the next level of modules is formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.

Top down approach: This is an approach to integration testing where the top integrated modules are tested first and the branches of the module are tested step by step until the end of the related module.

7.2.6 Test Cases

Test case 1
Description: User name is valid only if it is present in the database.
Input: shravya
Expected result: User name is valid or invalid.
Actual result: Invalid message appears.

Test case 2
Description: Invalid passwords are not accepted.
Input: User name = "hhhhh" and secret key = "1111"
Expected result: Invalid message should appear.
Actual result: Invalid message appears.

Test case 3
Description: User cannot register without entering proper field values.
Input: Do not enter the email-id field.
Expected result: User registration should fail.
Actual result: Error message appears.

Test case 4
Description: User cannot download the file without the secret key.
Input: Enter a file name.
Expected result: Encrypted text appears.
Actual result: Encrypted text appears.

Test case 5
Description: Secret key should be sent to mail if the user searches for a file and submits the email-id to TPA.
Input: Enter mail id.
Expected result: Secret key is sent to the mail id.
Actual result: Secret key is sent to the mail.

Test case 6
Description: User should not be able to upload a file without selecting a file to upload.
Input: Enter only the file name but do not select the file to upload.
Expected result: Error message appears.
Actual result: Error message "Error: unable to upload file" appears.
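As an illustration, test case 1 above can be captured as an automated unit test. The sketch below uses NUnit-style assertions against an in-memory stand-in for the user table; the class and method names are assumptions, since the report does not include the test code itself.

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical unit test for test case 1: a user name is valid only if it is
// present in the database. The in-memory repository stands in for the SQL
// Server table used by the application.
public class UserRepository
{
    private readonly HashSet<string> registeredUsers;

    public UserRepository(IEnumerable<string> users)
    {
        registeredUsers = new HashSet<string>(users);
    }

    public bool IsValidUser(string userName) => registeredUsers.Contains(userName);
}

[TestFixture]
public class LoginValidationTests
{
    [Test]
    public void UserName_NotInDatabase_IsRejected()
    {
        var repository = new UserRepository(new[] { "alice", "bob" });
        Assert.IsFalse(repository.IsValidUser("shravya"));   // the "Invalid message appears" outcome
    }

    [Test]
    public void UserName_InDatabase_IsAccepted()
    {
        var repository = new UserRepository(new[] { "alice", "bob" });
        Assert.IsTrue(repository.IsValidUser("alice"));
    }
}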

CHAPTER – 8 OUTPUT SCREENS

SCREEN 1: HOME PAGE – In this page the cloud user is expected to login using a user name and a secret key. If the user is new, he/she has to register and then login using the user name and secret key sent to their mail. Users can also search for files by clicking on the “Search files click here” label.

SCREEN 2: REGISTRATION PAGE – User has to register himself to login and upload his files. Registration can be done by entering valid field values.

SCREEN 3: SUCCESSFUL REGISTRATION – On successful registration a secret key is sent to the registered mail.

SCREEN 4: DATA OWNER LOGIN – Data owner can login using his user name and secret key after registration.

SCREEN 5: DATA OWNER PAGE – Data owner can view his profile, save his files to the cloud and upload file details.

SCREEN 6: DATA OWNER PROFILE – User can view his profile details.

SCREEN 7: FILE UPLOAD PAGE – User can upload his files by entering valid field values.

SCREEN 8: FILE UPLOAD PAGE – After selecting the file and entering the file details, click on the upload button to upload.

SCREEN 9: UPLOADED FILE DETAILS – Data owner can view the details of the files uploaded by him in this page.

SCREEN 10: FILE DOWNLOAD AND VERIFICATION – Data owner can verify the files by downloading them.

SCREEN 11: TPA LOGIN – TPA can login by using his user name and secret key.

SCREEN 12: TPA PAGE – TPA can view data owner details and the uploaded file details.

SCREEN 13: OWNER DETAIL PAGE – Data owner details can be viewed in this page.

SCREEN 14: UPLOADED FILE DETAILS – Uploaded file details can be viewed in this page.

SCREEN 15: SEARCH PAGE – Files can be searched by clicking on the search button on the home page.

SCREEN 16: SEARCH RESULT PAGE – User cannot view the contents of a file without the email-id and secret key.

SCREEN 17: SECRET KEY REQUEST PAGE – To view a file, the user should request a secret key from the TPA.

SCREEN 18: SECRET KEY SENT TO GIVEN MAIL – TPA sends the secret key to the registered mail id.

SCREEN 19: FILE DOWNLOAD USING SECRET KEY – By entering the mail-id and secret key, the user can download the file.

CHAPTER – 9 CONCLUSION

In this project, we addressed the construction of an efficient audit service for data integrity in clouds. In this audit service, the third party auditor, acting as an agent of the data owners, can issue periodic verification to monitor the change of outsourced data by following an optimized schedule. To realize the audit model, we proposed and quantified a new audit approach based on probabilistic queries and periodic verification, as well as an optimization method for the parameters of cloud audit services. This approach greatly reduces the workload on the storage servers, while still achieving detection of servers' misbehavior with a high probability. Hence, we only need to maintain the security of the third party auditor and execute the verification protocol. More importantly, our technology can be easily adopted in a cloud computing environment to replace the traditional hash-based solution.

BIBLIOGRAPHY

• Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., Zaharia, M.: A view of cloud computing. Commun. ACM 53(4), pp. 50–58, 2010.
• Ateniese, G., Burns, R., Curtmola, R., Herring, J., Kissner, L., Peterson, Z., Song, D.: Provable data possession at untrusted stores. In: Proceedings of the 2007 ACM Conference on Computer and Communications Security, CCS 2007, pp. 598–609, 2007.
• Ateniese, G., Di Pietro, R., Mancini, L.V., Tsudik, G.: Scalable and efficient provable data possession. In: Proceedings of the 4th International Conference on Security and Privacy in Communication Networks, SecureComm 2008, pp. 1–10, 2008.
• Barreto, P.S.L.M., Galbraith, S.D., O'Eigeartaigh, C., Scott, M.: Efficient pairing computation on supersingular abelian varieties. Des. Codes Cryptogr. 42(3), pp. 239–271, 2007.
• Beuchat, J.-L., Brisebarre, N., Detrey, J., Okamoto, E.: Arithmetic operators for pairing-based cryptography. In: Cryptographic Hardware and Embedded Systems, CHES 2007, 9th International Workshop, pp. 239–255, 2007.
• Boneh, D., Franklin, M.: Identity-based encryption from the Weil pairing. In: Advances in Cryptology (CRYPTO 2001), Vol. 2139 of LNCS, Springer-Verlag, pp. 213–229, 2001.
• Boneh, D., Boyen, X., Shacham, H.: Short group signatures. In: Proceedings of CRYPTO 2004, LNCS Series, pp. 41–55, 2004.
• Bowers, K.D., Juels, A., Oprea, A.: HAIL: a high-availability and integrity layer for cloud storage. In: Proceedings of the 2009 ACM Conference on Computer and Communications Security, CCS 2009, pp. 187–198, 2009.
• Cramer, R., Damgård, I., MacKenzie, P.: Efficient zero-knowledge proofs of knowledge without intractability assumptions. In: Public Key Cryptography, pp. 354–373, 2000.
• Erway, C., Küpçü, A., Papamanthou, C., Tamassia, R.: Dynamic provable data possession. In: Proceedings of the 2009 ACM Conference on Computer and Communications Security, CCS 2009, pp. 213–222, 2009.
• Hsiao, H.-C., Lin, Y.-H., Studer, A., Studer, C., Wang, K.-H., Kikuchi, H., Perrig, A., Sun, H.-M., Yang, B.-Y.: A study of user-friendly hash comparison schemes. In: ACSAC, pp. 105–114, 2009.
• Hu, H., Hu, L., Feng, D.: On a class of pseudorandom sequences from elliptic curves over finite fields. IEEE Trans. Inform. Theory 53(7), pp. 2598–2605, 2007.
• Juels, A., Kaliski Jr., B.S.: PORs: proofs of retrievability for large files. In: Proceedings of the 2007 ACM Conference on Computer and Communications Security, CCS 2007, pp. 584–597, 2007.