

Infrastructure as a Service (IaaS) cloud computing has emerged as a viable alternative to the acquisition and management of physical resources. With IaaS, users can lease storage and computation time from large datacenters. Leasing of computation time is accomplished by allowing users to deploy virtual machines (VMs) on the datacenter's resources. Since the user has complete control over the configuration of the VMs using on-demand deployments, IaaS leasing is equivalent to purchasing dedicated hardware, but without the long-term commitment and cost. The on-demand nature of IaaS is critical to making such leases attractive, since it enables users to expand or shrink their resources according to their computational needs, using external resources to complement their local resource base.


1.1 ABOUT THE PROJECT
This emerging model leads to new challenges in the design and development of IaaS systems. One of the commonly occurring patterns in the operation of IaaS is the need to deploy a large number of VMs on many nodes of a datacenter at the same time, starting from a set of VM images previously stored in a persistent fashion. For example, this pattern occurs when the user wants to deploy a virtual cluster that executes a distributed application, or a set of environments to support a workflow. We refer to this pattern as multideployment. Such a large deployment of many VMs at once can take a long time. This problem is particularly acute for VM images used in scientific computing, where image sizes are large (from a few gigabytes up to more than 10 GB). A typical deployment consists of hundreds or even thousands of such images. Conventional deployment techniques broadcast the images to the nodes before starting the VM instances, a process that can take tens of minutes to hours, not counting the time to boot the operating system itself. This can make the response time of the IaaS installation much longer than acceptable and erase the on-demand benefits of cloud computing.

1.2 PROJECT OVERVIEW
Once the VM instances are running, a similar challenge applies to snapshotting the deployment: many VM images that were locally modified need to be concurrently transferred to stable storage in order to capture the VM state for later use (e.g., for checkpointing or off-line migration to another cluster or cloud). We refer to this
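To get a feel for the scale of the problem, the following back-of-envelope sketch estimates the naive broadcast time. The image size, node count, and link speed are illustrative assumptions, not figures from the text:

```python
# Back-of-envelope estimate of multideployment transfer time.
# Assumed figures (not from the text): a 10 GB image, 100 nodes,
# and a repository uplink limited to about 1 Gbps (~125 MB/s).

image_gb = 10
nodes = 100
link_mb_per_s = 125  # ~1 Gbps

image_mb = image_gb * 1024

# Naive broadcast: the repository pushes the full image to every node,
# so its uplink serializes all transfers.
naive_seconds = image_mb * nodes / link_mb_per_s
naive_minutes = naive_seconds / 60

print(round(naive_minutes))  # 137, i.e., over two hours
```

Even under these generous assumptions the deployment alone takes over two hours, which is consistent with the "tens of minutes to hours" observation above.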


pattern as multisnapshotting. Conventional snapshotting techniques rely on custom VM image file formats to store only incremental differences in a new file that depends on the original VM image as the backing file. When taking frequent snapshots for a large number of VMs, such approaches generate a large number of files and interdependencies among them, which are difficult to manage and which interfere with the ease-of-use rationale behind clouds. Furthermore, as datacenters grow and clouds are increasingly federated, configurations are becoming more and more heterogeneous. Custom image formats are not standardized and can be used only with specific hypervisors, which limits the ability to easily migrate VMs among different hypervisors. Therefore, multisnapshotting must be handled in a transparent and portable fashion that hides the interdependencies of incremental differences and exposes standalone VM images, while keeping maximum portability among different hypervisor configurations. In addition to incurring significant delays and raising manageability issues, these patterns may also generate high network traffic that interferes with the execution of applications on leased resources and generates high utilization costs for the user.

1.3 PROJECT DESCRIPTION
This paper proposes a distributed virtual file system specifically optimized for both the multideployment and multisnapshotting patterns. Since the patterns are complementary, we investigate them in conjunction. Our proposal offers a good balance between performance, storage space, and network traffic consumption, while handling snapshotting transparently and exposing standalone, raw image files (understood by most hypervisors) to the outside. Our contributions can be summarized as follows:
• We introduce a series of design principles that optimize multideployment and multisnapshotting patterns and describe how our design can be integrated with IaaS infrastructures.
• We show how to realize these design principles by building a virtual file system that leverages versioning-based distributed storage services. To illustrate this point, we describe an implementation on top of BlobSeer, a versioning storage service specifically designed for high throughput under concurrency.



2.1 GENERAL REQUIREMENT
• The system should provide means for efficient cryptographic computability.
• It should decrease the redundancy of information coding and increase efficiency compared to traditional encoding methods.
• By using the technology of DNA digital coding, a traditional encryption method such as DES or RSA can be used to preprocess the plaintext.
• The digital coding of a DNA sequence is convenient for mathematical and logical operations.

2.2 PERFORMANCE REQUIREMENT
• The response should be fairly fast.
• The user should not face any errors or improper pages.
• Since the system incorporates many features, regression testing has to be performed.


The calculation has to be correct and accurate.

a. Security: Security has to be enforced at two levels. At the first level, no one other than the person with the correct key pairs can obtain the plain text. At the next level, only if the correct key pairs have been used does the system recover the text.
b. Reliability: The project is guaranteed to provide reliable results. However, the reliability depends on the plain text, the key pairs, and the way the user uses the system.
c. Maintainability: The project has to be developed with ease of maintenance in mind. The modules have to be clearly documented and possess the least degree of coupling so that they can be independently modified.
d. Reusability: Code pertaining to the general functionality can be reused, since the keys are generic.
e. Design Constraint: The database has to be designed in such a way that computation is much reduced. The system should be highly efficient in providing the intended functionality; through efficient code, it utilizes less time and storage.

2.3 HARDWARE REQUIREMENTS
• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15" VGA
• Mouse : Logitech
• RAM : 512 MB

2.4 SOFTWARE REQUIREMENTS
• Operating system : Windows XP
• Coding Language : C#
• Database : SQL Server 2005

CHAPTER 3 LITERATURE SURVEY

Literature survey is the most important step in the software development process. Before developing the tool, it is necessary to determine the time factor, economy, and company strength. Once these things are satisfied, the next step is to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books, from websites, and from other electronic means. Before building the system, the above considerations are taken into account for developing the proposed system. We have to analyze cloud computing.

3.1 DATA CENTRE SECURITY
• Professional security staff utilizing video surveillance and state-of-the-art intrusion detection systems.
• When an employee no longer has a business need to access the datacenter, his privileges to access the datacenter should be immediately revoked.

3.2 DATA LOCATION
• When a user uses the cloud, the user probably won't know exactly where the data is hosted or in what country it will be stored.
• Data should be stored and processed only in specific jurisdictions as defined by the user.
• The provider should also make a contractual commitment to obey local privacy requirements on behalf of its customers.

3.3 BACKUP OF DATA
• Data stored in the database of the provider should be redundantly stored in multiple physical locations.
• Data that is generated during the running of a program on instances is all customer data, and therefore the provider should not perform backups.
• Control of the administrator on databases.
• Audit tools so that users can easily determine how their data is stored, used, and protected, and can verify policy enforcement.
• Data-centered policies that are generated when a user provides personal or sensitive information, and that travel with that information throughout its lifetime to ensure that the information is used only in accordance with the policy.
• All physical and electronic access to datacenters by employees should be logged and audited routinely.

3.4 DATA SANITIZATION
• Sanitization is the process of removing sensitive information from a storage device.
• What happens to data stored in a cloud computing environment once it has passed its user's "use by" date?
• What data sanitization practices does the cloud computing service provider propose to implement for redundant and retiring data storage devices as and when these devices are retired or taken out of service?

3.5 NETWORK SECURITY
• Denial of Service: servers and networks are brought down by a huge amount of network traffic, and users are denied access to a certain Internet-based service.
• QoS Violation: through congestion, delaying or dropping packets, or through resource hacking.
• Man-in-the-Middle Attack: to overcome it, always use SSL.
• IP Spoofing: spoofing is the creation of TCP/IP packets using somebody else's IP address. Solution: the infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.
• Other attacks: DNS hacking, routing table "poisoning", XDoS attacks.

3.6 INFORMATION SECURITY
• Security related to the information exchanged between different hosts or between hosts and users.
• These issues pertain to secure communication, authentication, and concerns about single sign-on and delegation.
• Secure communication issues include those security concerns that arise during the communication between two entities. These include confidentiality and integrity issues: confidentiality indicates that all data sent by users should be accessible only to "legitimate" receivers, and integrity indicates that all data received should only be sent or modified by "legitimate" senders.
• Solution: public key encryption, X.509 certificates, and the Secure Sockets Layer (SSL) enable secure authentication and communication over computer networks.

CHAPTER 4 MODULE DESCRIPTION

4.1 CLOUD INFRASTRUCTURE
IaaS platforms are typically built on top of clusters made out of loosely coupled commodity hardware that minimizes per-unit cost and favors low power over maximum speed. The machines are configured with proper virtualization technology, in terms of both hardware and software, such that they are able to host the VMs. Disk storage (cheap hard drives with capacities on the order of several hundred GB) is attached to each machine, while the machines are interconnected with standard Ethernet links. In order to provide persistent storage, a dedicated repository is deployed, either as a centralized or as a distributed storage service, running on dedicated storage nodes.

4.2 AGGREGATE THE STORAGE
In most cloud deployments, the disks locally attached to the compute nodes are not exploited to their full potential. Most of the time, such disks are used to hold local copies of the images corresponding to the running VMs, as well as to provide temporary storage for them during their execution, which utilizes only a small fraction of the total disk size.

4.3 OPTIMIZE MULTISNAPSHOTTING
Saving a full VM image for each VM is not feasible in the context of multisnapshotting. Since only small parts of the VMs are modified, this would mean massive unnecessary duplication of data, leading not only to an explosion of utilized storage space but also to unacceptably high snapshotting time and network bandwidth utilization.
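The idea can be illustrated with a small copy-on-write sketch (not the actual implementation; the chunk and image sizes are assumed for the example) in which a snapshot persists only the chunks modified since deployment:

```python
# Sketch of chunk-level copy-on-write snapshotting: only chunks written
# since deployment are saved; untouched chunks remain shared with the
# read-only base image. Chunk size (64 KiB) and image size are assumptions.

CHUNK = 64 * 1024

class CowImage:
    def __init__(self, total_chunks):
        self.total_chunks = total_chunks
        self.dirty = {}  # chunk index -> new content

    def write(self, chunk_index, data):
        # A write only marks the enclosing chunk as dirty.
        self.dirty[chunk_index] = data

    def snapshot(self):
        # Persist just the modified chunks, not a full image copy.
        return dict(self.dirty)

# Assumed 2 GiB image in which the VM touched only two chunks.
img = CowImage(total_chunks=2 * 1024 * 1024 * 1024 // CHUNK)
img.write(10, b"x" * CHUNK)
img.write(11, b"y" * CHUNK)
snap = img.snapshot()

print(len(snap), img.total_chunks)  # 2 32768
```

Here a snapshot costs 2 chunks instead of 32768, which is the storage and bandwidth saving the section argues for.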

4.4 ZOOM ON MIRRORING
One important aspect of on-demand mirroring is the decision of how much to read from the repository when data is unavailable locally, in such a way as to obtain good access performance. A straightforward approach is to translate every read issued by the hypervisor into either a local or a remote read, depending on whether the requested content is locally available. While this approach works, its performance is questionable. More specifically, many small remote read requests to the same chunk generate significant network traffic overhead (because of the extra networking information encapsulated with each request), as well as low throughput (because of the latencies of the requests, which add up).
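The trade-off can be illustrated with a toy model (chunk size and read pattern are assumptions, not measurements): translating every small read into a remote request, versus fetching whole chunks once and serving subsequent reads from the local mirror:

```python
# Toy comparison of read-translation strategies for on-demand mirroring.
# Assumed 256 KiB chunks and a stream of small sequential 4 KiB reads.

CHUNK = 256 * 1024

class Mirror:
    def __init__(self):
        self.local = set()       # chunk indices already mirrored locally
        self.remote_requests = 0

    def read_per_request(self, offset, size):
        # Naive: every read that misses locally becomes a remote request.
        self.remote_requests += 1

    def read_chunked(self, offset, size):
        # Fetch the enclosing chunk(s) once; later reads hit the local copy.
        first = offset // CHUNK
        last = (offset + size - 1) // CHUNK
        for c in range(first, last + 1):
            if c not in self.local:
                self.local.add(c)
                self.remote_requests += 1

naive, chunked = Mirror(), Mirror()
for i in range(1000):          # 1000 sequential 4 KiB reads (~4 MiB)
    off = i * 4096
    naive.read_per_request(off, 4096)
    chunked.read_chunked(off, 4096)

print(naive.remote_requests, chunked.remote_requests)  # 1000 16
```

The per-request translation issues 1000 remote requests, each paying network and latency overhead, while chunk-granularity fetching issues only 16, at the price of possibly transferring data that is never read.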

CHAPTER 5 SYSTEM STUDY

5.1 FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are
• Economical feasibility
• Technical feasibility
• Social feasibility

5.1.1 Economical feasibility
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

5.1.2 Technical feasibility
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not have a high demand on the available technical resources; otherwise this will lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or null changes are required for implementing this system.

5.1.3 Social feasibility
The aspect of this study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it.

CHAPTER 6 SYSTEM ANALYSIS

6.1 EXISTING SYSTEM
The huge computational potential offered by large distributed systems is hindered by poor data-sharing scalability. Cloud computing is a "buzz word" covering a wide variety of aspects such as deployment, load balancing, provisioning, and data and processing outsourcing. Clouds have been defined just as virtualized hardware and software plus the previous monitoring and provisioning technologies; the role of virtualization in clouds is also emphasized by identifying it as a key component. Such systems impose demanding requirements on data storage. One such requirement is the need to efficiently cope with massive unstructured data (organized as huge sequences of bytes, i.e., BLOBs that can grow to the terabyte range) in very large-scale distributed systems, while maintaining a very high data throughput for highly concurrent, fine-grained data accesses. We addressed several major requirements related to these challenges.

Drawbacks
• It gives less performance and storage space.
• It is not possible to build a scalable, high-performance distributed data-storage service that facilitates data sharing at large scale.
• Network traffic consumption is also very high, due to not concentrating on application status.

6.2 PROPOSED SYSTEM
We propose a distributed virtual file system specifically optimized for both the multideployment and multisnapshotting patterns. Since the patterns are complementary, we investigate them in conjunction. Our proposal offers a good balance between performance, storage space, and network traffic consumption, while handling snapshotting transparently and exposing standalone, raw image files (understood by most hypervisors) to the outside.
• We introduce a series of design principles that optimize multideployment and multisnapshotting patterns and describe how our design can be integrated with IaaS infrastructures.
• We show how to realize these design principles by building a virtual file system that leverages versioning-based distributed storage services. To illustrate this point, we describe an implementation on top of BlobSeer, a versioning storage service specifically designed for high throughput under concurrency.

Advantage
• A good balance between performance, storage space, and network traffic consumption, while handling snapshotting transparently and exposing standalone, raw image files.


CHAPTER 8 SYSTEM IMPLEMENTATION

8.1 ARCHITECTURE DIAGRAM

Figure 8.1 Cloud architecture diagram

8.2 ZOOM ON THE FUSE MODULE DIAGRAM

Figure 8.2 Fuse module diagram

CHAPTER 9 SOFTWARE TESTING

IMPORTANCE OF TESTING
Testing is difficult. It requires knowledge of the application and the system architecture, and it requires planning and design. The majority of the preparation work is tedious: the test conditions, test data, and expected results are generally created manually. Nevertheless, systems testing is important. Software developers must deliver reliable and secure systems that satisfy the user's requirements; any savings realized in downsizing the application will be negated by the costs to correct software errors and reprocess information. In a mainframe environment, when the system is distributed to multiple sites, any errors or omissions in the system will affect several groups of users. System testing is also one of the final activities before the system is released for production, and there is always pressure to complete it promptly to meet the deadline. A key item in successful systems testing is developing a testing methodology rather than relying on the individual style of the test practitioner.

The systems testing effort must follow a defined strategy: it must have an objective, a scope, and an approach, and it must be thorough. The tests should be reviewed prior to execution to verify their accuracy and completeness, and they must be documented and saved for reuse. Testing is not a one-step activity, and it is not an art; it is a skill that can be taught. The emphasis is on the outcome of the testing, rather than on what is tested and how it is tested. System testing is the most extensive testing of the system: it verifies that the system performs the business requirements accurately, completely, and within the required performance limits. It is a critical process in system development. It requires more manpower and machine processing time than any other testing level, and it is therefore the most expensive testing level.

9.1 TYPES OF TESTS

9.1.1 Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. Unit testing is the testing of individual software units of the application; it is controlled and is done after the completion of an individual unit, before integration. This is a structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

9.1.2 Integration testing
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

9.1.3 Functional testing
Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

9.1.4 System testing
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

9.1.5 White box testing
White box testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

9.1.6 Black box testing
Black box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is a testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

9.2 UNIT TESTING
Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.

9.2.1 Test strategy and approach
Field testing will be performed manually, and functional tests will be written in detail.

9.2.2 Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages, and responses must not be delayed.

9.2.3 Features to be tested
• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.

9.3 INTEGRATION TESTING
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications (e.g., components in a software system or, one step up, software applications at the company level) interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects were encountered.

9.4 ACCEPTANCE TESTING
User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects were encountered.
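As an illustration of the unit-testing objectives above (correct entry format, no duplicate entries), the following sketch uses Python's unittest framework with hypothetical validate/register helpers. The project's own code is C#, so this is purely indicative of the style of test, not part of the system:

```python
# Illustrative unit tests for two of the stated objectives: entries must be
# of the correct format, and no duplicate entries should be allowed.
# valid_username and register are hypothetical helpers, not project code.
import re
import unittest

def valid_username(name):
    # Assumed format rule: 3-20 characters, letters and digits only.
    return re.fullmatch(r"[A-Za-z0-9]{3,20}", name) is not None

def register(users, name):
    # Reject duplicates, mirroring the "no duplicate entries" feature.
    if name in users:
        return False
    users.add(name)
    return True

class EntryTests(unittest.TestCase):
    def test_format(self):
        self.assertTrue(valid_username("arul01"))
        self.assertFalse(valid_username("a"))          # too short
        self.assertFalse(valid_username("bad name!"))  # illegal characters

    def test_no_duplicates(self):
        users = set()
        self.assertTrue(register(users, "arul01"))
        self.assertFalse(register(users, "arul01"))
```

Such tests would normally be run with `python -m unittest`; the equivalent in the project's environment would be an NUnit or MSTest suite.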

CHAPTER 10 SYSTEM MAINTENANCE

10.1 CODE MANAGEMENT
The code that targets .NET, and which contains certain extra information ("metadata") to describe itself, is known as managed code. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution. Some .NET languages, such as C#, Visual Basic.NET, and JScript.NET, produce managed code by default, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available.

10.2 DATA MANAGEMENT
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, whereas others do not. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications: data that is garbage collected, and data that is not garbage collected but instead is looked after by unmanaged code.

CHAPTER 11 CONCLUSION AND FUTURE ENHANCEMENT

11.1 CONCLUSION
As cloud computing becomes increasingly popular, efficient management of VM images, such as image propagation to compute nodes and image snapshotting for checkpointing or migration, is critical. The performance of these operations directly affects the usability of the benefits offered by cloud computing systems. We introduced several techniques that integrate with cloud middleware to efficiently handle two patterns: multideployment and multisnapshotting.

11.2 FUTURE ENHANCEMENT
We propose a lazy VM deployment scheme that fetches VM image content as needed by the application executing in the VM, thus reducing the pressure on the VM storage service for heavily concurrent deployment requests. Furthermore, we leverage object versioning to save only local VM image differences back to persistent storage when a snapshot is created, yet provide the illusion that the snapshot is a different, fully independent image. This has two important benefits. First, it handles snapshotting transparently at the level of the VM image repository, greatly simplifying the management of snapshots. Second, it handles the management of updates independently of the hypervisor, thus greatly improving the portability of VM images and compensating for the lack of VM image format standardization. With respect to multisnapshotting, interesting reductions in time and storage space can be obtained by introducing deduplication schemes. We also plan to fully integrate the current work with Nimbus and explore its benefits for more complex HPC applications in the real world.

APPENDICES

APPENDIX 1 SAMPLE CODING

A.1.1 Login

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Data.Sql;
using System.Data.SqlClient;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace cloudproject
{
    public partial class Form1 : Form
    {
        Form2 a = new Form2();
        SqlCommand cmd;
        SqlDataAdapter ad = new SqlDataAdapter();
        SqlConnection cn = new SqlConnection("Data Source=ARULPC\\SQLEXPRESS;Initial Catalog=cloudproject;Integrated Security=True");

        public Form1()
        {
            InitializeComponent();
        }

        private void label1_Click(object sender, EventArgs e)
        {
        }

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        private void button1_Click(object sender, EventArgs e)
        {
            Form3 a = new Form3();
            SqlDataReader dr;
            cn.Open();
            cmd = new SqlCommand("select name from regform where name='" + textBox1.Text + "'", cn);
            dr = cmd.ExecuteReader();
            if (dr.Read())
            {
                dr.Close();
                cmd = new SqlCommand("select * from regform where name='" + textBox1.Text + "' and password='" + textBox2.Text + "'", cn);
                dr = cmd.ExecuteReader();
                if (dr.Read())
                {
                    a.Show();
                }
                else
                {
                    MessageBox.Show("Check your user name and password");
                }
            }
            else
            {
                MessageBox.Show("Check your user name and password");
            }
            cn.Close();
        }

        private void button2_Click(object sender, EventArgs e)
        {
            a.ShowDialog();
        }
    }
}

A.1.2 Create New User

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Data.Sql;
using System.Data.SqlClient;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace cloudproject
{
    public partial class Form2 : Form
    {
        SqlConnection cn = new SqlConnection("Data Source=ARULPC\\SQLEXPRESS;Initial Catalog=cloudproject;Integrated Security=True");
        SqlDataAdapter ad = new SqlDataAdapter();

        public Form2()
        {
            InitializeComponent();
        }

        private void label4_Click(object sender, EventArgs e)
        {
        }

        private void button1_Click(object sender, EventArgs e)
        {
            ad.InsertCommand = new SqlCommand("INSERT INTO regform VALUES(@name,@password,@retypepassword,@network)", cn);
            ad.InsertCommand.Parameters.Add("@name", SqlDbType.VarChar).Value = textBox1.Text;
            ad.InsertCommand.Parameters.Add("@password", SqlDbType.VarChar).Value = textBox2.Text;
            ad.InsertCommand.Parameters.Add("@retypepassword", SqlDbType.VarChar).Value = textBox3.Text;
            ad.InsertCommand.Parameters.Add("@network", SqlDbType.VarChar).Value = textBox4.Text;
            cn.Open();
            ad.InsertCommand.ExecuteNonQuery();
            //MessageBox.Show(cn.State.ToString());
            cn.Close();
            MessageBox.Show("Registered Successfully");
            Form1 k = new Form1();
            k.Show();
        }

        private void Form2_Load(object sender, EventArgs e)
        {
        }
    }
}

A.1.3 Cloud Infrastructure

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace cloudproject
{
    public partial class Form3 : Form
    {
        Form5 a = new Form5();
        Form7 b = new Form7();
        Form8 c = new Form8();
        Form4 d = new Form4();
        Form6 f = new Form6();

        public Form3()
        {
            InitializeComponent();
        }

        private void label1_Click(object sender, EventArgs e)
        {
        }

        private void button1_Click(object sender, EventArgs e)
        {
            a.ShowDialog();
        }

        private void button2_Click(object sender, EventArgs e)
        {
            b.ShowDialog();
        }

        private void button3_Click(object sender, EventArgs e)
        {
            c.ShowDialog();
        }

        private void button4_Click(object sender, EventArgs e)
        {
            d.ShowDialog();
        }

        private void button5_Click(object sender, EventArgs e)
        {
            f.ShowDialog();
        }

        private void Form3_Load(object sender, EventArgs e)
        {
        }
    }
}

private void Form3_Load(object sender.Windows.1.Text. using System.Forms.Drawing. using System. EventArgs e) { } } } A.4 Aggregate the Storage and Image Mirroring using System. using System.Data. using System.Generic.Collections. namespace cloudproject { public partial class Form3 : Form 41 . using System. using System.ComponentModel. using System.Linq.

EventArgs e) 42 . Form4 d = new Form4(). Form6 f = new Form6(). } private void button2_Click(object sender.{ Form5 a = new Form5(). EventArgs e) { } private void button1_Click(object sender. Form8 c = new Form8(). } private void label1_Click(object sender.ShowDialog(). EventArgs e) { a. Form7 b = new Form7(). public Form3() { InitializeComponent().

EventArgs e) { f. } private void button4_Click(object sender. } private void button3_Click(object sender. EventArgs e) { } } } 43 . } private void Form3_Load(object sender.{ b.ShowDialog().ShowDialog(). EventArgs e) { d. } private void button5_Click(object sender.ShowDialog(). EventArgs e) { c.ShowDialog().

APPENDIX 2 SCREENSHOT

Figure A.2.1 Login

Figure A.2.2 Create User

Figure A.2.3 Cloud Infrastructure

Figure A.2.4 Aggregate the Storage and Image Mirroring

Figure A.2.5 Optimize Multi Snapshotting

Figure A.2.6 Zoom on Mirroring

Figure A.2.7 Backup the Data to Gmail
