
ABSTRACT

Cloud computing is an Internet-based model that enables convenient, on-demand, pay-per-use access to a pool of shared resources. It is a new technology that satisfies a user's requirements for computing resources such as networks, storage, servers, services and applications, without physically acquiring them. It reduces the organization's overhead of maintaining a large system, but it also carries associated risks and threats, including security, data leakage, insecure interfaces, resource sharing and insider attacks.

Introduction

Cloud computing provides customers the illusion of infinite computing resources, available from anywhere, at any time, on demand. Computing at such an immense scale requires a framework that can support extremely large datasets housed on clusters of commodity hardware. Two examples of such frameworks are Google's MapReduce and Microsoft's Dryad. First we discuss implementation details of these frameworks and the drawbacks where future work is required. Next we discuss the challenges of computing at such a large scale. In particular, we focus on the security issues which arise in the cloud: the confidentiality of data, the retrievability and availability of data, and issues surrounding the correctness and confidentiality of computation executing on third-party hardware.

One of the most demanding aspects of the enterprise-architect role is to balance constantly changing business needs with the ability of the IT organization to meet those needs consistently. At the highest level, enterprise architects must establish a means of determining the core competencies of the organization and then establish a process for determining which applications support those core competencies, as well as which should remain in-house and which should not. Organizations that use functional SOA-based business services can consider migrating these services to the Cloud, which is discussed in greater detail in the next section. However, in some cases, business applications cannot easily be partitioned into service-contract-driven clients and service-provider components. This might be the case when the system involves complex legacy processes and human-driven workflows. In these cases, it might be possible to move the workflow into the Cloud and support a hybrid mode of operation in which the workflow can span both online and offline scenarios. Designing a service that will run in the Cloud requires a service provider to consider requirements that are related to multitenant applications. Multitenant applications require alternative schema designs that must be flexible, secure, and versioned.

There are three main ways to think about extending current portfolios of on-premises technology with cloud computing: consume the Cloud, use the Cloud, and embrace the Cloud.

Consume the Cloud is fundamentally about an enterprise outsourcing applications and IT services to third-party cloud providers. The key business drivers that push enterprises to consume online services are reducing IT expenses and refocusing valuable bandwidth on enabling core business capabilities. Cloud providers can usually offer commodity services cheaper and better because of their economies of scale, and they can pass on the cost savings and efficiency to enterprise customers. For Microsoft customers, some good examples are the Microsoft Business Productivity Online Suite (consisting of Exchange Online and Office SharePoint Online) and CRM Online.

Use the Cloud enables enterprises to tap into cloud platforms and infrastructure services and obtain a virtually unlimited amount of compute and storage capacity when they need it, without having to make large, upfront capital investments in hardware and infrastructure software.
Such a utility computing model gives enterprises more agility in acquiring IT resources to meet dynamic business demands. In addition, by using cloud services, enterprises can avoid affecting the existing corporate infrastructure and speed up the deployment of Web-based applications to support new business initiatives that seek closer communication with customers and partners. For Microsoft customers, some good examples include Windows Azure and SQL Azure.

Embrace the Cloud occurs when enterprises deploy technology that enables them to offer cloud services to their customers and partners. Service-oriented enterprises are best positioned to take advantage of this model by transforming their core business assets into information cloud services that differentiate them from their competitors. For Microsoft customers, on-premises technologies such as the BizTalk Server Enterprise Service Bus Toolkit can integrate data feeds and orchestrate workflows that process information exchange via cloud services.

Statement of the problem and hypothesis

The idea of an ad-hoc cloud is to deploy cloud services over an organization's existing infrastructure, rather than using dedicated machines within data centres. Key to this approach is the ability of the cloud infrastructure to manage the use of computational and storage resources on individual machines, to the extent that the cloud is sufficiently non-intrusive that individuals will permit its operation on their machines. This project will investigate how this may be achieved.

Implementers and users of cloud services may wish to consider various high-level emergent properties of those services. For example, the degree of replication of data items and the physical locations of the replicas both affect data resilience. The degree of replication of service processes and the fail-over mechanisms affect service availability. The management of computations on individual machines affects overall resource utilization and various QoS properties. The aim of this project is to develop and evaluate techniques to allow desired high-level properties to be specified, mapped into appropriate low-level actions, and the results to be measured and reported in terms of the high-level properties.

Aims and objectives

The aim of this project is to investigate how underused computing resources within an enterprise may be harvested and harnessed to improve return on IT investment. In particular, the project seeks to increase the efficiency of use of general-purpose computers such as office machines and lab computers. As a motivating example, the (small) University of St Andrews operates ten thousand machines. In aggregate, their unused processing and storage resources represent a major untapped computing resource. The project will make harvested resources available in the form of ad-hoc clouds, the composition of which varies dynamically according to the supply of resources and demand for cloud services.
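To make the non-intrusiveness requirement concrete, one plausible starting point is a simple admission rule: a machine contributes resources only while it is sufficiently idle. The thresholds and names below are invented for illustration; this is a sketch of one possible policy, not the project's actual design.

```python
def can_join_cloud(cpu_idle_fraction, free_disk_gb, user_active,
                   min_idle=0.75, min_disk_gb=20):
    """Toy admission rule for an ad-hoc cloud: harvest a machine's
    resources only when doing so is unlikely to disturb its owner.
    Thresholds are illustrative assumptions, not measured values."""
    if user_active:
        return False  # never intrude on an interactive session
    return cpu_idle_fraction >= min_idle and free_disk_gb >= min_disk_gb

# An idle lab machine overnight joins; a busy office machine does not.
assert can_join_cloud(0.95, 120, user_active=False)
assert not can_join_cloud(0.40, 120, user_active=False)
assert not can_join_cloud(0.95, 120, user_active=True)
```

A real policy would of course need to be measured against user tolerance, which is precisely what the project proposes to investigate.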

Computer Science is highly suited to experimental science. Unfortunately, many computer scientists are very bad at conducting high-quality experiments. The goal of this project is to make experiments better by using the Cloud in a number of ways. The core idea is that experiments are formed as artifacts, for example as a virtual machine that can be put into the cloud. For example, a researcher might want to experiment on the speed of their algorithms in different operating systems. They would make a number of different virtual machines containing each version, which would be sent to the cloud, the experiment run, and the results collected. As well as the results of running the experiment being stored in the cloud, the experiment itself is also there, making the cloud into an experimental repository as well as the laboratory. This enables reproducibility of experiments, a key concept that has too often been ignored. While using the cloud, the project can feed back into research on clouds by investigating how experiments involving the cloud itself can be formulated for use in our new Experimental Laboratory.

The MapReduce skeleton, introduced by Google to provide a uniform framework for their massively parallel computations, is proving remarkably flexible, and is being proposed as a uniform framework for high-performance computing in the cloud. This project would investigate a range of problems in the established area of computational abstract algebra in order to see whether, or how, they can be effectively parallelised using this framework.

The cost and time to move data around is currently one of the major bottlenecks in the cloud. Users with large volumes of data may therefore wish to specify where that data should be made available, when it may be moved around, and so on. Furthermore, regulations, such as data protection regulations, may place constraints on the movement of data and the national jurisdictions where it may be maintained. The aim of this project is to investigate the practical issues which affect data migration in the cloud, to propose mechanisms to specify policies on data migration, and to use these as a basis for a data management system.
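The skeleton idea behind MapReduce, mentioned above, can be modelled in miniature: the framework owns grouping and scheduling, while plugged-in functions supply the algorithm. The following sequential Python sketch (word counting is just an illustrative task) captures the shape of the pattern, not how a real distributed implementation works internally.

```python
from collections import defaultdict

def map_reduce(inputs, mapper, reducer):
    """Tiny sequential model of the MapReduce skeleton: the mapper and
    reducer are the plugged-in algorithm (*what*), while grouping by
    key stands in for the framework's partitioning and scheduling
    (*how*)."""
    groups = defaultdict(list)
    for item in inputs:                     # map phase
        for key, value in mapper(item):
            groups[key].append(value)
    return {key: reducer(key, values)       # reduce phase
            for key, values in groups.items()}

# Plugged-in algorithm: word counting.
def split_words(line):
    return [(word, 1) for word in line.split()]

def count(_key, values):
    return sum(values)

result = map_reduce(["the cloud", "the grid"], split_words, count)
# result == {"the": 2, "cloud": 1, "grid": 1}
```

Parallelising an abstract-algebra computation under this skeleton then reduces to expressing it as suitable mapper and reducer functions.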
The aim of this project is to investigate how a migration of applications may result in changes to the way that work is actually done. We know from many years of ethnography that work practice in a setting evolves to reflect the systems and culture of that setting, and that people develop workarounds to cope with system problems and failures. How might current workarounds change when the system is in the cloud rather than locally provided? Do the affordances of systems in the cloud differ from those that are locally provided? What 'cloud-based systems' (e.g. Twitter) might be used to support new kinds of workaround and communication?

Review of literature

As soon as electricity became purchased as a utility (rather than being generated locally), electricity meters were developed to measure usage. Similarly, if utility computing is going to become popular, suppliers will need to measure usage for provisioning, and users will need to measure usage to determine whether they are receiving the service for which they are paying. The aim of this project is to develop new techniques and metrics for measuring cloud computing performance and verifying service-level agreements within clouds, which may be difficult when the cloud itself is designed to be "invisible".

Cloud computing is a virtual, service-oriented, location-independent computing architecture which provides on-demand servers, computing resources, software and storage, just like an electricity grid. Customers using cloud services do not own the physical infrastructure and hence avoid capital expenditure and other overheads. They pay as they consume resources, and a fee is charged only for the resources used. Cloud computing has penetrated many areas, from real-time entertainment services in the automobile sector to complex computing for health care research. It has many more possibilities to change the way organizations make use of technology to supply goods and services to customers. It has the potential to ease current constraints such as time, speed, space, power, scalability and cost. It has immense usage and can completely change the traditional technological landscape, including hardware infrastructure, networks, servers, storage, interfaces and applications. In fact, small to medium-sized organizations have already started adopting cloud services to boost their businesses, reach customers globally and reduce the cost of running their services.

Supply chain management applications largely comprise backend hardware infrastructure, network, storage and, most importantly, in-house developed customized supply chain applications. Excluding the custom-developed supply chain applications, all other components are already very much included in cloud-based services.
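The pay-as-you-consume model described above amounts to metering usage events and pricing each one. A minimal sketch, with invented rates rather than any provider's actual tariff:

```python
def bill(usage_events, rates):
    """Sum metered usage into a single pay-per-use charge.
    Rates and units are invented for illustration, not any
    provider's actual tariff."""
    total = 0.0
    for resource, quantity in usage_events:
        total += quantity * rates[resource]
    return round(total, 2)

rates = {"cpu_hour": 0.10, "gb_stored_month": 0.05, "gb_transferred": 0.12}
events = [("cpu_hour", 250), ("gb_stored_month", 40), ("gb_transferred", 15)]
print(bill(events, rates))  # 250*0.10 + 40*0.05 + 15*0.12 -> 28.8
```

The hard research question is not the arithmetic but trustworthy metering: both parties need to agree on the usage events themselves, which is what service-level agreement verification addresses.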
The cost of managing those services in-house is far more than that of getting services from cloud providers. Companies face tremendous pressure and challenges to scale up their hardware infrastructure at times of need; this comes at a hefty cost in money and time, and ongoing operation adds more cost. In the case of cloud services, a company can just pay for the increased usage of computational resources, which can be provided on demand. That is one reason small and mid-sized companies are embracing cloud-based services, and the rate of growth is expected to reach double digits this year, according to one technology research company. But the most important component, the custom-developed supply chain application, is the biggest obstruction to adopting cloud-based services, because the prerequisite for using a cloud is an Internet-based application which is general and standard and does not require frequent changes. This is more of a problem for large businesses, which have their own in-house setup to develop and maintain applications and use their own data centers. But slowly and gradually, cloud providers will develop services and solutions attractive enough in terms of cost and service that even big organizations can embrace them.

The other important aspect is the security of proprietary data, which will be vulnerable in Internet-based services. Most supply chain applications are also integrated with other transactional systems, such as financials, inventory management and order management, and it will be difficult to manage one application in the cloud and another on the company's infrastructure. That might break the basic integration between different business processes, a benefit provided by any ERP system. If an organization has to decide which application in supply chain management is the best candidate for cloud services, the supply chain planning and demand forecasting business function has the most potential to use them. Others, such as distribution, production and sourcing, are closer to the execution process and hence tightly integrated with other applications like finance and order management.

The decision about which environment in the system landscape could use cloud services may be rather easy to take. Normally companies keep and maintain many environments to support the productive environment and new project landscapes. Sandbox, application development and configuration, quality assurance and productive environments are typically part of any productive landscape. At the same time, environments for learning and training, regression testing, and volume and stress testing are also common in large companies. Some of them can be better candidates for cloud services than in-house infrastructure, as those environments are not constantly in use but need full operational support and consume energy.
Regression testing, learning and training, volume testing and sandbox environments for supply chain applications can be potential candidates for cloud-based services. In the past, companies started embracing the outsourcing of labor to level uneven demand, resulting in contract labor. Similarly, today, if companies need huge computing resources for just a few days, they can consider cloud services, since the cost of using one computer for a thousand hours is about the same as using a thousand computers for one hour. It might take months to buy, add and install a thousand computers in a company's existing infrastructure, whereas the wait time is almost negligible with cloud services. Scaling up and down is easy to manage in a less costly manner, providing flexibility to organizations and creating a more innovation-friendly environment. The other important benefit of cloud computing is platform independence for the user community, as access is Internet-based. This allows users to access information from any device, such as a smartphone, laptop or desktop. Cloud computing also gives an organization the agility to reach untapped geographical areas by quickly rolling out the systems and applications required to enable business. This enables even small companies to do business globally.

The most important concern is security, as cloud services are based on the Internet, and we all know how vulnerable the Internet is today. Many government websites have been hacked in the past, resulting in the loss of proprietary data. Knowing this concern, many organizations, and specifically big organizations, may not take the risk of losing private data. Regulatory requirements may also prevent companies from using Internet-based computing services for proprietary data. Data in the public domain is already hosted and used on cloud resources. The big companies which provide cloud services, like Microsoft, Amazon, Google and AT&T, will certainly look into these concerns and try to alleviate or reduce the risk so that more companies can consider using their services. The future of cloud-based supply chain systems looks bright, and it will be interesting to watch how companies decide which parts of their supply chain applications go first and become part of the cloud.
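The elasticity claim made earlier - that one computer for a thousand hours costs about the same as a thousand computers for one hour - is simple arithmetic under pure pay-per-use pricing. The hourly rate below is an invented example figure:

```python
# Under pure pay-per-use pricing the charge is machines * hours * rate,
# so trading machines for time is cost-neutral. The rate is an assumed
# example price, not a real tariff.
RATE = 0.25  # price per machine-hour (assumed)

def charge(machines, hours, rate=RATE):
    return machines * hours * rate

# Same spend, but the answer arrives a thousand times sooner.
assert charge(1, 1000) == charge(1000, 1) == 250.0
```

In practice the equivalence is only approximate: coordination overhead, data transfer and imperfect parallelism all erode it.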

Social cloud computing (Dr T. Henderson)

Many Web 2.0 and cloud computing applications have a social aspect to them - e.g., groupware, e-mail, virtual worlds. The aim of this project is to investigate the use of social network analysis to improve such applications. For instance, is it possible to determine where to cache data depending on which members of a social network are more likely to access particular information?

Mobile data archiving in the cloud (Dr T. Henderson)

Data archives such as CRAWDAD aim to archive terabytes of wireless and mobile network data that are used by thousands of researchers around the world. More recent projects such as Movebank further this by allowing researchers to collect data in real time and conduct analysis on the data archive's servers. The aim of this project is to investigate the use of cloud computing for mobile network data archiving: this will involve a variety of topics in distributed systems, including network measurement, privacy, anonymisation/sanitisation, data protection and computation caching.

Abstractions of Cloud Computing (Dr K. Hammond)

The MapReduce programming model has proved to be highly successful for implementing a variety of real-world tasks on cloud computing systems. For example, Google has successfully demonstrated the use of MapReduce to automatically parallelise computations across the large-scale clusters that form the basic components of cloud computing systems [1], and IBM and Google are jointly promoting the Apache Hadoop system for university teaching and research. The key to this success is to develop powerful abstractions, or skeletons, that capture the essence of a pattern of computation, and which allow software algorithms to be plugged in to perform the required computations. By doing this, we can separate *what* the cloud computing system is required to do from *how* it achieves that effect. This allows tremendous flexibility in placing computations within a cloud, including automatic mapping and reconfiguration to match dynamically changing cloud computing resources.

Cloud Security (Dr I. Duncan)

A major concern in Cloud adoption is security, and the US Government has just announced a Cloud Computing Security Group (Mar 4 2009) in acknowledgement of the expected problems such networking will entail. However, basic network security is flawed at best. Even with modern protocols, hackers and worms can attack a system and create havoc within a few hours. Within a Cloud, the prospects for incursion are many and the rewards are rich. Architectures and applications must be protected, and security must be appropriate, emergent and adaptive. Should security be centralized or decentralized? Should one body manage security services? What security is necessary and sufficient? How do we deal with emergent issues? There are many areas of research within the topic of Cloud Security, from formal aspects to empirical research outlining novel techniques. Cloud Privacy and Trust are further related areas of potential research.

Cloud VV&T and Metrics (Dr I. Duncan)

Verification, Validation and Testing are all necessary to basic system evaluation and adoption, but when the system and data sources are distributed, these tasks are invariably done in an ad hoc or random manner. Normal test strategies for testing code, applications or architecture may not be applicable in a cloud; software developed for a non-distributed environment may not work in the same way in a cloud, and multiple thread, network and security protocols may inhibit normal working. The future of testing will be different under new environments; novel system testing strategies may be required to facilitate verification, and new metrics will be required to describe levels of system competence and satisfaction. There are many areas of research within the topic of Cloud VV&T, from formal verification through to empirical research and metric validation of multi-part or parallel analysis. Testing can be applied to systems, security, architecture models and other constructs within the Cloud environment. Failure analysis, taxonomies, error handling and recognition are all related areas of potential research.

Constraint-Based Cloud Management (Dr I. Miguel, Prof. A. Dearle & Dr G. Kirby)

A cloud may be viewed as comprising the union of a dynamically changing set of cloudlets, each of which provides some particular functionality. Each cloudlet runs on a potentially dynamically changing set of physical machines. A given machine may host parts of multiple cloudlets. The mappings between cloudlets and physical resources must be carefully managed in order to yield desirable high-level properties such as performance, dependability and efficient resource usage. To be practical, such management must be automatic - but producing timely, high-quality management decisions for a cloud of significant scale is a difficult task. The aim of this project is to apply constraint programming techniques to solve this problem efficiently.

The Green Cloud (Prof Saleem Bhatti)

Cloud computing requires the management of distributed resources across a heterogeneous computing environment. These resources typically are, from the user viewpoint, "always on". While techniques exist for distributing the compute resources and giving the user a viewpoint of "always on", this has the potential to be highly inefficient in terms of energy usage. Over the past few years there has been much activity in building "green" (energy efficient) equipment (computers, switches, storage) and energy efficient data centres. However, there has been little work in trying to model and demonstrate a capability that allows a heterogeneous, distributed compute cloud to use a management policy that also tries to be as energy efficient as possible.
This project will explore the use of virtualisation in system and network resources in order to minimise energy usage whilst still meeting the service requirements and operational constraints of a cloud.

Cloud Based Virtual Worlds (Colin Allison and Alan Miller)

Although Virtual Worlds such as Second Life are immensely popular (they have over fifteen million registered users), the performance of a region deteriorates quickly with the density and activity of participants. This deterioration manifests itself in significant delays in updating perspectives, rendering objects in the shared environment, moving avatars and responding to user-initiated actions. Indeed, once an "island" has over fifty avatars on it, it can become unusable. To what extent can Cloud Computing address this problem of dynamic provision of resources? In practice this research would use OpenSim, the open source version of Second Life, as the basis of experiments. One of the experiments may be to compare a federated set of OpenSim grids explicitly hosted on separate installations with a single large grid hosted on the Cloud.

Businesses are adopting cloud 2.5 times faster than IT operations, recognizing the key advantages of cloud in:

- Speeding innovation
- Accelerating business processes
- Reducing time to revenue

Enterprises and service providers are confronted with fragmented, piecemeal cloud solutions, which lead to the same complexity, security issues and management costs they are seeking to avoid. Cloud computing solutions need to address some critical challenges:

- Support for on-premise, public and hybrid clouds
- Unified service delivery across cloud and traditional IT
- A broad ecosystem of applications, hypervisors and operating systems
- Automated infrastructure-to-application lifecycle management
- Scaling to meet unpredictable demand
- End-to-end security
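"Scale to meet unpredictable demand" is usually realised with some autoscaling policy. A toy threshold-based sketch (all thresholds and the doubling/halving rule are invented) illustrates the idea:

```python
def desired_replicas(current, cpu_load, low=0.3, high=0.7, max_replicas=16):
    """Toy threshold-based autoscaling rule: grow when average CPU
    load is high, shrink when it is low. All thresholds and the
    doubling/halving steps are illustrative assumptions."""
    if cpu_load > high and current < max_replicas:
        return current * 2           # scale out aggressively under load
    if cpu_load < low and current > 1:
        return max(1, current // 2)  # scale in gently when idle
    return current

assert desired_replicas(2, 0.9) == 4   # heavy load: scale out
assert desired_replicas(8, 0.1) == 4   # idle: scale in
assert desired_replicas(4, 0.5) == 4   # steady state: no change
```

Real autoscalers add hysteresis, cooldown periods and multiple metrics, but the principle of driving replica count from measured demand is the same.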

Literature Survey:

Today, the most popular applications are Internet services with millions of users. Websites like Google, Yahoo! and Facebook receive millions of clicks daily. This generates terabytes of invaluable data which can be used to improve online advertising strategies and user satisfaction. Real-time capturing, storage and analysis of this data are common needs of all high-end online applications. To address these problems, a number of cloud computing technologies have emerged in the last few years. Cloud computing is a style of computing where dynamically scalable and virtualized resources are provided as a service over the Internet. The cloud refers to the datacenter hardware and software that supports a client's needs, often in the form of datastores and remotely hosted applications. These infrastructures enable companies to cut costs by eliminating the need for physical hardware, allowing companies to outsource data and computations on demand. Developers with innovative ideas for Internet services no longer need large capital outlays in hardware to deploy their services; this paradigm shift is transforming the IT industry. The operation of large-scale, commodity-computer datacenters was the key enabler of cloud computing, as these datacenters take advantage of economies of scale, allowing for decreases in the cost of electricity, bandwidth, operations and hardware.

EXECUTIVE SUMMARY

As organizations cope with a dynamically changing business environment, IT managers look to cloud computing as a means to maintain a flexible and scalable IT infrastructure that enables business agility. In June 2009, F5 Networks conducted a study examining the adoption of cloud computing by enterprise IT managers. The study found that although significant confusion regarding the definition of the cloud exists, IT managers are aggressively deploying cloud computing initiatives to accomplish business objectives. Additionally, the study found that widespread enterprise adoption is contingent upon solving access, security and performance concerns. Key findings of the 2009 Cloud Computing Research Report include the following:








- Cloud computing has gained critical mass
- Cloud computing is more than SaaS
- Core technologies for building the cloud
- Influencers go beyond IT


F5 Networks surveyed 250 companies. Applied Research was selected to perform the survey and targeted the following personnel:


- Enterprise IT (at least 2,500 employees)
- Manager, Director, VP, SVP (no CIOs)
  o Network
  o Information Security
  o Architecture
  o Development

The survey was conducted via telephone and was performed in June and July 2009.


F5 Networks spoke with 250 companies.

All companies included in the survey had at least 2,500 employees worldwide, with a median of 75,000 employees. 37 percent of respondents were IT managers, 24 percent were VPs, 23 percent were IT directors, and 16 percent were SVPs. No CIOs were included in this study. Of all respondents, 46 percent manage an IT department, 41 percent work in an IT department, and the IT department reports to the remaining 13 percent.


Cloud computing is pervasive within the enterprise, but respondents had little agreement on how to define the term. Applied Research tested six industry definitions of cloud computing and found the study participants were unable to choose any of them as being "just right". The study tallied how many respondents marked "Almost there" or "This is perfect" for each definition. Based on definitions reported by respondents, the two most popular, each with 68 percent, were:

Cloud computing is on-demand access to virtualized IT resources that are housed outside of your own data center, shared by others, simple to use, paid for via subscription and accessed over the Web.

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the cloud.


Defining the cloud

F5 Networks also conducted a focus group of IT managers, network architects and cloud service providers in order to establish a firm definition of cloud computing. Focus group participants debated the merits of each definition in the survey, and agreed upon the following as a standard definition for cloud computing: Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them. Furthermore, cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.


Though IT managers may be confused about the exact definition of cloud computing, the technology has become widespread. 99 percent of respondents claim they are currently discussing or implementing public and private cloud computing solutions. 82 percent of respondents report they are in some stage of trial, implementation, or use of public clouds. Furthermore, 83 percent of respondents claim they are in some stage of trial, implementation, or use of private clouds.

Budgetary allocation

As managers move to incorporate cloud computing into their IT strategy, budgets are being adjusted to accommodate the shift. 66 percent of respondents report they have a dedicated budget for the cloud. Additionally, 71 percent of respondents expect cloud computing budgets to grow over the next two years.


IT managers commonly equate Cloud Computing with Software-as-a-Service (SaaS). Although SaaS is an important element of cloud computing, IT managers do not see it as the most important element.

Three-fourths of respondents reported that Platform-as-a-Service (PaaS) is usually or always included in the cloud. Additionally, two-thirds said Infrastructure-as-a-Service (IaaS) is usually or always included in the cloud. By way of comparison, three-fifths said SaaS was usually or always included in a cloud deployment.


As budgets for cloud computing increase, IT managers are examining critical technologies for building the infrastructure behind the cloud. 90 percent of respondents named access control as somewhat/very important for building the cloud. An additional 89 percent listed network security as a core technology. 88 percent of respondents listed both server and storage virtualization as essential technologies in the cloud.

Needs driving the cloud

The key cloud computing technologies listed by respondents fall in line with the needs that drive IT managers' interest in the cloud. 77 percent of respondents reported that efficiency is a driver for public clouds. Additionally, respondents claim that reducing capital costs (68 percent) and easing staffing issues (61 percent) are key drivers behind public clouds. For private cloud computing, respondents listed reducing capital cost (63 percent), agility (50 percent) and easing staffing issues (50 percent) as drivers.


Though IT is intrinsically a part of cloud computing, it is not the only influencer over an organization's cloud computing policies. Survey respondents said that IT generally controls the cloud computing budget (64 percent, compared to 13 percent each for application development and network architects). According to respondents, the top influencers for public clouds are IT (45 percent), application development (41 percent), and LOB business unit stakeholders (41 percent). On a similar note, respondents said the top three influencers in the implementation process for private clouds are IT (45 percent), LOB business unit stakeholders (36 percent), and application development teams (24 percent).


Organizations should look beyond SaaS offerings when evaluating cloud computing options. IaaS and PaaS services are key cloud computing technologies that can be leveraged to accomplish business objectives. Cloud computing touches many different technologies; organizations should invest time understanding how the cloud will affect access control, network security, virtualization, and other core network components before implementing a cloud environment. Cloud computing deployments should be a cross-functional effort, with IT, application developers, network architects, and other critical business stakeholders weighing in prior to cloud purchasing decisions.

II. Advantages of Cloud Computing
Cloud computing offers many advantages:
Cost- In the cloud, the user need not own the resources; the user pays only per usage, in terms of time, storage, and services. This reduces the cost of owning the infrastructure [1], [2].
Performance- Performance is improved because the cloud is not a single computer but a large network of powerful computers, resulting in high processing power [1], [3], [5].
Freedom from upgrades and maintenance- The cloud infrastructure is maintained and upgraded by the cloud service provider [2], [3].
Scalability- The user can request more resources if the area of application grows or new functionality is added. Conversely, if requirements shrink, the user can request that resources be reduced as well [3], [4].
Speedy implementation- Implementing the cloud for an application may take days, or sometimes only hours. You just need a valid credit card and must complete some online registration formalities [2], [3], [5].
It's green- Cloud computing is a green technology, since it enables resource sharing among users, so individual users do not require large data centers that consume a lot of power [4], [5].
Mobility- We don't need to carry our personal computer, because we can access our documents anytime, anywhere [2], [3].
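The cost advantage above amounts to simple arithmetic: pay-per-use billing replaces an upfront capital expense plus ongoing upkeep. The sketch below illustrates the comparison with entirely hypothetical rates (hardware price, hourly cloud rate, and usage pattern are invented for illustration, not taken from the survey).

```python
# Illustrative (hypothetical) comparison of pay-per-use cloud cost
# versus owning equivalent on-premises infrastructure.

def on_premises_cost(hardware_capex, monthly_upkeep, months):
    """Total cost of owning hardware: upfront purchase plus maintenance."""
    return hardware_capex + monthly_upkeep * months

def cloud_cost(hourly_rate, hours_used_per_month, months):
    """Pay-per-use: the user is billed only for hours actually consumed."""
    return hourly_rate * hours_used_per_month * months

if __name__ == "__main__":
    months = 24
    own = on_premises_cost(hardware_capex=20_000, monthly_upkeep=500, months=months)
    # A workload that runs 8 hours a day, ~22 working days a month.
    rent = cloud_cost(hourly_rate=1.50, hours_used_per_month=8 * 22, months=months)
    print(f"own:  ${own:,.2f}")   # own:  $32,000.00
    print(f"rent: ${rent:,.2f}")  # rent: $6,336.00
```

The comparison flips, of course, for workloads that run around the clock, which is why the trade-off depends on the usage pattern rather than favoring the cloud unconditionally.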

Increased storage capacity- In cloud computing we have vast resources for storing data, because storage is spread across many locations in the cloud. Moreover, since data in the cloud can be automatically duplicated, it is safer [1], [3].
III. Disadvantages of Cloud Computing
There are certain security threats and issues in implementing cloud computing:
Data loss- Customers are responsible for the security of their data; if data is lost, the customer is in deep trouble [1], [3], [4], [7].
Account hijacking- Since no native APIs are used for login and anyone can easily register as a cloud service user, the chances of an account being hijacked are very high [1], [3].
Control over the process- In cloud computing, the user has little or no control over the services [3], [4].
Insider attacks by the cloud service provider- A fraudulent employee may carry out phishing and steal the data [1], [4], [7].
Legal aspects- In case of data loss, the user may suffer if there is no Service Level Agreement (SLA); the loss falls on the user, because he is unable to raise claims against the cloud service provider [1], [4], [7].
Jurisdiction- If the service provider and the user are in different countries, then in case of a dispute, which country's laws are enforced? This is a big drawback from the user's point of view, since the user opted for cloud computing to save costs, and the cost of legal services is very high [1], [4], [7].
Portability/migration from one service provider to another- Different service providers have different architectures, so it is difficult for a user to migrate from one cloud service provider to another [3], [4], [7].
Reliability of the cloud service provider- There is a lack of standards for cloud computing, so the reliability of a cloud service provider depends largely on its past performance in similar or other fields [4], [7].
Auditability- The cloud service provider is not under any kind of audit net. The provider may have outsourced some services to a third party, its functioning may not be transparent, and the user cannot inspect the process [3], [4], [7], [8].
Quality of Service (QoS) in clouds- At present, the focus of cloud service providers is on cost-effectiveness and fast services; therefore, QoS in cloud computing remains an unattended area [4], [7], [8].
IV. The trade-off between cost and security
The following approaches may be adopted to make cloud computing more secure and convenient while keeping the cost of implementing the cloud low [4], [5], [7], [8]:
When a user registers for any cloud computing service, a strict validation check of the user's background should be applied.
The cloud service provider and the user must sign a Service Level Agreement (SLA), clearly defining the roles and responsibilities of both parties and the terms and conditions of contract termination.
Accountability for any data loss caused by the cloud service provider needs to be defined, and data backup measures should be in place.
The cloud service provider must enforce a strict authentication and validation policy for employees.
There should be an audit process for cloud service providers.
A minimal set of standards for cloud computing should be framed.
Cloud service providers should be accredited.
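Several of the measures above (background validation, a signed SLA, strict authentication) can be thought of as gating conditions that a provider checks before provisioning an account. The sketch below is a hypothetical illustration of that gating logic; the `Applicant` type, field names, and `can_provision` function are all invented for this example, not part of any real provider's API.

```python
# Hypothetical sketch: a provider refuses to provision a cloud account
# until the applicant passes a background validation, has signed an SLA,
# and has enrolled in strict (multi-factor) authentication.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    background_verified: bool   # result of the provider's validation check
    sla_signed: bool            # Service Level Agreement signed by both parties
    mfa_enrolled: bool          # strict authentication policy satisfied

def can_provision(applicant: Applicant) -> tuple[bool, list[str]]:
    """Return (approved, list of unmet requirements)."""
    unmet = []
    if not applicant.background_verified:
        unmet.append("background validation pending")
    if not applicant.sla_signed:
        unmet.append("SLA not signed")
    if not applicant.mfa_enrolled:
        unmet.append("multi-factor authentication not enrolled")
    return (len(unmet) == 0, unmet)
```

Returning the list of unmet requirements, rather than a bare yes/no, mirrors the accountability theme of the section: both parties can see exactly which contractual precondition blocked provisioning.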