
A report on

Investigating Data Center Performance of Various Cloud Providers in Terms of Cost, Performance and Quality of Service

Submitted in partial fulfilment of the project assignment

Submitted By:

NAME PRN
Ananya Chauhan 20030241154

Khyati Dixit 20030241170

Vinayak Magdum 20030241172

Mayank Arora 20030241174

Mrityunjay Kumar Singh 20030241176

Division : D
Batch : 2020-22
Subject : IT Infrastructure
Semester : 1st
Date : 31-10-2020

Submitted to : Dr. Dhanya Pramod


Dr. Hemraj S. L.

1. Introduction

A data center is the physical facility that makes enterprise computing possible. It houses the following:

● Enterprise computer systems.
● Networking equipment and associated hardware needed to keep those systems consistently connected to the Internet or other business networks.
● Power supplies and subsystems, electrical switches, backup generators, and environmental controls (such as air conditioning and server cooling equipment) that protect the data center infrastructure and keep it up and running.

A data center is central to an organization's IT operations. It is the repository for most business-critical software, where most business knowledge is stored, processed, and disseminated to users. Preserving data center security and reliability is therefore important to an enterprise's operational continuity: its ability to conduct business without interruption.

There are several types of data centers and models of service available. Their categorization depends on whether
they are operated by one or more companies, how they fit into the topology of other data centers (if they fit), what
computer and storage systems they use, and even their energy efficiency. Four major types of data centers are:

● Enterprise Data Center: These are developed, owned, and controlled by businesses and are customized
for their end users. They are most frequently located on the company's campus.

● Managed Services Data Center: These data centers are operated on behalf of a corporation by a third
party (or a managed services provider). Instead of purchasing it, the firm rents the equipment and facilities.

● Colocation Data Center: A business leases space in colocation ("colo") data centers inside a data center
operated by others and located off company premises. The colocation data center hosts the infrastructure:
building, cooling, bandwidth, security, etc., while the components, including servers, storage, and firewalls,
are provided and operated by the company.

● Cloud Data Center: In this off-premises data center type, a cloud service provider such as Amazon Web Services (AWS), Microsoft Azure, IBM Cloud, or another public cloud provider hosts data and applications.

2. Cloud Data Centers

A cloud data center is not physically located at the customer's site; it is hosted and accessed online. This makes it possible for the data stored in these data centers to be fragmented and replicated automatically. There are three types of cloud data centers: public, private and hybrid.

• Public Data Center: These data centers deliver computing and information services to the general public via the internet; access is open to anyone rather than restricted to a single organization.
• Private Data Center: These data centers are intended for a single organization, and accessing the data requires authentication from the organization itself. The data stored in them is therefore more organization specific, and access typically requires some form of VPN connectivity through an embedded firewall rule.

• Hybrid Data Center: These data centers extend the private cloud environment into the public environment according to consumer demand. The main advantage of this model is that compliance is maintained on the private side while public resources are used wherever the workload demands it. Organizations therefore tend to use hybrid cloud data centers because they can maximize internal resources without risking overload failure.

As businesses transfer their data and workloads to the cloud, cloud data centers offer capabilities comparable to best-in-class on-premises data centers. The cloud customer no longer needs to plan, build, manage, power, staff, or protect a physical building. Instead, the cloud provider assumes responsibility for delivering highly available, fault-tolerant computing resources as a service, freeing enterprise cloud customers to spend more time on their businesses. As cloud computing adoption continues to expand, cloud data centers host an ever-larger percentage of business workloads. According to research firm Gartner, 80 percent of businesses will have closed their traditional on-site data centers by 2025. The cloud provider usually offers consumers either shared access to virtualized computing resources (e.g. virtual machines (VMs)) or dedicated access to physical compute, storage, and networking hardware.

Figure 1: Data Center Architecture

2.1 Advantages of cloud data centers:

Via economies of scale, cloud providers can offer tenants more up-to-date infrastructure, state-of-the-art security, and greater connectivity and durability than tenant companies could afford to build in their own data centers. The following are some of the key advantages of cloud data centers:

● Efficient use of resources: In public cloud architectures, many tenants share the same physical resources. Individual tenants therefore do not have to purchase, deploy and maintain computing and storage resources merely to have capacity available for periods of high demand or for failover.
● Rapid implementation and scalability: Resources can be provisioned in just a few clicks, so introducing new services takes a small fraction of the time an on-site deployment would.

● Reduced capital expenditure (CAPEX) costs: Because cloud tenants pay for services on an as-needed basis, usually through a subscription model, there is no need for substantial upfront investments in new hardware.
● Freeing IT staff: The cloud provider is responsible for securing and maintaining the infrastructure, freeing customers' IT departments from routine hardware maintenance tasks.
● Access to a global data center network: Major cloud providers have spread their data centers across multiple regions and continents. This allows customers to satisfy their safety and regulatory requirements and keeps performance consistent for their customer base, wherever it is located in the world. A lower bound on global network performance can be estimated by comparing the distance the data must travel with the speed at which light travels in optical fiber, as in the sketch below.
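
As a back-of-the-envelope illustration of that last point, here is a minimal Python sketch that bounds the best possible round-trip time from fiber distance alone; the 10,000 km distance is an assumed example, not a measured route:

    # Lower bound on network round-trip time (RTT) from distance alone.
    # Light in optical fiber travels at roughly 2/3 of its vacuum speed.
    SPEED_OF_LIGHT_KM_S = 299_792                      # vacuum speed, km/s
    FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3     # ~200,000 km/s

    def min_rtt_ms(distance_km: float) -> float:
        """Theoretical minimum round-trip time, in milliseconds."""
        return 2 * distance_km / FIBER_SPEED_KM_S * 1000

    # A user ~10,000 km from the nearest region can never see an RTT
    # below ~100 ms, no matter how fast the data center itself is.
    print(f"{min_rtt_ms(10_000):.0f} ms")

No real network follows a perfect great-circle fiber path, so actual latencies are higher; the point is that distance alone sets a hard floor.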

3. Overview of Data Center Cost Calculation:

The cost of a data center can be estimated through the following steps.

Step 1: Calculate the total load of the servers in the data center, based on the number of servers running critical workloads.

Step 2: Based on the total load from Step 1, calculate the future critical load as envisioned in the organization's data center vision statement.

Step 3: Once the future load is known, calculate the peak power load and the peak power adjustment.

Step 4: Based on the inefficiency factor of the UPS and batteries, calculate the total UPS power load.

Step 5: Calculate the infrastructure load from the floor area and a lighting factor; this yields the total lighting load.

Step 6: Calculate the total cooling load from the cooling efficiency factor of the data center.

Step 7: Finally, calculate the total power consumption by adding all the power components from Steps 1 to 6. A sketch of the calculation follows below.
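
The following Python sketch strings the seven steps together. All of the factors (growth, peak adjustment, UPS loss, lighting wattage, cooling ratio) are illustrative assumptions chosen for the example, not industry-standard values:

    # Hypothetical sketch of the 7-step power estimate described above.
    def total_power_kw(server_load_kw: float,
                       growth_factor: float = 1.5,        # Step 2: future critical load
                       peak_factor: float = 1.2,          # Step 3: peak power adjustment
                       ups_loss: float = 0.10,            # Step 4: UPS/battery inefficiency
                       floor_area_sqft: float = 2_000,    # Step 5: floor area
                       lighting_w_per_sqft: float = 2.0,  # Step 5: lighting factor
                       cooling_ratio: float = 0.8) -> float:  # Step 6: cooling factor
        critical_kw = server_load_kw * growth_factor              # Steps 1-2
        peak_kw = critical_kw * peak_factor                       # Step 3
        ups_kw = peak_kw * (1 + ups_loss)                         # Step 4
        lighting_kw = floor_area_sqft * lighting_w_per_sqft / 1000  # Step 5
        cooling_kw = (ups_kw + lighting_kw) * cooling_ratio       # Step 6
        return ups_kw + lighting_kw + cooling_kw                  # Step 7: total

    print(f"{total_power_kw(100):.0f} kW total draw for 100 kW of servers")  # ~364 kW

Multiplying the total by the local electricity tariff then gives the power component of the data center's operating cost.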

Figure 2: Typical Data Center Cost

4. Cost and Performance Analysis:

We will analyze Amazon AWS, Microsoft Azure and Google Cloud Platform on the basis of their performance and cost.

4.1 Google Data Center:

Google owns and runs data centers around the world, helping to keep the internet running 24/7. Its relentless emphasis on innovation has made its data centers some of the world's most high-performing, secure, efficient and effective. Google data centers are large facilities that Google uses to provide its services; they combine large drives, rack-based compute nodes, internal and external networking, environmental controls (mainly cooling and dehumidification) and operations software (particularly for load balancing and fault tolerance).

There is no official figure for how many servers Google's data centers contain, but Gartner estimated in a July 2016 article that Google had 2.5 million servers at the time. This number changes as the company adds capacity and refreshes its hardware.

4.1.1 The Need for Google Cloud:

Most businesses use data centers because they provide cost predictability, hardware security, and power. Running and managing services in a data center, however, requires a lot of overhead, including:

● Capacity planning so that resources are used efficiently.
● Physical security to protect assets, networks and operating systems.
● Network infrastructure components such as wiring, switches, routers, firewalls, and load balancers.
● A support team of skilled employees to perform installation and maintenance and to address issues.
● Suitable bandwidth for peak load.
● Physical infrastructure, including equipment and power.

4.1.2 Data Security:

Protection is part of the DNA of Google's data centers. They custom-build servers exclusively for their own data centers, never selling or distributing them externally. And their industry-leading security team works around the world 24/7, making their facilities among the safest places for user data.

They already have robust disaster recovery measures in place. For example, in the event of a fire or other disruption, data access is shifted instantly and seamlessly to another data center so that users can keep working without interruption. And in the event of a power outage, emergency backup generators keep the data centers running.

Instead of storing each user's data on a single machine or set of machines, they distribute all data, including their own, across many computers in different locations. To eliminate any single point of failure, they then chunk and replicate the data over multiple systems. As an extra security measure, they give these data chunks random names, making them unreadable to the human eye; a simplified sketch follows below.
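
The following Python sketch illustrates the idea in miniature. It is a simplified illustration of chunking, random naming and replication, not Google's actual storage system; the chunk size, location names and replica count are arbitrary assumptions:

    # Simplified illustration of chunk + randomly name + replicate.
    import itertools
    import secrets

    LOCATIONS = ["dc-a", "dc-b", "dc-c"]   # hypothetical data center sites
    REPLICAS = 2                           # copies kept of every chunk
    CHUNK_SIZE = 4                         # bytes; tiny, for demonstration

    def store(data: bytes) -> dict:
        """Split data into chunks, give each a random opaque name,
        and place every chunk in REPLICAS distinct locations."""
        placement = {}
        rotation = itertools.cycle(range(len(LOCATIONS)))
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk = data[offset:offset + CHUNK_SIZE]
            name = secrets.token_hex(8)    # random name, meaningless to a human
            start = next(rotation)
            sites = [LOCATIONS[(start + r) % len(LOCATIONS)] for r in range(REPLICAS)]
            placement[name] = (chunk, sites)
        return placement

    for name, (chunk, sites) in store(b"user business data").items():
        print(name, sites)                 # opaque names, two sites per chunk

Losing any single machine, or any single site, leaves at least one copy of every chunk intact.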

Users' vital data is backed up automatically as they work, so if a machine crashes or is stolen, the user can be up and running again in seconds.

Finally, in their data centers, they track the location and status of each hard drive rigorously. In a comprehensive,
multi-step procedure, they destroy hard drives that have reached the end of their lives to prevent access to data.

To prevent any unauthorized access to user data, their data centers are secured by several layers of protection: perimeter security systems, broad camera coverage, biometric verification, and 24/7 guard personnel. In addition, they implement a stringent access and security policy at the data centers and ensure that all employees are trained to be security-oriented.

They also have local and regional security operations centers (SOCs) covering the entire fleet of data centers. These SOCs track and respond to alarms at all facilities and actively monitor local and global events that could affect operations at the data centers. To ensure they are always prepared to adapt to any situation, the security teams run year-round testing, alongside a rigorous business risk management program and routine monitoring to proactively evaluate and minimize any risks to the data centers.

4.1.3 Efficiency:

When people use Google products, the servers in Google's data centers do the work for them, around the clock and around the globe, with each server serving many products at a time. That is "the cloud." By keeping their servers busy, they can do more with less: more searches, more Gmail, and more YouTube videos with fewer servers and fewer resources. They have also worked hard to minimize the environmental impact of these services, so that using Google products is better for the planet.

Because the cloud supports multiple products at a time, it can share resources across many users more effectively. That means Google can do more with fewer resources, and corporations can too. In 2013, Lawrence Berkeley National Laboratory published research showing that moving all office workers in the United States to the cloud could reduce the energy consumed by information technology by up to 87 percent.

A case study directly linked to Google products, of the U.S. General Services Administration (GSA), showed that the agency was able to reduce office computing costs, energy usage, and carbon emissions by 65-90 percent by moving to Google Apps. In addition, Google's research has shown that organizations using Gmail have reduced the environmental impact of their email service by up to 98 percent relative to running email on local servers.

Thanks to these energy conservation efforts, the cloud is better for the environment, which means that companies using Google's cloud-based products are also greener.

4.1.4 How Google does it:

Google data centers use far less energy than the average data center. They raise the operating temperature to 80°F, use outside air for cooling, and build custom servers. To help drive the entire industry forward, they also share detailed performance data.

4.1.5 Measuring and improving their energy use:

Google is focused on reducing its energy use while serving the web's exponential growth. Most data centers use as much non-computing or "overhead" energy (for cooling and power conversion) as they do to power their servers. Google has reduced this overhead to just 11%, so most of the energy it uses powers the machines that directly serve Google searches and products. It takes comprehensive measurements in order to keep doing more with less: serving more users while consuming fewer resources.

4.1.6 Measuring Power Usage Effectiveness (PUE):

Google's PUE estimates cover the efficiency of its entire fleet of data centers worldwide, not just a single best facility, and include overhead from the UPS systems, generators and cooling infrastructure of every facility. GCP also counts all sources of overhead in its effectiveness measure; under the loosest interpretation of the Green Grid's PUE measurement criteria, a much lower number could be reported.

Under that looser, widely used interpretation, they could report a PUE of less than 1.06. They hold themselves to a higher standard, however, because they believe it is better to measure and optimize everything on the platform, not just part of it. Therefore, they report a comprehensive trailing twelve-month (TTM) PUE of 1.11 across all their large data centers, in all seasons and including all sources of overhead (once a facility achieves stable operations).

Since Google first published results in 2008, its fleet-wide PUE has fallen dramatically. The TTM energy-weighted average PUE across all Google data centers is 1.11, making them among the world's most efficient; a worked example of the PUE calculation follows below.
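
To make the metric concrete: PUE is total facility energy divided by IT equipment energy, so a PUE of 1.11 means only 11% overhead on top of the useful IT load. The kWh figures below are illustrative assumptions, not Google's published measurements:

    # PUE = total facility energy / IT equipment energy
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        return total_facility_kwh / it_equipment_kwh

    it_load = 1_000    # kWh consumed by servers, storage and networking
    overhead = 110     # kWh for cooling, power conversion, lighting
    print(pue(it_load + overhead, it_load))   # -> 1.11

    # For comparison, a facility at the industry-average PUE of 1.8 would
    # burn 800 kWh of overhead to deliver the same 1,000 kWh of IT work.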

4.1.7 Environmental impact:

Google's most efficient data center uses only fresh-air cooling to operate at 35°C (95°F), requiring no electrically driven air conditioning.

Google announced in December 2016 that, beginning in 2017, 100% renewable energy would power all its data centers as well as all its offices. The pledge will make Google "the world's largest clean energy corporate buyer with commitments of 2.6 gigawatts (2,600 megawatts) of wind and solar energy."

4.2 Amazon Web Services (AWS):

AWS pioneered cloud computing in 2006, creating a cloud infrastructure that makes it simpler and more reliable for
users to grow and innovate. They are continuously innovating the infrastructure and structures of their data centers to
protect them from man-made and natural threats. They implement controls, establish automated processes, and

undergo third-party audits to confirm security and compliance. As a result, the most highly-regulated organizations in
the world trust AWS every day.

Amazon Web Services (AWS) provides consumers with the flexibility to satisfy their business needs and access to cloud services at reasonable rates. Whether it's a small startup or a large organization, all businesses can leverage AWS features and functionality to boost efficiency and increase productivity.

4.2.1 Advantages of AWS:

● Availability: AWS delivers the highest network availability of any cloud provider, with 7x fewer downtime hours than the next largest cloud provider. Each Region is fully isolated and consists of multiple AZs (Availability Zones), which are fully isolated partitions of the AWS infrastructure. You can partition applications across multiple AZs in the same Region to isolate problems and achieve high availability. Furthermore, the AWS control planes and the AWS Management Console are distributed across Regions and include Regional API endpoints, which are designed to operate securely for at least 24 hours if isolated from the global control-plane functions, without requiring customers to access the Region or its API endpoints via external networks during any isolation.

● Security: Security at AWS starts with the core infrastructure. Custom-built for the cloud and designed to meet the world's most rigorous security standards, the infrastructure is monitored 24/7 to help ensure the confidentiality, integrity, and availability of customer data. All data flowing through the AWS global network that links its data centers and Regions is automatically encrypted at the physical layer before it leaves AWS secured facilities. You can build on the most secure global infrastructure, knowing that you always control your data, including the ability to encrypt it, move it, and manage retention at any time.

● Performance: The AWS Global Infrastructure is built for performance. AWS Regions offer low latency, low packet loss, and high overall network quality. This is achieved with a fully redundant 100 GbE fiber network backbone that provides many terabits of capacity between Regions. AWS Local Zones and AWS Wavelength, together with telco providers, deliver performance for applications that require single-digit-millisecond latency by bringing AWS infrastructure and services closer to end users and connected 5G devices. Whatever your application needs, you can quickly spin up resources as you need them, deploying hundreds or even thousands of servers in minutes.

● Global Footprint: AWS has the largest global infrastructure footprint of any provider, and at a substantial
pace, this footprint is constantly growing. You have the flexibility to choose a technology infrastructure that
is nearest to your primary consumer target while deploying your software and workloads to the cloud. On the
cloud that offers the best support for the widest range of applications, including those with the maximum
throughput and lowest latency requirements, you can run your workloads. And you can use AWS Ground
Station, which provides satellite antennas in close proximity to AWS Infrastructure Regions, if your data
lives off of this earth.

● Scalability: The AWS Global Infrastructure enables companies to be extremely flexible and take advantage of the conceptually limitless scalability of the cloud. Customers used to over-provision to ensure they had enough capacity to handle their business operations at peak activity. Now they can provision the amount of resources they actually need, knowing they can scale up or down instantly along with their company's needs, which often lowers costs and improves their ability to meet customer demand. Companies can quickly spin up resources as they need them, deploying hundreds or even thousands of servers in minutes.

● Flexibility: The AWS Global Infrastructure gives you the freedom to choose how and where you want your workloads to run, while using the same network, control planes, APIs, and AWS services. To run applications globally, you can choose from any of the AWS Regions and AZs. To run applications with single-digit-millisecond latency for mobile devices and end users, you can choose AWS Local Zones or AWS Wavelength. Or you can choose AWS Outposts if you plan to run your applications on-premises.

4.2.2 Controls:

AWS data centers are secure by design and our controls make that possible. Before we develop a data center, we spend
countless hours considering potential threats and designing, implementing, and reviewing controls to ensure that the
systems, infrastructure, and people we deploy counteract risk. To help you meet your own audit and regulatory criteria,
we are providing you with insight into some of our physical and environmental controls below.

4.3 Microsoft Azure:

Microsoft Corp. offers more than 200 cloud services, including Bing, MSN, Outlook.com, Office 365, OneDrive, Skype, Xbox Live and the Microsoft Azure platform. These services are housed in Microsoft's cloud infrastructure, consisting of more than 100 globally distributed data centers, edge computing nodes, and service operations centers. This system is linked by one of the world's largest multi-terabit global networks, with an extensive dark-fiber footprint.

Microsoft both operates its own data centers and leases capacity to serve customers in regions around the world. Microsoft's global data center network comprises more than one million servers in more than 100 data centers, in locations including Amsterdam; Australia; Boydton, VA; Brazil; Cheyenne, WY; China; Chicago, IL; Des Moines, IA; Dublin, Ireland; Hong Kong; Japan; Quincy, WA; and San Antonio, TX.

4.3.1 What Azure Does:

Microsoft offers customers 24x7x365 cloud services, and the Microsoft Cloud Technology and Operations team designs, develops, runs and helps protect all aspects of the infrastructure. Microsoft has invested more than $15 billion in this infrastructure since opening its first data center in 1989 and remains focused on providing secure, scalable and security-enhanced online services while managing operations and costs effectively as it expands.

4.3.2 Microsoft’s cloud infrastructure by the numbers:

● Microsoft opened its first data center on its campus in Redmond, Washington in 1989.
● Its cloud services are currently available in 90 markets.
● Microsoft's data centers deliver more than 200 online services 24x7x365.
● Microsoft has invested over $15 billion in building its massive cloud infrastructure.
● Its data centers host more than 1 million servers.
● Its global cloud infrastructure portfolio includes more than 100 data centers.
● More than 30 trillion data items are stored in its data centers.
● Its networks process an average of more than 1.5 million requests per second.
● Microsoft's fiber-optic network, the biggest in North America, is long enough to reach the moon and back three times.

● The average PUE of Microsoft's new data centers is 1.125. Power usage effectiveness (PUE) is a measure of data center energy efficiency: the ratio of total facility power, including the power and cooling overhead needed to sustain the server load, to the power consumed by the IT load itself. The industry average is 1.8.
● As part of its carbon-neutral target, Microsoft's green power purchases, 2.3 billion kWh, are the third largest of any U.S. corporation, according to the U.S. Environmental Protection Agency.
● Microsoft has invested in 16 carbon-offset projects, in countries including the United States, Brazil, Cambodia, China, Guatemala, India, Kenya, Mongolia, Peru and Turkey (including an investment announced on November 4, 2013 in the Keechi wind project).
● 100 percent of its servers and electrical equipment is sent for recycling and/or resale to a third-party vendor after being securely decommissioned.
● Microsoft began sharing its cloud-infrastructure best practices with the industry in 2007, including a white paper on its top ten business practices for environmentally sustainable data centers.

5. Cost Price Overview:

This review focuses on the three major providers of public infrastructure as a service (IaaS) and platform as a service (PaaS): Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). A good place to start a comparison of cloud pricing is the vendors' respective websites. Here is a brief summary of each vendor's pricing philosophy and key discounting schemes:

5.1 AWS Pricing:

Amazon offers four basic pricing arrangements:

● On Demand: On-demand pricing is the regular list price you pay if you sign up for AWS and start using it without any discounts. You can think of it as roughly equivalent to the MSRP on a vehicle or large appliance.

● Spot Instances: Spot Instances offer discounts if you don't care when your workloads run. If you have a batch job without a particular deadline, AWS will run it when it has spare capacity, and you get a discount of up to 90% off the on-demand price.

● Reserved Instances are for companies that know they will need a lot of cloud capacity. Businesses can commit to a one-year or three-year term for a discount of up to 75 percent off the on-demand price.

● Dedicated Hosts are for companies that have already paid costly license fees for applications. Depending on their software contracts, companies can sometimes reduce their software costs by running on a dedicated host rather than on-demand servers. AWS prices for dedicated hosts are the same as for on-demand cloud hosting unless you reserve instances.

AWS also offers a free tier of limited services, usually for a fixed time period, at no charge. A sketch of how these discount models affect the effective price follows below.
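
As a hedged illustration of how these pricing models change the effective price, the sketch below applies the maximum quoted discounts to a hypothetical $0.10/hour list price; real discounts vary by instance type, region, term and the current spot market:

    # Illustrative effective-price calculator for the AWS pricing models above.
    ON_DEMAND_HOURLY = 0.10        # hypothetical list price, $/hour
    HOURS_IN_MONTH = 730           # ~24 * 365 / 12

    PRICING_MODELS = {
        "on_demand": 0.00,         # no discount
        "spot": 0.90,              # up to 90% off, interruptible workloads
        "reserved_3yr": 0.75,      # up to 75% off with a multi-year commitment
    }

    for model, max_discount in PRICING_MODELS.items():
        effective = ON_DEMAND_HOURLY * (1 - max_discount)
        print(f"{model:>12}: ${effective:.3f}/hr = ${effective * HOURS_IN_MONTH:.2f}/month")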

5.2 Microsoft Azure Pricing:

Microsoft, like Amazon, publishes its rates, but it also offers a number of cloud pricing discounts to various categories of customers.

● Anyone can save money by signing up for reserved VM instances with a one- or three-year commitment. Unlike some of its other discounts, Microsoft publishes these reserved VM prices.
● With the Azure Hybrid Benefit, companies that run Microsoft software on-site in their own data centers may be able to save. Discounts of up to 40 percent are possible, but they depend on which software you run in your own data centers and which you run in the cloud.
● Developers can get special rates for Azure instances used for dev/test purposes. These discounts apply to individual Visual Studio subscribers as well as larger teams.
● Large companies with a Microsoft Enterprise Agreement (EA) can negotiate discounts on cloud computing services. These discounts, however, are not published, so it is hard to determine exactly what large corporations pay for Azure.

Like AWS, Azure also has a free tier with minimal services, typically for a fixed time period, that are available at no
charge.

5.3 GCP Pricing:

Google Cloud Platform promises "customer-friendly pricing" and competes aggressively on price against AWS and Azure. GCP offers three different types of discounts off its list prices:

● Sustained Use Discounts kick in automatically if you keep using the same instances for most of a given month. They can reach up to 30 percent off the list price (a sketch of how they accrue follows after this list).
● Preemptible VM Instances are similar to AWS Spot Instances. These are for batch jobs that can be interrupted and resumed later, with discounts of up to 80% off list prices.
● Committed Use Discounts are similar to AWS Reserved Instances or Azure Reserved VMs. Clients who make a long-term commitment to GCP can save up to 57 percent.

GCP also provides a free tier that includes some services that are free for a year and some that are always free.
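
To show how a sustained use discount can accrue automatically, the sketch below charges later hours in the month at progressively deeper discounts. The tier thresholds and percentages are simplified assumptions for illustration, not Google's actual schedule; they are chosen so that a full month nets out near the quoted 30 percent:

    # Illustrative sustained-use discount: the longer an instance runs in a
    # month, the cheaper its later hours become.
    HOURS_IN_MONTH = 730
    LIST_HOURLY = 0.10             # hypothetical list price, $/hour

    # (fraction-of-month threshold, discount applied to hours in that band)
    TIERS = [(0.25, 0.00), (0.50, 0.20), (0.75, 0.40), (1.00, 0.60)]

    def sustained_use_cost(hours_run: float) -> float:
        cost, prev_fraction = 0.0, 0.0
        for threshold, discount in TIERS:
            band = min(hours_run, threshold * HOURS_IN_MONTH) - prev_fraction * HOURS_IN_MONTH
            if band > 0:
                cost += band * LIST_HOURLY * (1 - discount)
            prev_fraction = threshold
        return cost

    full_month = sustained_use_cost(HOURS_IN_MONTH)
    list_price = HOURS_IN_MONTH * LIST_HOURLY
    print(f"${full_month:.2f} vs ${list_price:.2f} list")   # ~30% overall discount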

6. Published Cloud Computing Prices:

Microsoft's unpublished discounts and hybrid licensing variables make it difficult to get a fully accurate picture of what a given organization would pay for cloud computing services. However, cloud benchmarking vendor Cloud Spectator has published a summary of the reported prices for IaaS services.

The report compares a host of different cloud vendors, including the top three. It groups together similar instances and compares the average rates for the various services, including any applicable long-term commitment discounts.

The report is from September 2017, so check current rates, as prices change. The study nevertheless offers a decent market overview that supports some generalizations for this specific moment in time. The tables below illustrate how AWS, Azure and GCP stack up for Linux and Windows instances of different sizes and commitment terms. To make the tables a little easier to read, green marks the lowest price in each category, yellow the second lowest, and pink the most expensive.

Table 1: Comparison of Cloud Spectator IaaS industry pricing.

Table 2: Comparison of Cloud Spectator IaaS industry pricing (continued).

Next, a caveat: the study warns that the instances used for the comparison are not exactly identical. If you need a particular performance level, you will need to ensure that the instances you evaluate all fall within an acceptable range for your applications. As the report advises, "Customers should compare pricing and performance offerings when purchasing cloud services to assess a cloud product that best fits their needs."

Having said that, from the charts above, a few generalizations emerge:

● Prices vary widely, especially with long-term commitments: Some businesses assume that because the cloud vendors are in a price war, their prices are all about the same. This obviously isn't true. For instance, with a three-year commitment on a 2X-large Linux instance, Microsoft's published price is more than twice that of AWS or GCP.
● Google is almost always the lowest-cost provider among the top three, and it is almost never the costliest alternative. Google states explicitly on its website that it intends to compete on price. GCP's cloud offerings are not as broad as Amazon's or Microsoft's, so this approach probably makes sense.
● Microsoft Azure is never the cheapest: There is a lot of pink in the Azure column. And where Azure is the second cheapest, its prices are usually similar to Amazon's. Although it is possible that Microsoft is far more affordable once unpublished discounts are added in, it seems more likely that clients choosing Azure do so for reasons other than price.
● Amazon's long-term deals are particularly good: Amazon's rates look a lot like Microsoft's if you pay by the hour or by the month. But if you sign up for one or three years, Amazon's rates come close to Google's, and in some cases beat them.
● The big takeaway is that unless you know exactly what you need, you cannot know which vendor is the least expensive.

7. Conclusion:

Information technology has been evolving ever since its inception, and technological advancement in the field of data storage continues to unfold at an exceptional pace. But it is the introduction of the cloud computing model that has transformed the shape of the data center over the last twenty years. Cloud provides on-demand applications and computing resources. The model provides a variety of resources that consumers can use as services, as opposed to the traditional method of dedicating infrastructure to each application.

CIOs can leverage data centers through the use of clouds, gaining the benefits of business continuity, ease of scalability and cost reduction. Cloud is not mutually exclusive with traditional data center models. The data center transformation will continue, and cloud adoption even within the traditional data center will be necessary to meet ever-increasing requirements. Cloud is needed to store the massive amounts of data that users generate, and cloud data centers minimize the cost of operating an organization's own centralized computer networks and servers.

AWS has been in the market longer than the other services and is doing well, holding the maximum market share, but GCP offers an excellent range of services and is growing its market share at a remarkable rate of 130%.

Based on the analysis and comparison of the pricing models of all three cloud platforms, the Google Cloud Platform was found to be the best in its class, for the following reasons:

● Google's cloud rates on both the Unix and Windows platforms are low compared to AWS and Azure. As per the report, the hourly rate of a GCP Windows instance is $0.081, much lower than the price of Azure ($0.097) and AWS ($0.130); a quick monthly comparison at these rates follows after this list. Google Cloud offers 95 different services that can run on any platform, whereas the services offered by Amazon, though greater in number, are platform specific.

● The Google Cloud Platform also has a cutting-edge advantage in terms of flexibility: it lets the customer customize compute instances, whereas other cloud platforms offer only specific, limited customization. Google also offers a $300 credit that can be used across all services.

● Google Cloud Platform offers free tiers of different services for a longer period of time than other cloud providers, whose free offers typically run for 30 days or 12 months, subject to consumption limits.
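
A quick arithmetic check of the first bullet, using the report's Windows-instance hourly rates and roughly 730 hours in a month:

    # Monthly cost at the hourly rates quoted above (Cloud Spectator report).
    HOURLY_RATES = {"GCP": 0.081, "Azure": 0.097, "AWS": 0.130}
    HOURS_IN_MONTH = 730

    for provider, rate in sorted(HOURLY_RATES.items(), key=lambda kv: kv[1]):
        print(f"{provider:>5}: ${rate * HOURS_IN_MONTH:6.2f}/month")
    # GCP: $59.13, Azure: $70.81, AWS: $94.90 — at these list rates GCP is
    # roughly 38% cheaper than AWS for a continuously running instance.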

Figure 3: On Demand Price Comparison of AWS and GCP

8. References:

• Ali Hammadi and Lotfi Mhamdi, "A survey on architectures and energy efficiency in Data Center Networks," Computer Communications, vol. 40, pp. 1–21, 2014.
• M. Kutare, G. Eisenhauer, C. Wang, K. Schwan, V. Talwar, and M. Wolf, "Monalytics: online monitoring and analytics for managing large scale data centers," in Proceedings of the 7th International Conference on Autonomic Computing, 2010.
