CLOUD COMPUTING

SMART DATA COLLECTIVE ARTICLE COLLECTION
Cloud computing. Some argue it's nothing more than a re-dressed virtualization trend from the 1960s. Others say cloud is the same concept as computer "timesharing," and that analysts and pundits should move along because there's nothing new here. That's pure hogwash. Think of cloud as a service of computational power, storage and more, much like the service you'd get from a utility company. The cloud allows you to plug into a required capability, whether it's for print servers or analytics. Cloud computing is typically available on a metered basis when demanded, and can be accessed via self-service methods: simply plug in via a portal and access what you need. And it's delivered via a host of technologies, software, processes, devices and physical locations that power this "service." This collection includes articles on cloud computing concepts, trends, delivery models and challenges. All articles are written by Paul Barsch, a technology marketing executive at a $2.4B analytics company based in the United States.

ABOUT THE AUTHOR

Fortune 500 marketing director Paul Barsch has worked in technology for fifteen years at companies such as:
- Terayon Broadband
- BearingPoint Management Consulting
- HP Enterprise Services
- Teradata

Connect with him on Twitter or LinkedIn.

TABLE OF CONTENTS
Where is the Cloud?
Cloud OPEX vs. CAPEX – Which is the Better Choice?
Data Warehouse as a Service – a Good Pick for Mid-Sized Businesses
Will Pay Per Use Pricing Become the Norm?
Everything But Faster
Private Clouds are Here to Stay
Top Financial Risks of Cloud Computing
Want Cloud Success? Aim for Simplicity
Private Clouds for Analytics Over Public Clouds
Are Public Clouds Hurtling Towards Disaster?
Could Your Cloud Platform Become a Competitor?
Is Bigger, Better in Cloud Computing?
From Complexity to Simplicity in the Cloud
Should Public Clouds Be Considered Complex Environments?
Will Cloud Computing Change Your Business Model?
ABCs of Elasticity: Always Be (Thinking) Cloud
Cloud Not Just About Cost Savings
Moving to Public Cloud? Do the Math First
CAPEX for IT – Why So Painful?
What the Sharing Economy Means to Cloud Computing


WHERE IS THE CLOUD?
Published November 19, 2012

When the term "cloud computing" comes to mind, it's fair to say that most people think of it as some nebulous group of computers in the sky delivering content to mobile devices and workstations whenever it's required. How far off is that definition, and where exactly is "the cloud"?

Image courtesy of Flickr. By M Hooper

In a dusty corner of San Antonio, Texas, the cloud is about to come to life. As a Microsoft corporate VP takes a shovel and firmly plants it into the soil, she proclaims: "The cloud is not the cloud in the sky, it's what we are about to break ground on (right here)."* That's because San Antonio, Prineville (OR), and Quincy (WA), among many other cities across the globe, now host massive data centers filled with tens of thousands of blinking computers owned by Microsoft, Rackspace, Facebook, Amazon and others.

Imagine this: racks upon racks of Intel-based servers. Multi-colored wires networked from computer to computer. Huge vaults of pipes for cooling and air-conditioning massive computer farms. A few sleepy network engineers scurrying from machine to machine checking connections. Is this the cloud?

Thomas M. Koulopoulos, author of "Cloud Surfing," says that's part of the story. "(The cloud) is a heavily monitored, fortified, and secure array of computers that are built with the objective of securing data with multiple layers of physical and cyber security," he says. But those asking 'where's the cloud' aren't asking the right question, Koulopoulos argues. "This is sort of like asking, where does electricity exist?"


That's because cloud computing is much more than the device in your hand streaming music, the corporate dashboard on your wirelessly connected tablet, or even megawatt-powered data centers. Instead, think of cloud as a service of computational power, storage and more, much like the service you'd get from a utility company. The cloud allows you to plug into a required capability, whether it's for print servers or analytics. The cloud is typically available on a metered basis when demanded, and can be accessed via self-service methods: simply plug in via a portal and access what you need. And it's delivered via a host of technologies, software, processes, devices and physical locations that power this "service."

Thomas M. Koulopoulos asserts that where the cloud physically exists doesn't matter: "What counts instead is the question 'is it there when I need it?'" he says. For people like me, this is too utilitarian an approach. I want to know "the where" of cloud computing.

Coming back to the original question, then: the cloud exists in your connected handheld device, on your laptop, in your data center, in another company's data center, across millions of miles of fiber optic cables, and wirelessly in the air. The cloud, then, really isn't just a place. It's more of a system, a massive investment in people, dollars, infrastructure, time and talent. So where is the cloud? The answer is places seen and unseen. In short – everywhere.

*As told in "The Shadow Factory" by James Bamford.

— About the Author — Paul Barsch directs marketing programs for cloud, hosting and managed services for Teradata, a leader in data warehousing and analytics. Paul has also worked in senior marketing roles for global consultancies EDS (an HP company) and BearingPoint. Contact him at @paul_a_barsch on Twitter.


CLOUD OPEX VS. CAPEX – WHICH IS THE BETTER CHOICE?
Published August 15, 2012

Among CIOs and CFOs, debate swirls over how best to budget for and acquire IT resources. The key question: should companies own, lease, or essentially "rent" IT services via cloud computing? It's actually a tougher choice than you may think.

An Economist article explains the "Big Data" conundrum facing global enterprises. Data volumes are increasing faster than many companies have the capacity to store them, much less mine them for insights. In this exploding "data revolution," many companies are also finding that their internal processes, much less their budgets, for acquiring technology are not keeping up with business user needs.

That's why cloud computing is so attractive. With the public cloud model, compute, memory and storage can be acquired on a "pay per use" basis. In the public cloud there is typically no hardware or software to buy upfront, so companies can use operating expense budgets (OPEX) to fund their needs, giving them plenty of budgeting flexibility. The alternative is to purchase needed hardware and software outright, thus capitalizing assets (CAPEX).


On the surface, going the OPEX route seems the better choice, but the decision is more complex than it appears. One primary factor in the CAPEX vs. OPEX debate boils down to how much of each budget a company has (as determined by the CFO). Plenty of small to medium-sized businesses are capital constrained; they simply don't have tons of dollars to invest in assets. For these companies it makes sense to explore options such as leasing or cloud that convert a given investment into an operating expense flowing through SG&A on the income statement.

Larger companies usually have more significant capital budgets. That said, they still must balance various competing alternatives, seeking the best return on investment. These companies complete capital budgeting processes on an annual basis, and they're usually only capital constrained for unexpected mid-year requirements, or restricted by investor and/or industry expectations for ratios such as the current ratio or Return on Assets (ROA).

Another consideration in deciding CAPEX vs. OPEX for IT acquisitions is resource utilization, notes Bernard Golden. In a CIO Magazine article, Golden offers the example of deciding whether to buy or rent a car. If you plan on using the car full-time, then purchasing or financing it likely makes the most sense. However, if you plan on using the automobile only one week a month, then renting is probably the better choice. In the same way, if you plan on running IT resources at or near full utilization for long periods of time, it probably makes the best economic sense to purchase assets (if you have the CAPEX to do so). Using IT resources for just a little while, and then shutting them down? That's probably the best use case for cloud computing, according to Golden.

There are many facets to the OPEX vs. CAPEX debate in cloud computing, certainly more than can be covered in a short article. Rather than asking which funding model is best in the abstract, a more realistic choice architecture is: 1) which budget do you have available (some companies have little to no capital budget)? and 2) do you plan to fully utilize the IT asset for long periods of time (1-3 years or more), or do you need it on a temporary basis (days, weeks, months)? Answers to these questions will help you determine which (if any) cloud computing options are the best business choice for your company.
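That choice architecture can be folded into a rough break-even sketch. All figures below (purchase price, useful life, hourly rate) are hypothetical illustrations for the arithmetic, not vendor quotes:

```python
# Rough CAPEX-vs-OPEX break-even sketch. All numbers are hypothetical
# illustrations, not vendor pricing.

def monthly_cost_capex(purchase_price, useful_life_months):
    """Straight-line monthly cost of owning the asset outright."""
    return purchase_price / useful_life_months

def monthly_cost_cloud(hourly_rate, hours_used_per_month):
    """Pay-per-use cost: you pay only for metered hours."""
    return hourly_rate * hours_used_per_month

# Hypothetical server: $12,000 purchase, 36-month life, vs. $0.50/hour cloud.
own = monthly_cost_capex(12_000, 36)               # ~$333/month regardless of use
full_time = monthly_cost_cloud(0.50, 730)          # $365/month at 100% utilization
part_time = monthly_cost_cloud(0.50, 73)           # $36.50/month at 10% utilization

print(f"own: ${own:.2f}  cloud full-time: ${full_time:.2f}  cloud 10%: ${part_time:.2f}")
```

The sketch mirrors Golden's car analogy: at full-time utilization, owning and renting cost about the same, so ownership (and control) wins; at 10% utilization, the metered option is roughly a tenth of the price.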


DATA WAREHOUSE AS A SERVICE – A GOOD PICK FOR MID-SIZED BUSINESSES
Published December 5, 2012

Plenty of mid-sized businesses don't have the time, talent, or investment dollars to manage a data warehouse environment, much less monitor and maintain these services within their own data center. That's why companies seeking analytic capabilities are increasingly looking at cloud-based options to shift responsibility for data warehouse ownership, administration and support to a contracted "as a service" provider.

Courtesy of Flickr. by Henk Achtereekte

For mid-sized businesses, cloud computing makes a lot of sense. With cloud, no longer do such businesses have to worry about procuring, maintaining, and continually investing in IT resources. Now, companies that might not have been able to afford world-class infrastructure and talent can access such capabilities on a monthly or subscription basis.

Previously, a mid-sized business had three options for acquiring analytics. With the right talent, it could build its own solution from scratch or utilize open source applications, a very impractical approach for small to medium enterprises (SMEs). Another common alternative was to procure 'off-the-shelf' applications and/or consulting resources from mid-tier system integrators to cobble together a working solution to meet business needs. These two choices (build vs. buy) in most cases still required a company to staff and manage the service within its own data center. A third option was to get out of the service support business altogether by contracting with a hosting provider for network connectivity, security, and monitoring of the data warehouse environment.


While hosting is an attractive choice, mid-sized companies still must retain responsibility for purchasing technology assets, application DBA support, backup/archive/restore activities, application tuning, and the security protection of their business intelligence assets, among other things.

These data warehousing options – build, buy, and/or host – are still available today. However, some medium-sized enterprises are looking to cloud computing models as a fourth option. For companies seeking analytics capabilities to manage and optimize their business, with the ultimate goal of delivering value to the business and its customers, another intriguing delivery model is acquiring data warehouse resources "as a service."

More than hosting, cloud-based Data Warehousing as a Service (DWaaS) typically provides an integrated solution of hardware, software and services in a bundled package. These as-a-service offers may include monitoring, security, maintenance and support for the entire data warehouse environment, including data integration, core data warehouse infrastructure and business intelligence applications. DWaaS is a good choice for enterprises seeking an alternative to owning, managing and investing upfront in information technology. And CIOs and CFOs of mid-sized businesses are finding "as a service" delivery models especially valuable because many lack the capital budgets to acquire technology, or have trouble affording the expertise needed to run a data warehouse environment.

These are all good reasons to consider the "as a service" delivery model for data warehousing. But there are added benefits in terms of shifting responsibility to the provider. First, solution ownership in terms of capital expenditures becomes a thing of the past. No longer must CFOs worry about keeping data warehousing assets on the corporate books. With DWaaS, "solution ownership" transfers to the service provider, freeing up capital budgets to acquire other business assets and ultimately reducing the risk of investing in rapidly depreciating information technology.

In addition, with DWaaS, data warehouse support should be included in the service bundle. This means DBA activities such as database and system administration, backup/archive/recovery, security, and performance and capacity management are all likely included in one monthly price. ETL and BI support might also extend to monitoring data integration jobs for completion and ensuring delivery of daily/weekly/monthly reports. Thus, DWaaS should be a complete, integrated and managed service offer – a very appealing choice for mid-sized companies.

These types of cloud-based service offers are appropriate for companies that don't have the time, resources or upfront capital to acquire advanced capabilities that were once limited to much larger companies. In terms of taking advantage of the power of analytics, who says big companies should have all the fun?



WILL PAY PER USE PRICING BECOME THE NORM?
Published December 20, 2012

CIOs across the globe have embraced cloud computing for myriad reasons, but a key argument is cost savings. If a typical corporate server is utilized only 5-10% over the life of the asset, then it's fair to argue the CIO paid roughly 10x too much for that asset (relative to a fully utilized one). To get better value, a CIO then has two choices: embark on a server consolidation project, or use cloud computing models to access processing power and/or storage on a metered basis, when needed.
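The ~10x figure is simply the reciprocal of utilization: cost per useful hour rises as utilization falls. A one-liner makes the arithmetic explicit (the 5-10% rates are the article's illustrative range, not measurements):

```python
# Effective overpayment multiplier for an under-utilized asset:
# the cost per *useful* hour is the asset cost divided by the
# fraction of hours actually used.

def cost_multiplier(utilization):
    """Relative cost per utilized hour vs. a fully utilized asset."""
    return 1.0 / utilization

print(cost_multiplier(0.10))  # at 10% utilization, ~10x the cost per useful hour
print(cost_multiplier(0.05))  # at 5%, the multiplier is ~20x
```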

Cloud computing isn't the only place where utility-based pricing is taking off. An article in the Financial Times shows how the use of "Big Data" – in terms of volume, variability and velocity – is stoking a revolution in real-time, pay-per-use pricing models.

The FT article cites Progressive Insurance as an example. With the simple installation of a device that can measure driver speed, braking, location and other data points, Progressive can gather multiple data streams and compute a usage-based pricing model for drivers who want to reduce premiums. For example, rates may vary depending on how hard a customer brakes, how "heavy they are on the accelerator," or how many miles they drive. The installed device streams automobile data wirelessly back to Progressive's corporate headquarters, where billing computations take place in near real time. Of course, the driver must be willing to embark on such a pricing endeavor, and possibly give up some privacy, but this is often a small price to pay for a pricing model that correlates safer driving habits with a lower insurance premium.


And this is just the tip of the iceberg. Going a step further to true utility-based pricing, captured automobile data points also make it possible to create innovative pricing models based on other risk factors. For example, if an insurance company decides it is riskier to drive in certain locales, or from 2am-5am, it can attach a "premium price" to those decisions, letting drivers choose their insurance rate. Even more futuristic, it might be possible to be charged more or less based on how many passengers are riding with you!

Whether it is utility-based pricing of electricity by time of day, cloud computing, or pay-as-you-go insurance, with the explosion of "big data" and other technologies it's already possible to stream and collect various data, calculate a price, and bill a customer in a matter of minutes. The key considerations will be consumer acceptance of such pricing models (given the privacy tradeoffs) and adoption rates. If the million "data collection" devices Progressive has installed are any indication, much less the general acceptance of utility-priced cloud computing models, we've embarked on a journey from which it's far too late to turn back.
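The telematics pricing described above can be sketched as a simple scoring model. Everything here (factor names, weights, rates) is invented for illustration; a real actuarial model would be far more involved and is certainly not Progressive's actual formula:

```python
# Illustrative usage-based insurance pricing from streamed telematics
# counts. All factors, weights, and rates are hypothetical.

BASE_MONTHLY_PREMIUM = 100.0

def usage_based_premium(miles_driven, hard_brakes, night_trips):
    """Adjust a base premium using monthly telematics counts."""
    mileage_factor = 1.0 + 0.002 * max(miles_driven - 500, 0)  # surcharge past 500 mi
    braking_factor = 1.0 + 0.01 * hard_brakes                  # each hard stop adds 1%
    night_factor   = 1.0 + 0.02 * night_trips                  # each 2am-5am trip adds 2%
    return BASE_MONTHLY_PREMIUM * mileage_factor * braking_factor * night_factor

# A gentle driver vs. a riskier one (hypothetical telemetry counts).
print(round(usage_based_premium(400, 0, 0), 2))   # base rate: no surcharges
print(round(usage_based_premium(900, 12, 4), 2))  # heavier usage -> higher premium
```

The design point is the one in the article: each behavior the device can measure becomes a priced risk factor, so safer habits translate directly into a lower bill.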



EVERYTHING BUT FASTER
Published September 18, 2012

Speed might not be the only way to win in the marketplace, but it sure does help. Companies are discovering that the element of speed – in decision making, in delivering products and services, and in communicating – is one of the few remaining bastions of competitive advantage.

Image courtesy of Flickr, by the prodigal untitled13.

Zero latency is the process of removing or reducing the time between an event and an action. Companies that can react and respond to changing market conditions faster than competitors (in a value-creating manner) usually end up first to the lunch table.

Amazon is a great case study of a company pursuing zero latency wherever possible. Take for example Amazon's initiative to roll out new warehouses on a global basis to compete with retailers. Currently on Amazon.com, when customers order a product they have shipping options including overnight delivery. However, if new warehouses are closer to customers, it's feasible for Amazon to offer same-day shipping (order by 4pm). This of course will put pressure on brick-and-mortar retailers that currently have the "get it right now" advantage.

Amazon has also worked out this "time to value" concept with cloud computing. In the past, if a company wanted to acquire hardware/software solutions, it was necessary to negotiate with vendors, sign contracts, and get products shipped, installed and turned on. Such a process could take anywhere from two weeks to two months or more. With cloud computing, it's now a lot easier to acquire similar capabilities from Amazon Web Services with just a credit card and a checkbox for the customer agreement. With cloud computing, the time between event (the need for IT solutions) and action (gaining IT solutions) is down from weeks to minutes.

High-frequency trading is another area where speed equals advantage. In this mode of trading, the key for hedge funds and investment banks is to co-locate servers at stock exchanges to reduce the round-trip time needed to complete an equity trade. Traders are now competing with faster machines, better algorithms and faster pipes into stock exchanges. In a field where trades are made in microseconds, those who can trade faster than others gain significant advantage, to the tune of millions of dollars.

Of course, there's also a downside to speed. As business processes are cut to reduce steps, and decisions are made faster and faster (nearing the speed of light), it's much easier to make mistakes. And when mistakes are made (see Knight Capital), there is little to no time to correct them.

Speed wins, but there's definitely a careful balance between winning (too) fast and losing slow. The key for each business is to find that balance, discover where customer needs aren't being met, then work to reduce the time between event and value to as close to zero as possible.



PRIVATE CLOUDS ARE HERE TO STAY
Published July 17, 2012

Some cloud experts proclaim that private clouds are "false clouds," or that the term was conveniently conjured to support vendor solutions. Other analysts hedge their bets by proclaiming that private clouds are a good solution for the next 3-5 years, until public clouds mature. I don't believe it. Private clouds are here to stay (especially for data warehousing) – let me tell you why.

For starters, let's define public vs. private cloud computing. NIST and others do a pretty good job of defining public clouds and their attributes: they are remote computing services that are typically elastic, scalable, self-service, metered by use, built on internet technologies, and more. Private clouds, on the other hand, are proprietary and typically sit behind the corporate firewall, and they frequently share most of the characteristics of public clouds. However, there is one significant difference between the two delivery models: public clouds are usually multi-tenant (i.e. shared with other entities/corporations/enterprises), while private clouds are typically dedicated to a single enterprise – i.e. not shared with other firms. I realize these definitions are not accepted by all cloud experts, but they're common enough to set a foundation for the rest of the discussion.

With the definition that private clouds equate to a dedicated environment for a single or common enterprise, it's easy to see why they'll stick around – especially for data warehousing workloads.

First, there's the issue of security. No matter how "locked down" or secure a public cloud environment is said to be, there's always going to be an issue of trust that must be overcome by contracts and/or SLAs (and possibly penalties for breaches). Enterprises will have to trust that their data is safe and secure – especially if they plan on putting their most sensitive data (e.g. HR, financial, portfolio positions, healthcare and more) in the public cloud.


Second, there's the issue of performance for analytics. Data warehousing requirements such as high availability, mixed workload management, near real-time data loads and complex query execution are not easily managed or deployed using public cloud computing models. By contrast, private clouds for data warehousing offer the higher performance and predictable service levels expected by today's business users. There are myriad other reasons why public clouds aren't ideal for data warehousing workloads, and analyst Mark Madsen does a great job of explaining them in this whitepaper.

Third, the multi-tenant environment of public cloud computing brings increasing complexity, which will lead to more cloud breakdowns. In a public cloud environment there are lots of moving pieces and parts interacting with each other (not necessarily in a linear fashion) within any given timeframe. These environments can be complex and tightly coupled, where failures in one area easily cascade to others. For data warehousing customers with high availability requirements, public clouds have a long way to go. And the almost monthly "cloud breakdown" stories blasted across the internet aren't helping their cause.

Finally, there's the issue of control. Corporate IT shops are mostly accustomed to having control over their own IT environments. In flexibly outsourcing some IT capabilities (which is what public cloud computing really is), IT effectively gives up some or all control over its hardware and possibly software. When there are issues and/or failures, IT is relegated to opening a trouble ticket and waiting for a third-party provider to remedy the situation (usually within a predefined SLA). In times of harmony and moderation, this approach is all well and good. But when the inevitable hiccup or breakdown happens, it's a helpless feeling to be at the mercy of another provider.

When embarking on a public cloud computing endeavor, a company is effectively tying its fate to another provider for specific IT functions and/or processes. Key questions to consider are:

- How much performance do I need?
- What data do I trust in the cloud?
- How much control am I willing to give up?
- How much risk am I willing to accept?
- Do I trust this provider?

There are many reasons why moving workloads to the public cloud makes sense, and in fact your end state will likely be a combination of public and private clouds. But you'll only want to consider public cloud after you carefully think through these questions. And once the answers are known, you'll also conclude that private clouds are here to stay.




TOP FINANCIAL RISKS OF CLOUD COMPUTING
Published June 13, 2012

Cloud computing definitely has upside: adopters can speed delivery of analytics, gain flexibility in deployments and costs, and transfer IT headaches to another company. With all these advantages, however, it's important to keep in mind that cloud carries financial risks, including potential costs from lawsuits and reputational damage following a provider's security or privacy breach, and possible revenue losses from provider downtime and outages.

For any type of business decision, there are various risks to consider: strategic, operational, financial, compliance and reputational (brand). These risks should also be criteria for any decision to move workloads to cloud computing. For the sake of discussion, let's focus on financial risk.

First, cloud computing carries financial risk from potential data or privacy loss, especially in complex multi-tenant environments. If there is a breach of unencrypted personally identifiable information (PII), many US states have laws that require consumer notification. Companies suffering a data breach also typically provide consumer credit monitoring services for up to one year. One research firm estimates the total cost of a data breach averages $7.2 million (USD). In addition, such breaches may open companies up to class action lawsuits that could total millions more in damages.

To mitigate the risk of data loss or privacy breach, cloud providers do everything in their power to safeguard data, including server hardening, user provisioning and access controls, enforcement of password and data privacy policies, monitoring and logging for intrusion detection, self-auditing, third-party security audits (when specified), mandatory personnel training, and in some cases encryption of tables and/or columns. And while these practices are often more robust in public cloud environments than in most corporate data centers, trust concerns about possible cloud data loss or privacy breach still linger. Perhaps this is why, at least for the next 2-3 years, companies will increasingly choose private cloud over public cloud environments.

To mitigate financial risk, some companies seek indemnification, where the cloud provider agrees to take on or share the liability of a security breach, including associated costs. However, cloud financial indemnifications are extremely rare, and even when offered, the risk is often transferred to insurance companies via the purchase of cyber insurance. And of course, such insurance costs will be baked into cloud service fees.

Other financial risks for companies doing business in the cloud include lost revenue from significant availability issues. If cloud environments are down for hours or days, this could adversely impact a business' ability to perform analytics or reporting, and thus may affect revenue opportunities. To offset possible lost revenues, most cloud providers will sign up for availability SLAs with associated penalties (usually redeemable as service credits).

Cloud computing has so much upside that it's easy for business managers to declare "all things must be cloud." That's well and good, but one must also carefully consider cloud risks. And while risk cannot be eliminated, it can surely be mitigated with proper planning, and with solid execution when things go wrong. Companies considering cloud computing must remember that, just as in outsourcing, there is no such thing as transference of responsibility. In moving workloads to the cloud, carefully document the upsides and downsides, examine your decisions in terms of risk (including financial risk), and then make the best decision possible for your organization.
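These exposures can be folded into a single rough expected-cost figure. The $7.2M average breach cost is the article's number; the breach probability, outage hours, revenue rate, and SLA credit below are hypothetical assumptions for the sketch:

```python
# Rough expected annual financial risk of a cloud deployment.
# The $7.2M average breach cost comes from the article; the probabilities,
# revenue figures, and SLA credit rate are hypothetical assumptions.

def expected_breach_cost(annual_breach_probability, avg_breach_cost=7_200_000):
    """Expected annual cost of a data/privacy breach."""
    return annual_breach_probability * avg_breach_cost

def expected_downtime_cost(hours_down, revenue_per_hour, sla_credit_per_hour=0):
    """Lost revenue during outages, net of SLA service credits."""
    return hours_down * (revenue_per_hour - sla_credit_per_hour)

risk = (expected_breach_cost(0.01)                 # assume a 1% annual breach chance
        + expected_downtime_cost(8, 5_000, 500))   # 8 hrs down; credits offset a little
print(f"expected annual risk: ${risk:,.0f}")
```

Even a crude model like this makes the article's closing point concrete: SLA credits offset only a sliver of the downtime loss, and none of the breach exposure, so responsibility is never truly transferred.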



WANT CLOUD SUCCESS? AIM FOR SIMPLICITY
Published May 30, 2012

Retailers have long followed the mantra of "stack it high and watch it fly." In fact, stores often pile goods to the ceiling, make shoppers navigate in-aisle displays, and price everything with bright and obnoxious signage. However, some progressive retailers have discovered that reducing "choice" can actually boost sales. And in terms of cloud computing, one successful vendor has taken a page from this retailing playbook by removing confusing computing choices.


In "Less is More in Consumer Choice," I cited a 2007 study in which researchers conducted experiments in a shopping mall aimed at understanding the mental fatigue associated with too much choice. The studies concluded that when faced with too many buying options, participants couldn't stay on task in completing projects; in effect, their brains were overwhelmed by choice overload.

The folks at Amazon Web Services (AWS) have figured this out. Cloud computing can already be a very complex endeavor, with behind-the-scenes infrastructure consisting of interconnections among servers, networks, applications, controllers and more. So, by abstracting the complexity of cloud architectures behind a simple user interface, AWS makes cloud computing easy to consume. But AWS has taken simplicity a step further by actually reducing mental clutter and choice. Cloudscaling CTO Randy Bias notes that AWS reduces choice by simply providing infrastructure as a service, without all the bells and whistles associated with offering the entire cloud stack. AWS provides and maintains virtualized storage and compute resources; AWS users must provide whatever else they require. AWS, Bias says, has "reduced choice by simplifying the network model,


(and) pushing onto the customer responsibility for fault tolerance," as server instances are not persistent. Bias also explains that AWS' EC2 service requires developers to fit their applications to the infrastructure, not the other way around. Amazon is effectively saying to developers, "Build your applications with our infrastructure in mind" so they are cloud ready, instead of "build your application first, then leave it to AWS to figure out how to scale it."

Going forward, there will be plenty of technology-savvy buyers with the ability to sort through myriad complex cloud computing options. However, there will also be large segments of cloud buyers (i.e., those in lines of business) who will want to sign up for cloud computing with corporate credit cards. These buyers will appreciate simple user interfaces, easy-to-access resources, and less mental clutter and exhaustion in their buying decisions. When it comes to choice architecture for cloud computing, AWS shows us that less really is more.



PRIVATE CLOUDS FOR ANALYTICS OVER PUBLIC CLOUDS
Published in Teradata Magazine, May 23, 2012

Cloud solutions are useful when additional computing power is needed. And cloud capabilities are relatively easy to procure because customers can sign up with a credit card—via an online portal—and start using services within minutes. This is the public cloud delivery model of Amazon and Google, among others.

Public clouds, however, can have a "not-so-silver lining," with documented concerns over security, privacy, availability, data loss and latency issues. Organizations wishing to mitigate the risks of storing and analyzing sensitive data in public clouds are increasingly turning to private clouds.

Public cloud solutions generally satisfy user expectations for applications like sales force automation or marketing campaign management. However, data warehousing requirements such as high availability, mixed workload management, near-real-time data loads and complex query execution are not easily managed or deployed using public cloud computing models. By contrast, private clouds for data warehousing offer the higher performance, better security and predictable service levels expected by today's business users.

Read the Teradata Magazine article.



ARE PUBLIC CLOUDS HURTLING TOWARDS DISASTER?
Published April 12, 2012

The freight train of cloud computing has left the station and is on an exponential growth tear. But just like in the cartoon strips, there might be broken track or a chasm ahead: emerging complexity, and the fragility that comes with it, in the public cloud. Should enterprises enjoy the fast-paced ride, or think cautiously about what lies ahead?

Cloud computing makes sense for companies seeking to deploy IT solutions faster and more flexibly on a pay-as-you-go basis. However, it's not all upside: as complexity in cloud environments (especially public ones) increases, so does the potential for catastrophic failure within systems. And while IT and business users should expect occasional downtime, as public cloud complexity rises there is also the risk of much worse.

Previous columns have examined the architectures and technological complexities of cloud computing environments. We've also examined how the moving pieces in cloud computing are often interdependent and tightly coupled, where failure in one component can affect the performance of others. And we have seen why it's unwise to assume that large cloud providers are a safer choice for keeping data secure and protected from loss. As cloud environments inevitably experience technological advances, increased multi-tenancy, colossal system sizes, tight coupling of processes and components, and myriad IT personnel decisions (and errors), these systems will grow riskier to the point where system accidents become commonplace.


Charles Perrow, author of "Normal Accidents," says of such environments: "Given (these system) characteristics, multiple and unexpected interactions of failures are inevitable. This is an expression of an integral characteristic of the system, not a statement of frequency."

Solutions to these challenges include adding more redundancy and buffers for components and processes, along with additional data protections (i.e., backups on and off site) to prevent temporary or, worse, complete data loss. It should also be a goal to lessen the chances of failure through better training of the personnel managing such systems, and through disaster planning built on the expectation that system failure isn't just possible, it's very likely.

Public clouds are extremely beneficial to many organizations, allowing them to obtain compute and storage resources with just a few clicks and a credit card. However, there are certainly risks and other considerations (e.g., operations, data, security, privacy, legal) as well, and these should not be overlooked. Perrow reminds us that great events have small beginnings. With data as the lifeblood of an organization, it's good to enjoy all the benefits that cloud computing brings; it's also wise to pay attention to the little details and dependencies that could turn a small hiccup into a severe case of heartburn.
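The payoff from the redundancy recommended above can be made concrete with simple probability. The sketch below assumes component failures are independent; note that tight coupling undermines exactly that assumption, which is why redundancy alone is not a cure.

```python
# Availability of N independent replicas, where the system survives as long
# as at least one replica is up. Figures are illustrative.

def availability_with_replicas(component_availability: float, replicas: int) -> float:
    """1 minus the probability that every replica is down at once."""
    return 1.0 - (1.0 - component_availability) ** replicas

single = availability_with_replicas(0.99, 1)  # 99%: roughly 3.7 days of downtime a year
dual   = availability_with_replicas(0.99, 2)  # 99.99%: under an hour a year
```

Correlated failures (a shared power feed, a bad configuration push) break the independence assumption, so real systems gain far less from replication than this arithmetic suggests; that is Perrow's "unexpected interactions" in numeric form.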



COULD YOUR CLOUD PLATFORM BECOME A COMPETITOR?
Published September 4, 2012

As more companies turn to cloud computing, social media and online selling platforms to avoid spending budgets on infrastructure, it is also likely that they are sharing a key business enabler—behavioral data. So what happens when your cloud-based provider shifts from providing infrastructure or a platform for your business to actually competing with you?


Online business Zynga understands that while it develops games such as FarmVille, Mafia Wars and others that run on Facebook's platform, it's not farfetched that Facebook could get into the profitable business of gaming itself. In fact, Zynga is turning away (though not completely) from Facebook to also support games on Google+ and other social/mobile platforms. That's also because there's nothing prohibiting Facebook from doing exactly what Zynga does: making great games. As a Forbes article points out, while the relationship between Facebook and Zynga is currently symbiotic, "There are many things Facebook could do to damage Zynga's business such as limiting the Facebook access of game developers, modifying Facebook's terms of service, giving more favorable treatment to Zynga's rivals and building Facebook's own games."

The same is true for businesses that operate using Amazon.com's infrastructure for warehousing and shipping. A Financial Times article cites Amazon.com as


offering sellers access to a marketplace of 173 million people. However, the article also mentions that "the downsides include giving Amazon direct access to these customers for information and communication and the potential for conflicts of interest."

Platforms such as Amazon and Facebook, and cloud computing services such as Salesforce.com, NetSuite and Google, all provide services in the cloud that make it easier for companies to avoid paying for software and hardware upfront. In addition, platform-as-a-service (PaaS) providers such as Force.com offer cloud-based delivery models for deploying applications. And while for small to mid-sized businesses there is plenty of upside in avoiding hard-to-come-by capital expenditures for infrastructure, the downside of cloud computing is that companies may be sharing information with their cloud providers that gives those providers insight into marketplace segments.

For example, infrastructure and platform cloud providers have access to online traffic patterns and consumption or development needs. And in most instances, online selling platforms (i.e., Amazon.com) have behavioral data on customer shopping patterns, market baskets and purchases. Armed with massive amounts of behavioral data, it's becoming easier for such companies to analyze large marketplaces and decide whether they want to "get in the game" themselves, or at the very least use such data to improve recommendation systems or category profitability. Information is power, and giving up too much information, especially behavioral data, is definitely a downside of using cloud computing providers.
An article in Technology Review sums it up nicely: "Hundreds of thousands of developers know building apps that rely on the Facebook or Twitter platforms comes at a risk—at any time, the companies can change their access rules or launch competing features." Some companies have little choice but to use cloud computing providers to get to market quickly, avoiding capital outlays and shortening development times. However, it's also important to acknowledge the risks of sharing valuable behavioral and other data with your "friendly" cloud provider.



IS BIGGER, BETTER IN CLOUD COMPUTING?
Published April 2, 2012

"Bigger is better" is a phrase that's widely assumed to be self-evident. However, whether it is cruise ships capsizing or international banks catching a major cold in the 2008 financial crisis, we know that while there's presumably safety in "size," there can also be inherent complexity and subsequent risk.


Risk management is a critical topic business and IT professionals must take into account in terms of cloud computing. Especially for mission-critical data such as human resources, payroll, financial or even patient data, the security and privacy of sensitive information is a paramount concern when considering cloud delivery models.

But in cloud computing, risk comes in other forms as well, including financial viability, especially when there seems to be a new cloud entrant in the marketplace every week. New and attractive markets usually attract entrants at a sizzling pace; however, when the eventual market shakeout comes due, there's also a chance your cloud provider might go out of business completely, taking your data and applications with it.

And let's not forget operational risk in the cloud. It might be assumed that large cloud providers have the upper hand in hiring the talent and expertise necessary to manage inherently complex cloud environments. However, all the talent in the world is not going to save an environment that's poorly architected, tightly coupled, and one operational mistake (or bad decision) away from catastrophic meltdown. Ultimately, one cannot master risk. Management of risk is about all we can hope for.


Mark Twain once famously said, "Put all your eggs in one basket, and then watch that basket." However, Mr. Twain surely didn't have cloud computing on his mind when he spoke. For IT and business professionals considering cloud computing solutions, it's probably tempting to short-list providers with a sizable cloud computing presence (e.g., the ten largest and most established vendors). However, for a few of these companies cloud computing is an ancillary business, and there's no guarantee that strategic plans won't shift to the point that spin-offs become a possibility. In addition, with cloud computing margins already thinning by some estimates, there's also a good chance investor pressures may force cloud providers to skimp on redundancy or recklessly cut corners elsewhere. It's a long way of saying that when it comes to cloud computing, I'm not convinced there's safety in numbers, nor that a bigger presence or market share signals a fundamentally better offer.



FROM COMPLEXITY TO SIMPLICITY IN THE CLOUD
Published March 22, 2012

The inner workings of cloud computing can be quite complex. That's why the founders of Dropbox are on the right path: make cloud computing as simple as possible, with easy-to-understand user interfaces to mask the "behind the scenes" infrastructure and connections.

Open up the lid of the "black box" of cloud computing and what you'll see is anything but simple: massive, parallel server farms that never sleep; algorithms worming and indexing their way through global websites; large data sets waiting in analytical stores for discovery; message buses that route, control and buffer system requests; and massive processing of images, text and more on a grandiose scale. That's why companies that take the complexity out of cloud computing are thriving.

Take, for instance, Dropbox, a company that allows users to access their personal or corporate files from any internet-connected device. A Technology Review article featuring a Q&A with CEO Drew Houston cites the efforts of Dropbox to mask the behind-the-scenes work of "having your stuff with you, wherever you are." With various operating systems, incompatibilities, file formats and more, Dropbox engineers had to wade through mountains of bugs and fixes to make the user experience as seamless as possible. "There are technical hurdles that we had to overcome to provide the illusion that everything is in one place…and that getting it is reliable, fast and secure," Houston says.

Looking at Dropbox from the outside, a user only sees "visual feedback" via a folder, icon or the like on the desktop. But under the hood there's a whole gaggle of technologies and code that makes Dropbox work. And creating a seamless experience takes painstaking effort down to the tiniest components, says Houston: "Excellence is the sum of 100 or 1000 of…little details."


If information technology leaders plan to bring "BI to the masses," simplicity will be a necessary requirement to mask the inherent complexity of cloud computing. Ultimately, plenty of business users won't care how their particular applications are delivered, only that they are delivered with efficiency, reliability and security. Thus, user interfaces designed with clarity, elegance and ease of use in mind will ultimately put a "wrapper" on complexity and drive further adoption of cloud computing delivery models. It's also likely that business users will never appreciate the hard work that goes into designing, delivering and sustaining their applications on a 24x7x365 basis, accessible from any internet-enabled device. But then again, perhaps that's the point. Application availability, security, reliability, simplicity and productivity are now the baseline expectations of business users; it's best to deliver "in the cloud" exactly what they want.



SHOULD PUBLIC CLOUDS BE CONSIDERED COMPLEX ENVIRONMENTS?
Published March 13, 2012

A recent analyst report suggests public clouds are prone to failure because they are inherently complex. However, just because there are multiple interacting objects in a particular environment, this doesn’t necessarily imply complexity.

Cloud computing is all the rage for business users and technology buyers. And why not? It provides a fast and flexible option for delivering information technology services. In addition, cloud computing drives value through higher utilization of IT assets, elasticity for unplanned demand, and scalability to meet business needs today and tomorrow.

However, there are risks in the cloud, especially in the public cloud, where the business and news media regale us with case studies of data loss, security issues, failed backups and more. Perhaps one reason that public clouds are prone to failure, and perhaps always will be, is that some analysts consider these environments to be complex and tightly coupled. And if indeed this is the case, then IT buyers must consider that failure isn't only possible, it's inevitable.

Yet first we must ask: are public clouds really complex environments? To determine whether a particular system is complex, we must see whether it has characteristics such as connected objects (nodes and links with interdependencies), multiple messages and transactions, hierarchies, and behavioral rules (instructions). Public cloud services available from companies such as Microsoft, Google and Amazon Web Services (AWS) often consist of various components such as applications


(front end, and back end such as billing), controllers and message-passing mechanisms, hardware configurations (disk, CPU, memory), databases (relational and NoSQL), Hadoop clusters and more. In addition, there are various management options (dashboards, performance monitoring, identity and access), and these environments typically operate with multiple users and multiple tenants (compute environments shared by more than one application and/or company), and sometimes span multiple geographies. And from a complexity standpoint, we haven't even discussed the processes involved in building cloud environments, much less operating them. In summary, a cloud environment has lots of moving pieces and parts interacting with each other (not necessarily in a linear fashion) within any given timeframe.

Multiple interacting agents can help define whether a particular environment is complex, but another determinant is also very important: whether processes are tightly or loosely coupled. Richard Bookstaber, author of A Demon of Our Own Design, writes that tightly coupled systems have components critically interdependent with little to no margin for error. "When things go wrong, (an error) propagates linked from start to finish with no emergency stop button to hit," Bookstaber says. So a tightly coupled system is one where linkages (dependencies) are so "tight" that errors or failures cascade and eventually cause the entire system to fail.

This discussion is important from a risk management perspective. If we believe that data is one of the most valuable assets of a corporation, and if we believe public clouds are complex environments with tightly coupled components and little to no slack (buffers) to stop failures, then a set of practices and processes should be put in place to manage the risk of data breach, theft, loss or corruption.
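Bookstaber's cascade can be illustrated with a toy simulation: a request passes through a chain of components, and we compare a tightly coupled chain (any failure kills the request) against a loosely coupled one in which each call may be retried, a simple form of slack. All failure rates, stage counts and retry policies below are invented for illustration.

```python
import random

def call_succeeds(p_fail: float, rng: random.Random) -> bool:
    """One component call; fails with probability p_fail."""
    return rng.random() >= p_fail

def tightly_coupled(stages: int, p_fail: float, rng: random.Random) -> bool:
    """No slack: any single failure fails the whole request."""
    return all(call_succeeds(p_fail, rng) for _ in range(stages))

def loosely_coupled(stages: int, p_fail: float, rng: random.Random,
                    retries: int = 2) -> bool:
    """Slack: each stage gets retries, so transient errors are absorbed."""
    for _ in range(stages):
        if not any(call_succeeds(p_fail, rng) for _ in range(1 + retries)):
            return False  # stage failed even after retries
    return True

rng = random.Random(0)
trials = 20_000
tight = sum(tightly_coupled(10, 0.05, rng) for _ in range(trials)) / trials
loose = sum(loosely_coupled(10, 0.05, rng) for _ in range(trials)) / trials
# Analytically: tight success is 0.95**10, roughly 0.60; with two retries the
# per-stage failure rate drops to 0.05**3, so loose success is near 0.999.
```

The gap between the two numbers is the value of buffers; the catch, as both Bookstaber and Perrow stress, is that retries and redundancy only help when failures are transient and uncorrelated.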



WILL CLOUD COMPUTING CHANGE YOUR BUSINESS MODEL?
Published December 21, 2012

Cloud computing is changing the manner in which consumers and businesses buy, manage and use technology. However, the impact of cloud on technology providers is causing an even more pressing adjustment—as business models shift from simply selling and servicing technology to instead helping companies consume it.

The business model for plenty of technology companies hasn't changed much over the last fifty to sixty years: sell equipment or software, install it, provide a bit of training, and reap contracts from subscriptions and/or maintenance. And if it isn't broken, don't fix it, right?

The rise of cloud computing is changing this paradigm. In an October 2011 report, Bo Di Muccio and Thomas Lah of TSIA Research suggest a drastic change is coming for technology service providers. Instead of simply installing and managing technology (via shared or managed services), cloud computing will force companies to help users "consume" technology to achieve business benefit.

Prior to cloud computing, companies buying technology had no choice but to accept "complexity," say Di Muccio and Lah. To reap the benefits of technology, business line managers enlisted system integrators or consulting firms to install, integrate and manage technology on their behalf. In addition, companies had to write an upfront check (capex) for hardware, software, training and implementation. Cloud changes this model. As more business managers get comfortable with cloud and its inherent benefits, Di Muccio and Lah argue, technology service providers will be forced to adopt a "consumption economics" model in which they no


longer receive payment for shipping, installing and servicing a box, but instead receive revenues based on usage (pay per use). Di Muccio and Lah also note that cloud-based computing shifts "risk" from buyers to technology service companies. For example, in previous years a business line manager might pay half a million dollars to a vendor whether or not he or she used the technology. With cloud computing's pay-per-use model, the risk shifts to the vendor, which only gets paid when technology is consumed.

To be sure, the shift in the mix of complexity vs. consumption is not occurring overnight. However, with adoption of cloud computing increasing exponentially, technology and service providers must make plans today to meet tomorrow's business expectations. This means developing new pricing models, products, services, skills and training to make companies more than "buyers" of technology: successful "consumers" of it as well.
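The risk shift Di Muccio and Lah describe is easy to see in a toy cost comparison. The figures below are hypothetical, chosen only to mirror the half-million-dollar example in the text.

```python
# Upfront licensing vs. pay-per-use: with a license, the buyer's cost is fixed
# regardless of consumption; with metering, the vendor earns nothing until the
# technology is actually used. All figures are illustrative.

def upfront_cost(license_fee: float, hours_consumed: float) -> float:
    """Buyer pays the full fee whether the technology is used or not."""
    return license_fee

def metered_cost(rate_per_hour: float, hours_consumed: float) -> float:
    """Buyer pays only for hours actually consumed."""
    return rate_per_hour * hours_consumed

# A project that stalls after just 200 hours of light use:
stalled_license = upfront_cost(500_000.0, 200.0)  # vendor keeps $500,000
stalled_metered = metered_cost(40.0, 200.0)       # vendor earns only $8,000
```

Under the metered model the vendor, not the buyer, carries the cost of technology that is bought but never consumed, which is exactly the "consumption economics" shift described above.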



ABCS OF ELASTICITY: ALWAYS BE (THINKING) CLOUD
Published January 30, 2013

Sales personnel have a mantra, "ABC" or "Always Be Closing," as a reminder to continually drive conversations to a selling conclusion or move on. In a world where business conditions remain helter-skelter, traditional IT capacity management techniques are proving insufficient. It's time to think differently, or "ABC": Always Be (Thinking) Cloud.

Getting more for your IT dollar is a smart strategy, but running your IT assets at the upper limits of utilization, without a plan to get extra capacity at a moment's notice, isn't so brainy. Let me explain why.

In his latest book, "Antifragile," author Nassim Taleb writes about how humans are often unprepared for randomness and thus fooled into believing that tomorrow will be much like today. He says we often expect linear outcomes in a complex and chaotic world, where responses and events are frequently not dished out in a straight line. What exactly does this mean? Dr. Taleb often bemoans our preoccupation with efficiency and optimization at the expense of reserving some "slack" in systems. For example, he cites London's Heathrow as one of the world's most "over-optimized" airports. At Heathrow, when everything runs according to plan, planes depart on time and passengers are satisfied with airline travel. However, Dr. Taleb


says that because of over-optimization, "the smallest disruption in Heathrow causes 10-15 hour delays."

Bringing this back to the topic at hand: when a business runs its IT assets at continually high utilization rates, it's perceived as a beneficial and positive outcome. However, running systems at near 100% utilization leaves little spare capacity or "slack" to respond to changing market conditions without affecting the expectations (i.e., service levels) of existing users. For example, in the analytics space, running data warehouse and BI servers at high utilization rates makes great business sense, until you realize that business needs constantly change: new users and new applications come online (often as mid-year requests), and data volumes continue to explode at an exponential pace. And we haven't even mentioned corporate M&A activity, special projects from the C-suite, or unexpected bursts of product and sales activity.

In a complex and evolving world, relying solely on statistical forecasts (i.e., linear or multiple linear regression analysis) isn't going to cut it for capacity planning purposes. On-premises "capacity on demand" pricing models and/or cloud computing are possible remedies, allowing a business to burst into extra compute, storage and analytic processing when needed. Access to cloud computing can definitely help "reduce the need to forecast" traffic. However, many businesses won't have a plan in place, much less the capability or designed processes at the ready to access extra computing power or storage at a moment's notice. In other words, many IT shops know "the cloud" is out there, but they have no idea how they'd access what they need without a whole lot of research and planning first. By then, the market opportunity may have passed. Businesses must be ready to scale (where possible) to more capacity in minutes or hours, not days, weeks or months.
This likely means having a cloud strategy in place, completing vendor negotiations (if necessary), building adaptable and agile business processes, identifying and architecting workloads for the cloud, and testing a "battle plan," so that when demands for extra resources filter in, you're ready to respond to whatever the volatile marketplace requires.
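One small piece of such a battle plan can even be automated. The sketch below shows a hypothetical headroom check that treats sustained high utilization, rather than a single spike, as the trigger for a pre-negotiated cloud burst; the 85% threshold and three-sample window are assumptions, not recommendations.

```python
# Trigger a cloud burst only on sustained utilization, so a single noisy
# spike doesn't fire the runbook. Threshold and window are illustrative.

def should_burst(utilization_samples: list, threshold: float = 0.85,
                 window: int = 3) -> bool:
    """True when the last `window` samples all exceed `threshold`."""
    recent = utilization_samples[-window:]
    return len(recent) == window and all(u > threshold for u in recent)

spike     = should_burst([0.70, 0.95, 0.60])        # one spike: stay put
sustained = should_burst([0.70, 0.90, 0.92, 0.96])  # sustained load: burst
```

A check like this is only useful, of course, if the contractual and architectural groundwork described above has already been done, so that "burst" maps to a tested runbook rather than a scramble.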


CLOUD NOT JUST ABOUT COST SAVINGS
Published February 15, 2013

In a very competitive macroeconomic climate, companies seek to reduce costs and drive those savings either to the bottom line or into innovative projects. Cloud computing is often seen as one such avenue, because companies can ultimately reduce capital expenditures and data center operating costs. However, as good as "cost savings" sounds, you might be surprised to discover that, according to one analyst firm, cost reduction isn't the primary driver for cloud computing.

Too many under-utilized data marts and application servers, and too many wasted kilowatts. That's what consortiums like Energy Star report: most corporate servers are utilized only 5-15% of the time. Besides wasting energy, companies are saddled with poorly utilized assets that still cost precious IT dollars in maintenance and, possibly, software subscriptions. Cloud computing, then, can often ride to the rescue by creating shared pools of system resources via virtualization technologies. This in turn helps reduce the number of servers needed, slashes application licenses, and ultimately trims power and cooling costs.

While cost savings are no doubt important, a recent analyst survey cites the primary drivers for CIOs to approve cloud computing as "(delivering results to the business) better," followed by "(delivering results to the business) faster." Cost savings comes in a distant third. That's because global product lifecycles are speeding up as companies adjust to niche consumer demands and more nimble, agile competitors.

For evidence of faster product lifecycles, see GE appliances. In an Atlantic magazine article, GE appliance manager Lou Lenzi says that a refrigerator model design was formerly good for at least seven years before a complete product refresh


was necessary. Now, because of accelerated product cycles, models are only good for 2-3 years, as customers regularly clamor for new features, colors, styles and models such as "$3,000 smart refrigerators."

Cloud computing can help companies respond to faster product cycles. With cloud, customer demands can be met almost instantaneously, because analytic resources for product and customer analysis are at the ready and can grow and/or shrink as the business demands. No more waiting for capital budget refreshes, or for IT to find cycles to accommodate immediate business needs. And best of all, these resources are generally available on a pay-per-use or subscription basis, so they're easier to fund from OPEX budgets. User-friendly cloud self-service options also enable business managers to create analytic development "laboratories" where they can carve out system resources to work on special projects, test new theories, or collaborate with product, marketing and R&D teams on a global basis.

In short, cost savings are very important to CIOs, but not always the primary driver for cloud adoption. Fundamentally, CIOs seek cloud computing options to help them align more closely with business user demands and better meet customer needs. Indeed, with shrinking product lifecycles, the ability to source immediate compute, storage and analytics, for faster business results, could mean the difference between an extremely profitable quarter and a disastrous one.
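The consolidation arithmetic behind the Energy Star figure is worth making explicit. The sketch below assumes server load is freely poolable under virtualization, which real workloads only approximate; the utilization numbers are illustrative.

```python
import math

def consolidated_servers(current_servers: int, avg_utilization: float,
                         target_utilization: float) -> int:
    """Servers needed to carry the same total load at a higher utilization."""
    total_load = current_servers * avg_utilization
    return math.ceil(total_load / target_utilization)

# 100 servers averaging 10% utilization carry the load of 10 fully busy
# machines; at a 60% utilization target, 17 virtualized hosts suffice.
needed = consolidated_servers(100, 0.10, 0.60)
```

Even this rough model shows why the savings case writes itself, and also why, as the survey above suggests, "better" and "faster" can still outrank it: the consolidated pool is what makes bursting and self-service possible in the first place.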

— About the Author — Paul Barsch directs marketing programs for cloud, hosting and managed services for Teradata, a leader in data warehousing and analytics. Paul has also worked in senior marketing roles for global consultancies EDS (an HP company) and BearingPoint. Contact him at @paul_a_barsch on Twitter.


MOVING TO PUBLIC CLOUD? DO THE MATH FIRST
Published February 22, 2013

A manufacturing executive claims that many companies "didn't do the math" before rushing to outsource key functions to outside suppliers. Are companies making the same mistake in rushing to public cloud computing infrastructures?


The herd mentality: we know it well. Once a given topic (e.g. agile development, Hadoop "big data" implementation, etc.) becomes the darling of business and management publications, a gold rush to implement usually follows. Unfortunately, sometimes there isn't much, or any, thought put into gauging enterprise fit or building a business case for the latest and most fashionable idea.

Take, for example, outsourcing. During the early 2000s, cheap labor rates in China and India caused senior managers to see dollar signs: they could cut labor costs nearly in half while gaining a specialized workforce dedicated to developing and building products and/or servicing customers. There was a catch, however. Once delivery lag times, transportation costs, loss of corporate agility, language and communication barriers and more were considered, the so-called cost savings often failed to materialize. "About 60% of the companies that offshored manufacturing didn't really do the math," says Harry Moser, an MIT-trained engineer who runs the Reshoring Initiative. "They looked only at the labor rate; they didn't look at the hidden costs."

Shifting compute needs to public cloud computing infrastructures is an idea gaining traction. As the C-suite contemplates ways to deliver better, respond to market changes faster and reduce costs, cloud is an increasingly


tantalizing option. In fact, the market for public cloud computing is said to be $131B in 2013 and growing, according to a tier-one analyst firm.

While companies are choosing cloud for myriad reasons, it's readily apparent that procuring public infrastructure, development platforms or applications from a cloud provider is really just another form of outsourcing. This brings some challenges to the forefront, specifically the need to understand the business case and use cases for cloud computing at your own company. And that analysis must go beyond simple cost savings. Don't make the same mistakes as those executives who rushed to outsourcing in the past decade. Tally up the cost savings, but also spend time diagnosing the "hidden risks" of public cloud: the well-known issues of downtime and availability costs, data security and privacy in a multi-tenant environment, and data latency.

In addition, think about the level of control you want over your IT infrastructure. Are you comfortable relying on another vendor for critical IT infrastructure needs? In case of the inevitable IT failure, or a worst-case cyber-attack, are you one of those who would want to start working a problem right away, rather than opening a trouble ticket and waiting for an answer? You'll also need to consider skill sets (tally those you have, and those you'll need), in addition to architecting your various workloads for cloud infrastructures.

Please don't get me wrong. For many companies, sourcing computing needs to public infrastructures makes a lot of sense, but only when supported by a thorough business case and detailed risk analysis. You'll need a thorough understanding of what you're jumping into before "joining the herd," especially when an on-premises solution might work better. In other words, "do the math" (figuratively and literally).
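The "do the math" advice can be sketched as a toy three-year cost comparison that includes a couple of the hidden costs mentioned (downtime exposure and data-movement fees). Every figure and parameter below is a hypothetical assumption for illustration, not a quote from any real provider:

```python
# Toy three-year cost model: public cloud vs. on-premises, including some
# "hidden" costs. All figures are hypothetical assumptions.

def three_year_cost(monthly_fee=0, upfront_capex=0, monthly_opex=0,
                    expected_downtime_hours=0, cost_per_downtime_hour=0,
                    monthly_egress_fees=0):
    """Total cash out over 36 months for one sourcing option."""
    months = 36
    return (upfront_capex
            + months * (monthly_fee + monthly_opex + monthly_egress_fees)
            + expected_downtime_hours * cost_per_downtime_hour)

# Public cloud: no upfront asset, but subscription, egress and outage exposure.
cloud = three_year_cost(monthly_fee=8_000, monthly_egress_fees=1_200,
                        expected_downtime_hours=20, cost_per_downtime_hour=5_000)

# On-premises: big upfront purchase, ongoing operations, less outage exposure.
on_prem = three_year_cost(upfront_capex=250_000, monthly_opex=4_000,
                          expected_downtime_hours=10, cost_per_downtime_hour=5_000)

print(cloud)     # 431200
print(on_prem)   # 444000
```

With these assumed inputs the two options land within a few percent of each other, and the ranking flips easily as downtime, egress or utilization assumptions change, which is exactly why the math is worth doing before joining the herd.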



CAPEX FOR IT – WHY SO PAINFUL?
Published June 10, 2013

CAPEX dollars reserved for investments in property, plant and equipment (including IT) are notoriously hard to secure. In fact, IT leaders often express dismay at the process involved not only in forecasting CAPEX needs, but also in stepping through arduous internal CAPEX budget approvals. What's all the fuss with CAPEX, and why is it so difficult to obtain?

An investment analyst says that 2013 should be a banner year for capital investments. And another analyst, Mark Zandi of Moody's, said in late 2012: "Businesses are flush and highly competitive and this will shine through in a revival of investment spending by this time next year..." So where's the CAPEX? Apparently in short supply. A New York Times article says that companies are stockpiling cash, and taking on debt, but investing very little in themselves. For now, if there are significant IT investments, it appears OPEX is the preferred route.

First, let's be clear: the CAPEX vs. OPEX debate is really about a shift in cash flows and outlays; there are few other financial advantages to one over the other. Choosing between them is largely a matter of company policy, as in one case (CAPEX) assets are recorded on the balance sheet and depreciated, and in the other (OPEX) purchases are expensed through daily operations.

Certainly, there are capital-intensive businesses, such as telecoms, manufacturers and utilities, that must continually invest in infrastructure. These types of companies will always spend significantly on CAPEX. On the other hand, there are companies that are CAPEX restricted, such as start-ups, companies under the


watchful eye of private equity firms, and medium-sized businesses that don't have much CAPEX as a matter of course.

Obtaining CAPEX can also be painful for IT leaders. At the TDWI Cloud Summit in December 2012, one stage presenter in charge of IT mentioned that getting an idea from the "back of a napkin" to capitalization budget approval could take 18 months. This is why cloud computing options are attractive. With cloud, companies that either have capital to spend (but don't want to), or are CAPEX constrained, can take advantage of existing compute infrastructures on a subscription basis. With cloud, investments in IT capabilities are easier to digest via OPEX than by front-loading a significant chunk of change into a business asset. And of course, there are other reasons to choose cloud computing as well, such as elastic provisioning and full resource utilization.

Regardless, it appears that for the present, CAPEX dollars (especially for IT) will be in short supply. Perhaps this is just one of the many reasons there's a flurry of M&A activity in the cloud computing space?
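The accounting distinction drawn above (capitalize and depreciate vs. expense as you go) can be sketched as follows. The purchase price, useful life and subscription figures are illustrative assumptions:

```python
# Sketch: P&L expense schedules for CAPEX (straight-line depreciation)
# vs. OPEX (subscription). All figures are illustrative assumptions.

def capex_expense_schedule(purchase_price, useful_life_years):
    """Straight-line depreciation: equal expense each year; cash left up front."""
    return [purchase_price / useful_life_years] * useful_life_years

def opex_expense_schedule(annual_subscription, years):
    """Subscription: expensed as incurred; cash paid as you go."""
    return [annual_subscription] * years

print(capex_expense_schedule(600_000, 5))   # five years of 120000.0 each
print(opex_expense_schedule(150_000, 5))    # five years of 150000 each
```

Either way the capability gets paid for; the difference is when the cash leaves and where the item sits on the financial statements, which is why the choice is largely one of company policy and cash-flow preference.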



WHAT THE SHARING ECONOMY MEANS TO CLOUD COMPUTING
Published July 1, 2013

The sharing movement is in full swing. Innovative "collaborative consumption" companies are helping pool under-utilized assets such as homes, boats and cars, and then renting them out as services. With the rise of peer-to-peer sharing, it also makes sense that cloud computing, which at heart is the pooling and renting of compute and storage resources, would gain traction. But just as there are risks in sharing property and other assets, there are also risks in sharing cloud computing infrastructures.

Jessica Scorpio of Fast Company has it right when she says: "A few years ago, no one would have thought peer-to-peer asset sharing would become such a big thing." Indeed, since the launch of Airbnb, more than 4 million people have rented rooms in their own houses to complete strangers. And in San Francisco, a new company called FlightCar offers to park and wash your car at the airport, with a catch: while you're away on a business trip, your car is available as a rental to others (at half the cost of other companies).

Intrinsically, the rise of the sharing economy makes sense. Why not take underutilized assets and make them available to others temporarily, thus gaining higher utilization and earning extra income? But to make a sharing economy work, trust is essential. In the case of Airbnb, homeowners must trust that the company has carefully vetted those who would rent rooms, especially when security and privacy concerns are very real. However, while there have been a few scary tales about sharing homes, cars and other services, for the most part the marketplace has run smoothly.
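The higher-utilization-plus-extra-income argument is easy to put in numbers. The idle hours, rented share and hourly rate below are hypothetical assumptions:

```python
# Back-of-the-envelope: income from renting out an under-utilized asset.
# All figures are hypothetical assumptions.

def weekly_sharing_income(idle_hours_per_week, rented_share, rate_per_hour):
    """Income from renting an idle asset for some share of its idle time."""
    return idle_hours_per_week * rented_share * rate_per_hour

# A car that sits idle 160 hours a week, rented for a quarter of that
# time at an assumed $6/hour:
print(weekly_sharing_income(160, 0.25, 6))   # 240.0
```

The same arithmetic underpins cloud multi-tenancy: renting out idle compute capacity raises utilization and spreads the owner's fixed cost across many tenants.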


In a similar vein, the big target on the back of cloud computing is trust. Cloud computing providers are still wrestling with perceptions that they are not as safe and trustworthy in terms of privacy, security and availability. And while it's true that cloud providers have greatly improved in these areas, myriad surveys show there's still significant work to do in overcoming perceptions that sensitive corporate data is often "lost, corrupted or accessed by unauthorized individuals." For both cloud computing and the sharing economy, overcoming trust issues is job one.

That said, the trend towards sharing is unmistakable. Neal Gorenflo, publisher of Shareable Magazine, says: "People don't want the cognitive load associated with owning." The same mindset can be attributed to global CIOs and CFOs who want someone else to do the work of capitalizing, maintaining, updating and running their IT systems in the cloud while they focus on driving business value. Forbes estimates that in 2013, $3.5 billion will change hands in the sharing economy. We also know that cloud revenues are on a torrid trajectory. If peer-to-peer sharing and cloud computing providers can overcome trust issues, there are few constraints on how big these markets can really be.


