A Risk Assessment Framework for Evaluating Software-as-a-Service (SaaS)

Cloud Services Before Adoption

Lionel Bernard

A Thesis
Submitted to the
Graduate Faculty
of
University of Maryland University College
in Partial Fulfillment of
the Requirements for the Degree
of
Doctor of Management

Dr. Monica Bolesta, Dr. James Gelatte, Dr. Michael Evanchik


January 21, 2011

Abstract

Software-as-a-service (SaaS) is rapidly becoming the standard software platform for
many organizations seeking to reduce their IT costs and take advantage of the inherent
flexibility, quick deployment, ready access, and scalability of the SaaS concept. SaaS is
part of the paradigm shift toward cloud computing in software, hardware, and IT
services acquisition. Swayed by its noted benefits, SaaS adopters may neglect to
consider the risks associated with SaaS and overall cloud-based services before
adoption. SaaS risks stem from its multi-tenancy, Internet dependency, and the
requirement to entrust cloud providers with confidential data. Existing standards for
selecting commercial off-the-shelf (COTS) or custom-built software and frameworks for
evaluating the risks of cloud-based services are either too broad to apply specifically to
SaaS or do not take into account some of the unique requirements of SaaS.
Furthermore, the issue of risk relevancy is significant if adopters are to formalize the
SaaS decision-making process. A review of existing cloud risk assessment frameworks
and cloud literature reveals higher-level risk dimensions of security, business continuity,
and integration as the prevalent concerns regarding SaaS adoption. To determine the
relevance of these risk dimensions, particularly to SaaS success, a web-based survey
was conducted of organizational cloud decision-makers. The results provide evidence
that certainty about some elements of security, business continuity, and integration
significantly influences the adopting organization’s level of satisfaction with its overall
SaaS experience. The findings serve as input to the development of a new SaaS-
tailored risk assessment framework. The SaaS Cloud Risk Assessment Framework (S-
CRA) is a questionnaire-based decision-making tool allowing cloud adopters to develop
a risk profile of candidate SaaS providers and solutions and make a normative and
rational decision to reduce organizational risk exposure. Despite its theoretical utility,
further qualitative research is needed to determine the viability of the S-CRA framework
in an empirical SaaS selection scenario.

Table of Contents

List of Tables ............................................................................................................................iv
List of Figures ...........................................................................................................................v
Chapter 1: Introduction ............................................................................................................1
Overview ................................................................................................................................................... 1
What Is SaaS? ......................................................................................................................................... 1
Factors Leading to SaaS Growth ............................................................................................................. 7
Impact of SaaS on the IT Industry ........................................................................................................... 9
What Is Cloud Computing? .................................................................................................................... 11
Disadvantages of SaaS .......................................................................................................................... 15
Research Purpose ................................................................................................................................. 16
Research Questions ............................................................................................................................... 18
Organization of This Research .............................................................................................................. 19
Chapter 2: Concept Development and Hypothesis .............................................................21
Introduction ............................................................................................................................................ 21
Basic Theoretical Framework ................................................................................................................. 30
Conventional Software Selection Methods ............................................................................................ 34
Analytical network process and analytical hierarchy process ................................................... 34
COTS software evaluation methods ......................................................................................... 36
Decision-making, Risk, and SaaS .......................................................................................................... 40
Gauging Successful Software ................................................................................................................ 49
Conceptual Model and Hypothesis ........................................................................................................ 50
Security risk dimension .............................................................................................................. 53
Business continuity risk dimension ............................................................................................ 53
Integration risk dimension .......................................................................................................... 54
Chapter 3: Research Method and Data Collection ...............................................................56
Target Population ................................................................................................................................... 56
Research Method ................................................................................................................................... 57
Research Instrument .............................................................................................................................. 57
Questionnaire Scales ............................................................................................................................. 59
Validity and Reliability ............................................................................................................................ 62
Content validity .......................................................................................................................... 62
Reliability ................................................................................................................................... 63
Construct validity ....................................................................................................................... 65

Chapter 4: Data Analysis and Results ..................................................................70
Introduction ............................................................................................................................................. 70
Descriptive Analysis ............................................................................................................................... 70
Factor Analysis of Dimension Construct Items ...................................................................................... 75
Hypothesis Testing ................................................................................................................................. 80
Hypothesis 1: Security risk ........................................................................................................ 80
Hypothesis 2: Business continuity risk ...................................................................................... 81
Hypothesis 3: Integration risk .................................................................................................... 82
Chapter 5: Conclusion ............................................................................................................84
Introduction ............................................................................................................................................. 84
Major Findings ........................................................................................................................................ 85
Limitations of Research Design ............................................................................................................. 91
Managerial Implications ......................................................................................................................... 92
Implications for Future Research ........................................................................................................... 94
References ..............................................................................................................................96
Appendix I: Institutional Research Board (IRB) Survey Approval Form ........................... 104
Appendix II: Survey Instrument (with Consent Form) ........................................................ 107
Appendix III: Survey Distribution Approval......................................................................... 115
Appendix IV: Construct Items Removed Based on Factor Analysis ................................. 116
Appendix V: S-CRA Framework Questions and Rating/Weight Sample ........................... 118
Appendix VI: Explanation of Risk Sub-Dimension Items in S-CRA Framework ............... 121

List of Tables

1.1. Results from Survey of Current SaaS Selection Methods (n = 252) ............................... 18

2.1. Software Selection Decision Matrix ...............................................................................25

2.2. Software Selection Decision Matrix with Probability-Weighted Factors .......................... 25

2.3. S-CRA: Summary of Key Conceptual Model Components............................................. 34

3.1. Reliability Measures and Constructs..............................................................................64

3.2. Factor Analysis for SaaS Security Risk (n = 114) ..........................................................66

3.3. Factor Analysis for SaaS Business Continuity Risk (n = 114) ........................................ 67

3.4. Factor Analysis for SaaS Integration Risk (n = 114) ......................................................67

3.5. Correlation Matrix of Latent Variables (Security Risk Construct) .................................... 68

3.6. Correlation Matrix of Latent Variables (Business Continuity Risk Construct) ................. 68

3.7. Correlation Matrix of Latent Variables (Integration Risk Construct) ................................ 69

4.1. Cross-Tabulation of Level of Satisfaction and Organization Type .................................. 72

4.2. Cross-Tabulation of Level of Satisfaction and Job Function........................................... 73

4.3. Cross-Tabulation of Level of Satisfaction and Length of Adoption ................................. 74

4.4. Cross-Tabulation of Level of Satisfaction and Continued Usage.................................... 74

4.5. Correlation of Security Risk Dimension Construct Items ................................................ 75

4.6. Security Risk Dimension Reduction Factor Analysis ......................................................76

4.7. Correlation of Business Continuity Risk Dimension Construct Items .............................. 77

4.8. Business Continuity Risk Dimension Reduction Factor Analysis.................................... 78

4.9. Correlation of Integration Risk Dimension Construct Items ............................................ 78

4.10. Integration Risk Dimension Reduction Factor Analysis ..................................................79

4.11. Level of Satisfaction and Security Risk Certainty Hypothesis Test Analysis .................. 81

4.12. Level of Satisfaction and Business Continuity Risk Certainty Hypothesis Test Analysis ......... 82

4.13. Level of Satisfaction and Integration Risk Certainty Hypothesis Test Analysis............... 83

List of Figures

1.1. The SaaS paradigm.........................................................................................................2

1.2. Top 10 SaaS provider categories, 2010 ..........................................................................4

1.3. SaaS cost and benefit incentives to management ...........................................................9

1.4. The components of cloud computing .............................................................................12

2.1. Rational decision process ..............................................................................................23

2.2. Bayes’s theorem formula ...............................................................................................28

2.3. Proposed risk relevancy determination process.............................................................33

2.4. SaaS Cloud Risk Assessment (S-CRA) framework .......................................................33

2.5. Comparative analysis of COTS and SaaS selection processes ..................................... 40

2.6. Comparison of cloud risk factors, categories, and controls in Heiser’s cloud risk factors
and the ENISA, CSA, and FedRAMP cloud risk assessment frameworks ..................... 46

2.7. Sample risk assessment questions based on Heiser’s cloud risk factors and the ENISA,
CSA, and FedRAMP cloud risk assessment frameworks ............................................... 47

2.8. Synthesized SaaS risk dimensions and sub-dimensions ............................................... 52

3.1. Likert-like SaaS adoption satisfaction rating scale and explanation ............................... 60

3.2. Likert-like SaaS certainty rating scale and explanation ..................................................62



Chapter 1

Introduction

Overview

Software-as-a-service (SaaS) was once viewed as merely a passing trend, one
among the many that proliferate in the jargon of the information technology (IT) sector.
The concept has now reached a much higher level of maturity and acceptance at a
speed unprecedented and unexpected even by those in the IT industry. The main
appeal of SaaS lies in its cost incentives. It helps organizations reduce IT costs by
channeling large software acquisition expenditures into smaller recurring payments
through “renting” software for a wide variety of business functions. The question now
facing most organizations is not whether to adopt SaaS but, rather, when to do so;
indeed, in some companies, the technology is quickly forcing the decision to either
adopt or perish in response to the need to cut costs and maximize efficiency. Despite its
cost appeal and other tangible benefits—convenience among them—SaaS comes with
its own unique risks. Organizations must resist the tendency to be blinded by the glow
of SaaS and consider these risks carefully when deciding whether or not to adopt the
technology. Hence, this research intends to highlight the importance of rational
decision-making in SaaS evaluation, determine the relevance of prevailing risk factors
to successful SaaS adoption, and propose a methodology for adopting SaaS that
considers the risks and helps reduce them. Considering risk in SaaS adoption secures
the organization’s decision to embrace this new technology and reduces the probability
that the business functions dependent on SaaS will be compromised.

What Is SaaS?

Software adoption in business organizations is experiencing a paradigm shift
from commercial or custom-built applications installed in-house to subscription-based
applications. This phenomenon, known by the acronym SaaS, is dramatically
transforming the way organizations evaluate, select, and integrate software into their
functions (Hayes, 2008). SaaS entails using software on a subscription basis via the
Internet. In this arrangement, the subscribing organization is considered a “tenant” and
incurs only usage costs rather than full ownership costs, as is the case with in-house
software (Miller, 2008). Gartner, a reputable information systems (IS) research firm,
defines SaaS as “software that’s owned, delivered and managed remotely by one or
more providers...consumed in a one-to-many model by all contracted customers
anytime, on a pay-as-you-go basis, or as a subscription based on use metrics” (Desisto
& Pring, 2010, p. 3). As illustrated in Figure 1.1, to qualify as a legitimate SaaS offering
based on the prevailing definition, online software must be web-based, provider-owned,
available only through a subscription or rental arrangement, allow the tenant to pay a
periodic and predetermined usage fee, and have a cloud-based underlying
infrastructure. In this SaaS pay-as-you-go arrangement, the software provider owns and
maintains the hardware, software, and systems that make up a specific SaaS
application.

Figure 1.1. The SaaS paradigm.
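
To make these qualifying criteria concrete, the brief Python sketch below encodes them as a checklist; it is illustrative only, and the field names are assumptions rather than an established schema.

    # Minimal sketch: the five SaaS qualifying criteria described above.
    from dataclasses import dataclass

    @dataclass
    class Offering:
        web_based: bool             # delivered through a web browser
        provider_owned: bool        # owned, delivered, and managed by the provider
        subscription_based: bool    # rented on a subscription basis, not sold
        metered_fee: bool           # periodic, predetermined usage fee
        cloud_infrastructure: bool  # runs on a cloud-based back end

    def qualifies_as_saas(offering: Offering) -> bool:
        """True only if the offering satisfies every criterion."""
        return all(vars(offering).values())

    # Example: a hosted CRM application rented per seat, per month
    print(qualifies_as_saas(Offering(True, True, True, True, True)))  # True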

Given that the web browser is the primary medium used by subscribing
organizations to access SaaS applications, SaaS is poised to dramatically expand the
status of the web browser from its original role as the window to the Internet to the
standard operating system for computers. Google, one of the pioneers in SaaS
provision, is banking on the web browser’s expanded role as the gatekeeper to the
SaaS domain. Google introduced its heralded Chrome web browser in 2008 not only as
a means of supporting its SaaS offerings, such as Google Apps, an online word
processing and spreadsheet application, and Gmail, but also as a strategy to position
the Chrome browser as the predominant tool for accessing online applications. Unlike
Microsoft’s Internet Explorer (IE) and Mozilla’s Firefox browsers, Chrome is specifically
designed to optimize online applications, with faster response time, a friendlier interface,
and feature-rich offerings (Havenstein, 2008). As the use of online applications
increases, a free web browser that facilitates SaaS access and use could monopolize
the SaaS distribution channel by forcing providers to adjust their applications to
accommodate this browser. Microsoft embarked on a similar path more than a decade
ago. Sensing the growing availability and usage of the Internet, Microsoft purposely
linked its free IE browser to its Windows operating system (OS), thus annihilating its
main competitor, Netscape, and allowing Microsoft to leverage IE’s subsequent
widespread use to push its own content to Internet users. Google’s Trojan horse
strategy to benefit from the SaaS trend appears to be similar.

SaaS offerings are expanding exponentially and now include mission-critical
applications in almost any business interest category imaginable, catering to a wide
variety of organizations. SaaS has matured beyond its initial perception as a panacea
and “bleeding-edge” technology and is now considered a realistic IT investment, with
the added advantage of being cost effective, convenient, and a credible substitute for in-
house or custom-built applications (Desisto & Pring, 2008). Organizations considering
SaaS adoption have a menu of options to choose from to fit virtually any function or
process. Those seeking online accounting and financial management services, for
example, can turn to Intacct.com’s suite of SaaS offerings targeted at small and medium
businesses or large corporations; a version is even available for accounting firms.
Organizations considering a SaaS-based call-center application can look to the
offerings of UCN’s inContact, with features that include an interactive voice response
system (IVR), speech recognition, call blending and web-based administration,
reporting, monitoring, and recording. Automotive and tire retailers and wholesalers can
adopt Tireware’s enterprise resource planning (ERP) SaaS solution, which combines
point-of-sale, inventory, accounting, and parts catalog functions in a comprehensive,
subscription-based online application. Among the SaaS eLearning offerings are
Exambuilder.com, which allows educators to design, administer, and report on exams
for a fee ranging from $5 to $15 per student, and ScribeStudio.com, which enables
companies to upload video speeches and presentations and distribute the content to
conferences and seminars. SaaS-Showplace.com, a web directory of SaaS providers,
includes 45 categories ranging from asset management to talent management and
more than 2,200 SaaS providers as of late 2010. As shown in Figure 1.2, collaboration
(web-conferencing) functions, customer relationship management (CRM) applications,
and document management solutions are the most frequently adopted forms of SaaS
applications.

Source: SaaS-Showplace.com

Figure 1.2. Top 10 SaaS provider categories, 2010.

For most organizations, the adoption of SaaS is not a sudden leap of faith but,
rather, a creeping deployment entailing gradual integration and replacement of in-house
applications. This cautious approach to SaaS selection and integration stems from both
latent reservations about entrusting vital applications to an outside provider and the
perception that SaaS transitioning involves a fair amount of complexity. Once an
organization comes to understand the benefits of SaaS in its evaluation process, its
fears are diminished by the demonstrable economic value of SaaS. As a litmus test, most
organizations initially commit to a single SaaS implementation to match a specific
business function, eventually establishing a mixed implementation environment
composed of SaaS and non-SaaS applications (Desisto, Paquet, & Pring, 2007). If the
initial test implementation is satisfactory, the organization will continue to assimilate
more SaaS applications into its operating environment. Shaklee, a California-based
manufacturer and retailer of all-natural beauty and cleaning products, had initial
concerns about the security and service risks of SaaS (Donston, 2008). After some
organizational self-reflection, careful planning, and a risk mitigation strategy using fine-
tuned contractual service-level agreements (SLAs) with SaaS providers, the company
decided to adopt the address verification features of StrikeIron.com to validate
addresses of customers in its online store. The success of this initial arrangement
encouraged Shaklee to eventually adopt several other SaaS provisions, including a web
analysis and marketing application, a data warehousing solution, and an event
management solution.

SaaS has evolved and matured as a technology to the point where it is now
growing in acceptance and adoption and becoming the platform of choice for many
organizations. McNee (2007) notes this evolutionary phenomenon and describes a
distinction between SaaS 1.0 and SaaS 2.0. SaaS 1.0 includes applications that
emphasize functionality and cost effectiveness but are limited in configurability.
Organizations gravitate to commoditized SaaS 1.0 applications because they are niche-
oriented and inexpensive, can be deployed quickly, and have a low total cost of
ownership (TCO). In contrast, the SaaS 2.0 applications described by McNee (2007),
which began to emerge only in 2005, are much broader in scope and offer organizations
greater flexibility in configuration and integration. Whereas SaaS 1.0 applications are
focused on a specific need, such as video conferencing, SaaS 2.0 offerings combine a
variety of high-end business functions, such as ERP and human resource management
(HRM), into a single integrated SaaS package.

The evolution of SaaS has been described as taking place in four distinct phases
of maturity. Miller (2008) notes that during the 1990s and at its first maturity level, SaaS
was embodied by the idea of the application service provider (ASP). ASP entailed a
many-to-many model wherein a provider hosted a unique and customized copy of an
application for each subscriber. At the second maturity level, each subscriber accessed
the same non-customized application. The third maturity level is similar to the second in
structure, but here, each subscriber is given the ability to implement minor configuration
changes, such as user access privileges. At the final and current maturity level, SaaS
applications are scalable and flexible, allowing subscribers to expand the applications
they use in response to demand, to configure applications to suit their operating
environments, and to access applications from mobile devices, such as cell phones,
personal digital assistants (PDAs), and laptops.
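
The four maturity levels can be restated compactly; the sketch below is my own summary of the description above, not Miller's wording.

    # Illustrative summary of the four SaaS maturity levels.
    maturity_levels = {
        1: "ASP model: a unique, customized application copy per subscriber",
        2: "One shared, non-customized application for all subscribers",
        3: "Shared application with minor per-subscriber configuration",
        4: "Scalable, configurable, mobile-accessible (current level)",
    }
    for level, description in maturity_levels.items():
        print(f"Level {level}: {description}")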

SaaS adoption by both small and large companies appears to have increased as
the technology has matured and become more credible. One estimate places SaaS
spending as of 2008 at 17% of software budgets for large companies (1,000 or more
employees), approximately 11% for mid-sized companies (100 or more employees), and about
26% for small companies (fewer than 100 employees). These numbers represent a
proportional increase from only 5% of software spending 3 years earlier among
companies of all sizes (SnapLogic, 2008). Just as the advent of the Internet ignited a
movement toward this medium as a repository of digitized information of all types, SaaS
is slowly luring companies and individuals alike to engage in such online activities as
storing photos and videos; using free or subscription-based web mail services; and
using online applications, such as Salesforce.com to manage sales contacts and
Elephantdrive.com to store documents. Despite its already impressive growth, SaaS
promises to grab an even bigger piece of the software spending pie of organizations’ IT
budgets. Surveys and insights by prominent IT trade publications, research
organizations, and many IT pundits (Donston, 2008; Orr, 2007; SnapLogic,
2008; eWeek, 2008; Weil, 2008a) conclude that between 70% and 84% of organizations,
both large and small, are currently considering adopting SaaS. Another research team
forecasts that SaaS growth will climb to 56% of the software market by 2011, with a
compound annual growth rate (CAGR) for the industry of 28% through 2012 (Mertz et
al., 2008). This projected growth rate will dramatically outpace conventional shrink-
wrapped software growth and far exceed the software industry’s CAGR of 11% for the
same time period (Mertz et al., 2008).
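
To illustrate what the cited rates imply, the sketch below compounds a hypothetical base revenue at the 28% SaaS CAGR and the 11% industry CAGR; the $100 million starting figure is an assumption for demonstration only.

    # Compound a hypothetical base revenue at the two cited CAGRs.
    def project(base_millions: float, cagr: float, years: int) -> float:
        return base_millions * (1 + cagr) ** years

    for label, rate in [("SaaS (28% CAGR)", 0.28), ("Industry (11% CAGR)", 0.11)]:
        print(label, round(project(100.0, rate, 4), 1))
    # After four years: SaaS grows to ~268.4 vs. ~151.8 for the industry.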

The merits of this rapid growth are readily apparent and serve to further increase
the appeal of SaaS, but as is the case in any paradigm shift, SaaS growth will also
prove to be disruptive to existing software systems, IT infrastructure, and IT staff in
organizations that have adopted or are considering adopting this technology. In
highlighting this disruptive tendency, Weil (2008b) notes that widespread adoption of
cloud computing services, including SaaS, not only causes organizations to reduce IT
staff and infrastructure—thus forcing some IT professionals to consider career
alternatives—but also shifts the high demand for IT resources, such as hardware and
labor, from non-IT organizations to cloud service providers.

Factors Leading to SaaS Growth

Several factors have supported and continue to support the high growth of SaaS.
One of these is the advent of high-speed Internet in the form of broadband. The
expansion of Internet speed and the decreasing cost of Internet access over the last few
decades have been key elements in the growth of SaaS. Some estimates indicate that
broadband access in the form of cable modem, DSL, and fixed wireless had increased
to 69% of the total U.S. population by 2007; this trend is expected to further increase to
71% by 2012 (Department of Labor, Bureau of Labor Statistics, 2008). This means that
two of every three individuals in the United States now have access to high-speed
Internet in the workplace or at home. Not surprisingly, existing and new computer
companies will continue to develop software service models that piggyback on the
Internet as a delivery medium. Other devices, such as cell phones and mobile devices
with high-speed Internet access, are quickly becoming feasible platforms for SaaS
delivery.

Beyond shrinking broadband costs, other cost and benefit incentives also drive
organizational IT decision-makers toward SaaS adoption. SaaS has evolved beyond a
mere computing trend. It helps to improve computing efficiency and the overall bottom
line of adopting organizations. In terms of cost, SaaS offers a strong incentive in
comparison to commercial off-the-shelf (COTS) software and custom-built applications.
The cost analysis for both of these solutions must usually consider such variables as
hardware, support, upgrades, maintenance, IT personnel, implementation, and the
actual software license itself in total cost-of-ownership calculations. In contrast, SaaS
costs usually include only the software license and implementation. These low upfront
costs are particularly attractive to small organizations with limited budgets and IT staff
(Info-Tech Research Group, 2006). Another cost incentive of SaaS is that the
associated subscription fees are considered non-capitalized operating expenses, thus
allowing the expense to remain within a department or division’s budget authority (Pring,
Desisto, & Bona, 2007). As an operating expense, SaaS acquisition can be expedited
because it does not normally fall under the lengthy scrutiny required for
acquisitions of capitalized equipment and products.
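
A back-of-the-envelope comparison of the cost variables named above can be sketched as follows; every dollar figure is invented for illustration and is not drawn from this research.

    # Hypothetical first-year totals for the TCO variables listed above.
    cots_costs = {
        "license": 50_000, "hardware": 20_000, "implementation": 15_000,
        "support": 8_000, "upgrades": 6_000, "maintenance": 7_000,
        "it_personnel": 40_000,
    }
    saas_costs = {"subscription": 30_000, "implementation": 5_000}

    print("COTS TCO:", sum(cots_costs.values()))  # 146000
    print("SaaS TCO:", sum(saas_costs.values()))  # 35000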

Bajaj, Bradley, and Cravens (2008) note that some software benefits may be
“intangible and non-financial.” SaaS offers credible non-financial benefits, but opinions
vary on the extent and types of benefits inherent in SaaS; a summary of financial and
non-financial benefits is shown in Figure 1.3. Bangert (2008) emphasizes the high
degree of scalability, ease of accessibility, and low startup costs of SaaS, whereas
Blokdijk (2008) points out its cost predictability and low risk factors. Bielski (2008)
suggests that the anywhere availability of SaaS can improve coordination and
collaboration among team members. Orr (2006) describes additional tangible benefits of
SaaS, including the fact that SaaS enables users to generate and dispense information
at any time and from any location, that it is less disruptive than the installation of
traditional software, that it promises a quick adoption timeframe, and that it offers a
lower cost of ownership. Pring et al. (2007) also suggest that because SaaS
applications are usually built on open standards, they are easier to integrate with
existing in-house or other SaaS applications in an organization’s operating environment.

Cost Incentives            Benefit Incentives

Low Fees                   High Scalability
Low Operating Costs        Quick Deployment
Predictable Costs          Anywhere Accessibility
Low TCO                    Simple Integration
Low Upfront Costs          Access to Expertise
Low Switching Costs        Change Management Facilitation

Figure 1.3. SaaS cost and benefit incentives to management.

The financial and non-financial benefits of SaaS have an uneven and subjective
impact in different SaaS categories and in varying organizations. For example, web
conferencing, which is the leading SaaS growth category, is popular among
organizations because it offers a cost-effective and convenient alternative to traveling
for face-to-face meetings. In contrast, ERP SaaS applications, although just as cost
effective and beneficial as any other SaaS offering, are experiencing slower adoption
largely because of the integration complexity and perceived security risks of these
applications.

Impact of SaaS on the IT Industry

In his seminal discourse on the nature of revolutions in the scientific community,
Thomas Kuhn (1996) emphasizes the idea that new discoveries usually have a
disruptive effect on conventional theories and practices, in some cases rendering older
related theories obsolete. The SaaS revolution appears to be following a similar
disruptive pattern in its impact on the IT industry. IT professionals in particular are
discovering that as organizational demand for SaaS grows, the demand for IT staff to
support in-house applications and application infrastructure decreases (Weil, 2008b).
But this trend does not spell the demise of IT professionals. Rather, SaaS providers are
absorbing these displaced workers into their operations as their human resource
demands increase in tandem with increasing numbers of subscribers. In other words,
obsolete IT department resources are being recycled as input into the operations of
SaaS providers. SaaS providers are quickly emerging as the predominant IT employers
and purchasers of computing hardware and services.

Growing SaaS adoption is also forcing conventional software vendors to offer
their applications on the SaaS platform. Intuit, Inc., publisher of the popular COTS-
based QuickBooks and Quicken accounting and personal finance applications, has
released QuickBase, an online software suite that includes its flagship financial
applications, as well as additional CRM and HRM applications (Moltzen, 2008). Not to
be outdone by Google and the emerging SaaS provider threat, Microsoft, in an initiative
labeled “Azure,” has also released subscription-based versions of its market-leading
Exchange email server and SharePoint document management software.

A familiar phenomenon occurring in the SaaS industry is the tendency of large
providers to offer a suite of products and undermine smaller providers in economies of
scale. This trend will eventually lead to a consolidating effect in the SaaS industry as
smaller providers are absorbed into larger ones or simply fold (Davies, 2008). Many
industries have experienced a similar consolidation phenomenon, notably the retail
book industry, in which such giants as Borders and Barnes and Noble annihilated
smaller local bookstores by establishing a nationalized presence and discounting books
to attract customers away from mom-and-pop stores. The online survey category is one
of the SaaS sub-industries in which consolidation is predicted to start and subsequently
spread throughout the SaaS industry (Davies, 2008). Conventional survey, or
“enterprise feedback management,” tools typically address three categories of
requirements: survey channel, survey management, and integration. The survey
channel refers to the method of survey delivery, that is, via the web, email, text
message, and phone and voice response systems. The survey management framework
establishes provisions for design, collection, analysis, and reporting of survey data. The
integration requirement covers the portability of survey data to other systems, such as a
CRM system in the case of a customer survey or a human resource system in the case
of an employee survey. A primary issue leading to suggestions of consolidation in this
sector is that the popular SaaS survey tools, such as Keysurvey, SurveyMonkey, and
ZapSurvey, address only a subset of these requirements, particularly in the number of
channels supported, whereas larger non-SaaS vendors who are actively transitioning
their products to SaaS cover most of the feedback management requirements, including
telephone and snail-mail surveys. When these large vendors complete the transition to
SaaS, their more extensive platforms may well attract survey customers away from
smaller vendors.

What Is Cloud Computing?

Because access to subscriber software is accomplished via the Internet and the
subscriber is essentially blind to the implementation at the provider’s facility, the term
“cloud computing” is also used to refer to SaaS applications. However, cloud computing
encompasses a much broader concept of network-based computing. According to Miller
(2008), the term “cloud” implies a significant number of high- and low-end
interconnected computers charged with collectively processing, storing, and delivering
information to clients. No one knows for certain where the concept or the phrase “cloud
computing” originated. It may have evolved from a standard practice among IT
professionals of depicting the Internet medium in network diagrams using the image of a
cloud. The online retailer Amazon.com is frequently credited with pioneering cloud-
computing technology when it introduced its Elastic Compute Cloud (EC2) service in
2006, enabling customers to store and run online applications on its data center server
farms (Worthem, 2008).

Cloud computing is the underlying massive computing power that makes SaaS
possible. As shown in Figure 1.4, the SaaS paradigm is merely a subset of available
cloud-computing concepts that include service-oriented architecture (SOA), web
services, the platform-as-a-service (PaaS) concept, the infrastructure-as-a-service
(IaaS) concept, Web 2.0, private and public clouds, and on-demand computing (Miller,
2008). Data centers and high-speed Internet connectivity form the foundation of cloud
computing and, hence, SaaS. SOA, which is intended for software developers,
describes the underlying software structure of SaaS applications. SOA provides a
model requiring that software be developed with discrete functions that are exposed as
web services to an open environment through a standardized addressing scheme that is
published and used by a subscribing entity to perform a specific task, all via the web.
Web services are the end result of an SOA implementation, enabling organizations to
essentially “rent” software functionality piecemeal and integrate that functionality into
custom in-house applications. Ariba, a web services vendor, provides a set of
procurement web services that can be integrated into an in-house business application
to evaluate and calculate raw material costs from a variety of suppliers. Although SOA
has not made inroads with major application vendors, which have yet to SOA-enable their
product offerings, it has achieved greater acceptance in mashup software development.
For example, developers are now using Google Maps web services to integrate
mapping features in their custom business applications (“Creating the Cumulus,” 2008).
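
In practice, the web-services model described above amounts to invoking a discrete remote function over HTTP. The sketch below shows the general shape of such a call; the endpoint URL and response fields are hypothetical, not a real Ariba or Google interface.

    import json
    from urllib import parse, request

    def get_material_cost(material: str, quantity: int) -> float:
        """Call a (hypothetical) procurement web service and return a cost."""
        query = parse.urlencode({"material": material, "qty": quantity})
        url = "https://example.com/procurement/cost?" + query
        with request.urlopen(url) as response:  # network round trip
            payload = json.load(response)       # e.g., {"cost": 1234.5}
        return float(payload["cost"])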

Figure 1.4. The components of cloud computing.

PaaS and IaaS, two emerging cloud-based technologies, are being hailed as the
next phase of cloud-computing offerings that organizations can leverage to improve
their IT systems and reduce the costs associated with establishing and maintaining
these systems (Krill, 2008). With PaaS, an organization no longer has to procure the
hardware and network access required to host its mission-critical web applications. Like
SaaS providers, PaaS vendors provide storage space and an accessibility platform for lease to
organizations to host applications that can be accessed remotely through the Internet or
through a secure channel between the organization and the PaaS provider. PaaS
providers typically have data center facilities with stringent security parameters to
safeguard physical and virtual access to their client data. The cost structure of PaaS
offerings is similar to SaaS from a per-usage perspective, but the usage component in
PaaS is the amount of storage space required to store business applications and the
amount of bandwidth required to access those applications. IaaS extends the PaaS
concept a bit further, enabling organizations to rent not only storage space and
bandwidth to store and access their applications but also the entire support
infrastructure, including servers, connectivity, data center space, software, firewalls,
access control, and remote connectivity devices.

The key difference between PaaS and IaaS is the level of control provided to
adopters of these cloud-based services. PaaS limits control over leased cloud
resources to security access and storage allocation, whereas IaaS grants the leasing
organization more granular control over security, configuration, access, and equipment
(Miller, 2008). The amount of control and the level of flexibility available to the cloud
service subscriber have led to a categorization of cloud services as either public or
private (Bittman, 2010). Public cloud services entail a dependency on a pay-as-you-go,
provider-controlled framework, such as SaaS, IaaS, and PaaS. A strategy of
outsourcing IT systems without ownership applies to public clouds. Public clouds enable
organizations to leverage the competency, support, and maintenance of seasoned IT
providers at a reasonable financial cost but at the expense of marginalized control. In
contrast, private clouds represent a strategy of leveraging external IT resources without
sacrificing control. For example, an international organization that builds a cloud-like
infrastructure in-house but limits its use and tenancy to geographically dispersed
subsidiaries is regarded as providing a private cloud service (Bittman, 2010).
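
The differing levels of subscriber control can be summarized in a few lines; the groupings below are a paraphrase of the discussion above rather than a formal taxonomy.

    # What the subscriber typically controls under each cloud service model.
    subscriber_controls = {
        "SaaS": ["user accounts", "application settings"],
        "PaaS": ["security access", "storage allocation", "deployed applications"],
        "IaaS": ["security", "configuration", "access", "virtual equipment"],
    }
    for model, controls in subscriber_controls.items():
        print(model, "->", ", ".join(controls))
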
Web 2.0 is another cloud-computing concept parallel in context to SaaS. It entails
a departure from the original web environment comprising static content and search
capability to a rich environment characterized by social networking, web-based
applications, blogging, collaboration tools, and video and file sharing. Web 2.0 is
sometimes misused as a synonym for cloud computing; however, as Miller (2008)
notes, both terms describe the same concept but from different perspectives. Cloud
computing refers to how computers operate in a collaborative computing environment,
whereas Web 2.0 describes how individuals leverage this environment to collaborate.
Facebook is an example of a Web 2.0 platform that allows users to collaborate in a
social networking environment. Yet cloud computing is the root technology that enables
the millions of Facebook users to engage in Web 2.0 activities reliably and securely.
Both PaaS and IaaS are considered among the many components of next-generation
Web 2.0 computing.

On-demand computing, a throwback practice from the earlier days of timeshare
computing, is similar to cloud computing but usually involves only a few computers. In
contrast, cloud computing involves massive computing grids interconnected as a single
cloud (Miller, 2008).

Cloud-based computing is often viewed as just another form of outsourcing or
simply an expansion on the concept of networked computers. But the similarities end
when one considers accessibility and the sheer scale of cloud computing (Miller, 2008).
In a conventional sense, access to files in an outsourcing arrangement is restricted to
the outsourcing organization’s network. In contrast, cloud computing allows anyone with
an Internet connection to access cloud-based resources. Outsourcing may involve an
organization’s simply hiring an unaffiliated individual or another organization to perform
a specific task, whereas cloud computing involves using “rented” applications and
processing resources owned by an external entity to perform computing tasks. Unlike
networked computers, cloud computing involves a much greater magnitude of
computers, organizations, and networks.

Sensing a shift in demand from packaged software to SaaS-based applications,
key computing industry organizations are focusing their growth strategies on SaaS and
the underlying cloud-computing platform. Stakeholders, including Google, Microsoft,
IBM, and other major IT companies, are investing heavily in data centers, massive
server farms in remote warehouses with high-speed fiber communications and
professional IT staff to manage operations. These centers can be found anywhere on
the globe where an adequate power supply exists. Data centers are essentially the
heart of cloud-based computing and the underlying infrastructure of SaaS. Data center
owners, seeking to reduce the storage costs and significant power consumption of
these centers while increasing their processing capabilities, have invested heavily in
“green” or inexpensive energy sources and adopted an innovative technology
called “virtualization” (Weiss, 2007). Microsoft has unveiled its 500,000-square-foot data
center in Illinois, with high-end servers placed in portable containers (“Where the Cloud
Meets the Ground,” 2008). The portable data center concept enables the cloud
infrastructure to be scaled rapidly and easily to meet unexpected sharp increases in
demand. Virtualization has enabled cloud operators to reduce power consumption by
packing multiple “virtual” servers into a single high-end server. SaaS providers and
users alike benefit from the innovation and cost reductions derived from the push by
cloud operators to render the cloud-hosting infrastructure ever more efficient,
globalized, flexible, reliable, and secure.

Disadvantages of SaaS

Despite the promising future of SaaS, the technology has some significant
downsides that, if left unaddressed, could negate the excitement among potential
organizational adopters. These disadvantages include the need for reliable Internet
access, the fact that SaaS may be slower than in-house systems, limited feature
offerings for some functions, and the potential for breaches of security (Miller, 2008).
Other SaaS shortcomings that have been suggested by IT industry insiders include the
following (Pring, Desisto, & Bona, 2007):

• The commoditized offerings of SaaS eliminate any advantages to adopting
companies over competitors who also have the option of adopting similar SaaS
offerings.
• SaaS engagements typically entail template contracts or SLAs with little or no
room for negotiation or customization.
• SaaS engagements may entail hidden costs that emerge unexpectedly or
additional costs for add-on features, such as customization, integration, storage,
and phone support.
• SaaS leads to marginalization of internal IT staff, who may not have been
involved in the adoption decision-making process yet are held accountable for
adoption problems.
• A different kind of vendor oversight is required to manage the relationship
between the SaaS vendor and the organization.
• Security weaknesses can pose a viable risk to SaaS adoption.
• SaaS may present higher TCO in the long run if the provider increases
subscription fees when an initial contract expires.

A recent eWeek survey of 252 chief information officers (CIOs) from a variety of
organizations illustrates the empirical impact of SaaS risks on organizations that have
already adopted SaaS applications (eWeek, 2008). According to the survey, the
principal negative experience undergone by nearly half of the organizations represented
was that Internet downtime rendered SaaS applications inaccessible to the
organizations’ staff and reduced productivity. The second most common negative
experience was the realization that SaaS applications were neither as customizable as
the organizations had expected nor as customizable as conventional software. Other
significant risks experienced by the
survey participants in smaller numbers included high long-term costs, interface and
feature changes without prior notification, and increased information insecurity.

Research Purpose

At some point in the near future, nearly all business managers will be forced to
consider adopting SaaS as an IT solution because of the obvious cost and convenience
benefits that come with the technology. However, the typical decision process used by
management considers only the attractive incentives of using SaaS. The risks of this
new technology are largely ignored or not evaluated properly by IT decision-makers.

Conducting a risk assessment during the SaaS evaluation phase of the adoption
process is critical to ensuring information system sustainability and reducing uncertainty.
Risk assessment is widely used as a management tool; the process involves scanning
the environment, testing, or reviewing information sources to establish trustworthiness,
uncover threats, and inform the decision-making process. A key component of
organizational risk management, risk assessment includes such diverse areas as
commercial and political analysis prior to international investment, business continuity
planning, and evaluation of supply chain disruption vulnerabilities (Khattab, Aldehayyat,
& Stein, 2010; Lamarre & Pergler, 2010).

Despite its widespread use in other management areas, risk assessment is
seldom applied to software selection. Nevertheless, the SaaS model presents a host of
unique risks in entrusting sensitive organizational information and critical functionality to
an external entity. Currently, no concise methodology exists for selection and integration
of SaaS applications or for addressing the unique concerns of SaaS. This research
aims to determine an optimal and rational methodology for selection and integration of
SaaS applications that managers can leverage to reduce adoption risks and improve
adoption success. It is important to note that a SaaS selection process should consider
not only the inherent risks in this change initiative but also the value of the selection to
the organization. The ultimate goal in SaaS selection is to develop a process and a set
of selection criteria that minimize risk while maximizing value to the organization.

Management can turn to a variety of IS selection methods that may aid in the
SaaS selection process, but these methods are suited for adopting specific IS solutions
and may not address some of the unique requirements of a SaaS system. For example,
the business readiness rating (BRR), an open-source software (OSS) evaluation
method, proposes rating categories for OSS, including functionality, operational
characteristics, support, documentation, development process, and software attributes
(Wasserman, Murugan, & Chan, 2006). However, these categories exclude some
critical SaaS risk concerns, such as security, business continuity, and confidentiality.
Lacking a framework for SaaS evaluation, IT decision-makers are resorting to informal
measures in their evaluation of SaaS, measures that may not provide a comprehensive
picture of the risks, costs, benefits, and pitfalls of SaaS adoption. Eagerness to realize
the perceived value and necessity of SaaS may be the reason for this informality in
SaaS selection among organizations. Table 1.1 highlights a few of the most popular
evaluation practices used by IT decision-makers when considering SaaS adoption,
including speaking to current users, scanning trade publications and web sites,
checking references, attending seminars and webinars, and using search engines
(eWeek, 2008). The intended outcome of the current research is to develop a tool that
enables managers to more formally and reliably analyze the risks of available SaaS
choices before deciding on the best option.

Table 1.1
Results from Survey of Current SaaS Selection Methods (n = 252)
Selection Method                                                              % of Respondents

Spoke with users at other companies 51%

Read trade publications/Web sites/newsletters 49%

Checked references from vendors 48%

Attended online/offline events/seminars about product category 43%

Did general web searches via search engines 43%

Read reports from or spoke with analysts 41%

Spoke with independent consultants 35%

Ran a financial check with a credit service 19%

Checked blogs/social networks 17%

Obtained opinions from nonusers at other companies 13%

Other 3%

(Source: eWeek, 2008.)

Research Questions

As SaaS applications gain widespread adoption across all types of organizations,
including risk-averse government entities, organizations must learn to look beyond the
impressive sales pitches by SaaS providers, the potential financial and non-financial
benefits, the provider references, the appealing sales webinars, and the articles in IT
trade journals espousing the merits of cloud computing and view SaaS adoption as an
engaging decision-making process vital to sustaining and improving organizational
operations. Given that SaaS is still an emerging technology, no proven methodology
exists for selecting the right options. Existing cloud evaluation guidelines and
frameworks are too broad in scope and advocate for evaluating cloud risk factors that
are sometimes irrelevant to SaaS selection. Legacy software selection methods
targeted at COTS selection and other methods targeted at OSS selection fall short by
omitting some steps that are unique to the SaaS selection process, including conducting
risk assessment and considering integration requirements.

The inherent risks and ownership model of SaaS lead this research to raise
several necessary pre-adoption questions concerning the overall viability of the SaaS
paradigm for supporting mission-critical organizational IT functions. These questions
include the following:

1. What are the legitimate risk factors in SaaS adoption?
2. What influence does risk awareness have on the success or failure of SaaS
adoption within an organization?
3. How can the decision-making process for adopting SaaS technology help
mitigate the risks associated with this technology?

This research intends to examine decision-making theories and risk assessment
methods, evaluate software selection practices that precede successful adoption, and
conduct an empirical survey to determine answers to these questions.

Organization of This Research

Having established a clear understanding of SaaS itself and its implications to
business IT, the remainder of this dissertation focuses on developing a rational model
for leveraging risk relevance and assessment as the predominant decision-making
framework for SaaS adoption. Chapter 2 reviews existing decision-making and risk
assessment literature to support the argument for a rational risk-based approach to
SaaS selection. Chapter 2 also discusses conventional software and cloud-specific
selection approaches and presents a synthesized conceptual model and hypothesis for
this research that accommodates the unique characteristics of SaaS. Chapter 3
presents the methodology used in the research and gives an overview of the data
collection approach. Chapter 4 delves into the crux of the research with a
comprehensive analysis of the data collected and a determination of whether or not the
empirical facts support the underlying hypothesis. The final chapter explores the
implications of the findings on management decision-making as they pertain to online
software adoption, discusses limitations of this research, and highlights potential areas
for further study on this topic.

Chapter 2

Concept Development and Hypothesis

Introduction

SaaS selection is essentially a decision process with the aim of maximizing the
value of the software solution adopted by the organization. Well-functioning
organizations typically use the decision process in a wide variety of business scenarios
to analyze and determine the most suitable solution from available options. Regardless
of whether the decision scenario involves developing a new product, selecting a new
supplier, or expanding internationally, the internal decision process is leveraged to
ensure consistency and to realize the organizational goal of maximizing the value
derived from expenditures. The importance of consistency and completeness in the
decision-making process is supported by empirical research confirming that a prudent
decision process reduces irrational and erratic decision-making and improves outcomes
(Aven & Korte, 2003; Busenitz & Barney, 1997; Tasa & Whyte, 2005).

Rational decision-making embodies rules and habits that reduce complexity in


the decision-making process (Gilboa, 2009). Personal decision-making by an individual
can involve irrational and emotive elements, but in organizational decision-making,
ignoring the need for a guiding process, careful analysis of facts, and risk consideration
can have dire results and expose organizations to financial and legal consequences. If
an organization’s decision-making process lacks formality, predefined criteria, measures
for evaluating opportunities and options, and clearly defined outcomes, then the
likelihood of a successful outcome that maximizes shareholder or stakeholder value is
low. The difference between good decisions and poor decisions in business settings
can be found in the rationality of the methods used by the decision-makers based on
available information (Janus, 1989). A decision’s outcome in most business scenarios is
only as beneficial to the affected entity as the efficiency of the decision-making process
(Aven & Korte, 2003). This assertion is especially relevant in today’s complex and
dynamic business environment, in which decision-makers are inundated with
information from a wide range of sources. These sources vary depending on the type of
decision and the expectations of the decision-makers. Decision-makers must calibrate


decision processes with formal policies and guidelines to steer decision-making. Astute
business decision-makers typically leverage skill sets and decision-making tools to
optimize their decision processes and confirm a degree of control over outcomes
(D’Andrea, 2006; Nguyen, Marmier, & Gourc, 2010).

A vital component of any decision process is the information that serves as input
to the process. Information brings rationality to the decision process and allows for a
decision that takes relevant factors into account. Information can also positively and
negatively influence cost, success, timeliness, and risk outcomes (Zeng, An, & Smith,
2007). Reaching an informed decision after establishing a concise goal requires
concerted effort in gathering and processing information and selecting an action option
for application in a systemic manner. Figure 2.1 depicts a rational decision process that
is applicable to most decisions made in efficiently functioning organizations. A key
benefit of a rational decision-making process is that it enables replication of the process
to derive the same outcome by a different decision-maker intending to audit the
credibility of the original decision.

Research indicates that information gathering is essential to decision-making in


enabling the decision-maker to suspend preconceived beliefs and learn about the
environment encompassing the decision domain in an unbiased manner (Gilboa, 2009;
Janus, 1989). Integral to the information-gathering phase of the decision process is the
collection of decision-relevant information. A gathering process involving probing and
asking questions produces more accurate information (Zellman, Kaye-Blake, & Abell,
2010). Van Ginkel and Van Knippenberg (2008) use an analogy of jurors collecting
evidence to emphasize the significance of decision-relevant information in decision-
making. Not unlike jurors faced with a life-altering decision scenario, the business
decision-maker can optimize decision processes by gathering all relevant and available
information pertaining to the decision domain. Information as input into decision-making
is considered relevant if it is pertinent in context and content to the decision domain
(Janus, 1989). A decision process without thorough consideration of relevant and
available information can result in a low-value decision. Conversely, a process involving
information from multiple relevant perspectives contributes to an informed decision (Hall


& Davis, 2007).

Systematic collection of decision-relevant information serves as ideal input into


the analysis and processing phase of the decision-making process. Information
processing in decision-making entails organizational decision-makers actively engaged
in judging, reviewing, and choosing among the variety of decision models. Adequate
judgment in a decision situation is a function of management experience and skill attained
in prior decision scenarios. Nevertheless, limitations on available information, cognitive
limits of the decision-makers, and time constraints can prevail as bounded rationality
during the decision process and limit the effectiveness of the decision (Simon, 1972).

Figure 2.1. Rational decision process.

Having gathered adequate information, the next challenge for the decision-maker
is processing that information in a formalized manner using one or more of the
axiomatic decision models available. Normative decision literature presents a host of
decision models to choose from in processing decision-relevant information. The
underlying objective of normative decision models is to provide pragmatic methods that
allow decision-makers to make rational decisions based on a reality that must conform
to the model (Gilboa, 2009; Peterson, 2009). Essentially, normative models tell
decision-makers how to make decisions using available information.

Among the popular axiomatic decision models used for processing decision-
relevant information are the decision matrix, Bayes’s theorem, and game theory. The
common thread among these classical models is the concept of acts, states, and
outcomes (Gilboa, 2009). Acts are actions or behavior undertaken by the decision-
maker or a third party over whom the decision-maker has direct control. The decision
matrix in Table 2.1 shows an example of the acts in a software selection decision
process as buying, building, or—taking a more recent approach—outsourcing the
software to a SaaS provider. Non-action, that is, not buying the software, is considered
an equally significant act in the decision matrix. In contrast to acts, states are
conditional elements of the decision matrix over which the decision-makers cannot exert
any degree of control. States in a similar software decision process would include
functional aspects of the software, cost, and—in the case of outsourced software—risks
associated with adopting the software. The final component of the normative decision
model is the rational outcome of each intersecting act and state. Outcomes are a
consequence of the actions of the decision-maker given the prevailing conditions. The
outcome stated in the decision matrix is primarily based on a combination of the
expectation of the decision-maker, the nature of the state, and a probability factor
relating to the chances that the expected outcome will be realized. Outcomes can be
expressed in descriptive form or quantitatively.

Referring to the software decision matrix example in Table 2.1, buying software
package A will cost the organization $10,000 (the lowest purchase cost outcome), but
the software has only 50% of the desired functionality. The decision-maker has rated
the probability that security and other pertinent risks will impede the software buying
decision as low. The other factor used to express outcome in the decision model is the
probability that an outcome will occur. Probability may be expressed as a best guess
estimate of the likelihood of an outcome or calculated with an empirical approach using
the frequency of past observations of outcomes given similar conditions and actions
(Jeffrey, 1983). Outcome probability may be derived either objectively or subjectively.
The decision-maker relies on personal intuition, experience, and judgment in subjective
probability statements, whereas an objective approach relies on mathematical estimates
of probability based on the observed frequency of prior experiences.

In software selection, the notion of probability is expressed as a weighted factor


that qualifies each outcome’s value based on internal experience or the external
aggregated prior experience of other organizations (Hollander, 2000). The probability-
weighted factor in software selection thus expresses the decision-maker’s estimate of
the likelihood that a given outcome will occur. This expression is distinct from those in
normative decision models used in purely financial scenarios, in which each monetary
outcome is assigned a factor based on the actual probability that the outcome will occur,
not the significance of the outcome to the decision-maker. Table 2.2 shows weighted
factors assigned to the software selection decision matrix in Table 2.1. Here, the
decision-makers have determined that weighted factors of 87% for functionality, 75% for
cost, and 92% for risks are ideal measures of the significance of each state type. These
factors can be interpreted as follows: The organization’s prior experience in software
acquisition determines that 87% of acquired software meets all functional requirements,
75% remains within estimated cost targets, and the presumed risk applies in 92% of
acquisition situations. The use of a decision matrix to process relevant information
objectifies the decision process and allows the decision-maker to determine optimal
outcomes based on desired levels of cost, benefits, and risk.

Table 2.1
Software Selection Decision Matrix

                              Required Functionality   Cost         Risk
Buy Off-the-shelf Software    50%                      $10K         Low
Build Custom Software         100%                     $1 million   Low
Adopt SaaS                    80%                      $35/month    High

Table 2.2
Software Selection Decision Matrix with Probability-Weighted Factors

                              Required
                              Functionality (87%)   Cost (75%)   Risk (92%)
Buy Off-the-shelf Software    50%                   $10K         Low
Build Custom Software         100%                  $1 million   Low
Adopt SaaS                    80%                   $35/month    High
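
For illustration, the weighted evaluation implied by Table 2.2 can be reduced to a short
computation. The following sketch is a minimal, hypothetical example (the normalized
0-to-1 scores are a conversion assumed here for demonstration, not values prescribed
by the matrix): each act receives a composite value equal to the weighted sum of its
state scores.

    # Minimal sketch of a probability-weighted decision matrix evaluation.
    # The numeric scores are hypothetical conversions of Table 2.2's entries
    # (e.g., "Low" risk becomes a high risk score of 0.9, a $1 million build
    # cost becomes a low cost score of 0.1).

    weights = {"functionality": 0.87, "cost": 0.75, "risk": 0.92}

    # Each act maps to normalized scores (0 = worst, 1 = best) per state.
    acts = {
        "Buy off-the-shelf": {"functionality": 0.50, "cost": 0.80, "risk": 0.90},
        "Build custom":      {"functionality": 1.00, "cost": 0.10, "risk": 0.90},
        "Adopt SaaS":        {"functionality": 0.80, "cost": 0.95, "risk": 0.40},
    }

    def composite_value(scores, weights):
        """Weighted sum of state scores for one act."""
        return sum(weights[state] * value for state, value in scores.items())

    for act, scores in acts.items():
        print(f"{act}: {composite_value(scores, weights):.2f}")

The act with the highest composite value is the candidate outcome, subject to the
decision-maker's judgment about the underlying score conversions.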

The outcomes derived from the normative decision model, given specified
actions, states, and outcomes, are the impetus of normative decision-making in a
business environment. The outcomes allow for objective comparison of the value of
each action/state pairing and provide guidance on appropriate action given the decision
domain. At this point, the decision-maker is capable of selecting the option from among
the derived outcomes that expresses the highest value based on the decision-maker’s
expectations. With the selection of a specific action based on the optimal outcome to
realize the expected business value, the business decision cycle is complete. The focus
then shifts to the requirement on the part of the decision-maker to plan and implement
the action strategy and frequently monitor the application to gauge the effectiveness of
the decision in benefiting the organization. In most major organizational decisions
involving financial, time, and labor asset commitment, the decision process of gathering
and processing relevant information and selecting the optimal outcome using
established rational decision methods enables the organization to maximize the
expected utility of outcomes by reducing the costs and risks associated with unfavorable
outcomes and increasing the benefits associated with the optimal outcome. This well-
tested and axiomatic approach to rational decision-making has evolved to become the
standard in organizational settings (Peterson, 2009).

To understand the foundational assumptions underlying standard decision


models, it is necessary to examine relevant decision theories. Decision theories fall into
two main subsets with contrasting approaches: classical and heuristic.
Classical decision theory, used in developing the constructs for this research, leverages
several key approaches that all have an underlying aim of maximizing gain or expected
utility of outcomes (Gutnik, Hakimzada, Yoskowitz, & Patel, 2006). Classical theories
assume that the decision-maker will gather and weigh information consistently and
concisely and follow a rational pattern in the decision process. Classical theory also
suggests that the decision-maker will stop gathering information at some point and
commit to the option presenting the best value (Zellman, Kaye-Blake, & Abell, 2010).

Given that the ultimate goal of business decision-making in classical decision


theory is to maximize expected utility, a question arises regarding the contextual
meaning of this concept. “Utility,” as used in “maximizing expected utility,” stems from
the concept of logical positivism (Cummings, 1998; Gilboa, 2009). Logical positivism is
a branch of epistemology that argues that evidence attained through empirical
observations is the basis for understanding and rational thought. Hence, utility, from a
logical positivist perspective, implies that the decision-maker must consider directly
observable outcomes, rather than hunches or unsubstantiated estimates, in the decision
process. Expected utility as the decision objective suggests that the decision-maker has
a predetermined threshold of benefit, cost, and risk that should be used to gauge
outcomes. The expected utility of a specific action/state pairing in the decision domain
can be estimated as the product of action/state value and the probability of occurrence.
The onus of maximizing expected utility orients the decision-maker toward deriving as
much value from the decision process as possible given the information and parameters
of the decision domain. Classical decision theory emphasizes this maximizing approach
grounded in empirical knowledge to instill discipline and rationality to the decision
process.
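
Stated explicitly, the estimate described above takes the familiar expected-value form (a
standard formulation, written here in the plain notation used throughout this chapter):

    EU(a) = P(Y1) × U(a, Y1) + P(Y2) × U(a, Y2) + ... + P(Yn) × U(a, Yn)

where a is the act under consideration, Y1 through Yn are the possible states, P(Yi) is
the probability assigned to state Yi, and U(a, Yi) is the utility value of the act/state
pairing.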

Bayes’s theorem is one of the more widely used classical theories in business
decision-making (Aven & Korte, 2003; Damodaran, 2008; Reneke, 2009). The core
value of Bayes’s theorem lies in determining the probability that an expected utility will
be realized. This probability is factored with the utility value to derive the outcomes for a
given action/state pairing in the decision matrix. The theorem, illustrated in Figure 2.2,
states that the conditional probability of utility X given state Y is the product of the
probability of the utility (X) and the conditional probability of the state (Y) given the
utility (X), divided by the probability of the state (Y). For example, the software selection
decision matrix discussed earlier includes the probability of the risk state only as a
weighted factor. If, instead, a specific risk state (e.g., security) were used to evaluate
the software options, the conditional probability of a given utility under that risk state
would depend on the historically observed frequency of security incidents among other
adopters who bought, built, or leased the software and on the probability that those
incidents posed a palpable risk to adopters. To fully realize the value of a decision
analysis based on Bayes’s theorem, the decision-maker must adequately assign
probabilities and utilities to outcomes.

P(X|Y) = [P(Y|X) × P(X)] / P(Y)

where:

P(X) = probability of utility X (outcome) based on prior observations

P(Y) = probability of state Y based on prior observations

P(X|Y) = conditional probability of utility X (outcome) given the occurrence of state Y

P(Y|X) = likelihood: conditional probability of state Y given utility X

Figure 2.2. Bayes’s theorem formula.
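
A brief numeric illustration may help; the figures are invented for demonstration and are
not drawn from this research’s data. Suppose prior observations show that 30% of
comparable adoptions yielded the desired utility, P(X) = 0.30; that a security incident
occurred in 20% of all adoptions, P(Y) = 0.20; and that among adoptions yielding the
desired utility, an incident occurred only 10% of the time, P(Y|X) = 0.10. Then

    P(X|Y) = [P(Y|X) × P(X)] / P(Y) = (0.10 × 0.30) / 0.20 = 0.15

so observing a security incident revises the estimated probability of realizing the utility
downward from 30% to 15%.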

The literature on decision theory highlights a clear distinction between subjective


and objective probability to caution the decision-maker to use prudent methods to derive
probability in decision scenarios. Subjective decision theory argues that subjective
probability is derived from the decision-maker’s intuition, experience, or mental state
and not grounded in mathematical analysis (Jeffrey, 2004; Aven, 2003). Subjective
probability varies from one individual to the next, depending on his or her emotive state
and perspective. In contrast, objective probability, used in quantitative decision analysis,
is derived from an unbiased analysis of the frequency of prior observation (Gilboa,
2009). A rational decision-maker should objectify assumptions as much as possible to
improve the value of the decision. Nevertheless, the decision-maker should be aware
that the probability derived from aggregation of prior observances cannot predict with
absolute certainty the probability of future outcomes (Hurst & Siegel, 1956). The
implication of a degree of probabilistic uncertainty in the decision process is an
acknowledgment that a rational process for reaching a decision is not a reassurance
that the anticipated outcome will result. As is the case in weather prediction, random
occurrences outside the norm of prior observation or other unknown factors can
influence outcomes in an unexpected manner that may be favorable or unfavorable to


the decision-maker. Essentially, the world is not always deterministic.

Heuristic decision-making is rooted in behavioral decision theories, such as


Simon’s (1972) bounded rationality thesis, which proposes that decision-making is not a
purely rational undertaking. Unlike normative decision processes, the aim of heuristic
decision-making is not to maximize expected utility. Rather, it is to reach a satisfactory
decision based on the available information. Psychological factors of the decision-
maker’s mental state, biases, personal motives, and intuition come into play in a
heuristic approach to decision-making. Simple rules and leveraging of judgment
competency are the basics of heuristics (Zellman, Kaye-Blake, & Abell, 2010). The
heuristics approach includes rules for searching, stopping, and deciding. Instead of
extensive data collection, analysis, and computation, search rules guide the hunt for
information among options. Stop rules end the search process after reasonable options
are found, and decision rules are exercised to make the decision based on the research
and a limited set of criteria. Proponents of heuristic decision-making assert that purely
rational decision-making is impractical for several reasons, including the relatively high
cost of acquiring and processing ample information, the inherent limits in decision-
makers’ ability to process large amounts of information, the tendency of decision-
makers to adopt different decision processes, and differences in decision-makers’
values (Busenitz & Barney, 1997).

The scale and complexity of large organizations necessitate policies, guidelines,


and structure that support simplifying decision processes for managers, using
consistent and concise rational decision approaches. Heuristic decision-making
approaches may be used in small and mid-sized businesses (SMBs) as a result of time
and resource limitations (Busenitz & Barney, 1997). Decision-makers in SMB settings
may also be better off adopting a heuristic approach because of the rapidly changing
nature of their business environment and their familiarity with factors that encompass
the decision domain, as well as their proximity to various facets of the business. Most
SMBs eventually adopt the rational decision practices used by large organizations, but
the heuristic approach, based largely on hunches in lieu of substantiated probability,
presents an attractive alternative to rational decision-making. SaaS selection thus
reduces to a choice between a normative and a heuristic approach. The theoretical
framework for this research
takes the stance that use of the normative decision process in SaaS selection is the
more effective approach because it supports mission-critical business functions and
takes into account the risk concerns associated with this outsourced, cloud-based
technology.

Basic Theoretical Framework

Ample value can be found in both normative and heuristic decision-making


approaches, and justification can be made for adopting either approach. But the
process of selecting mission-critical software to run a business is notably more effective
if some formal information gathering and analysis is used to determine whether the
software system selected is a match for the organization (Starinsky, 2003). Many
software implementation projects have failed because decision-makers neglected to use
proper selection techniques, and such neglect may later manifest itself as over-budget
or dysfunctional software implementation (Liang-Chuan & Chorng-Shyong, 2007). The
commoditized nature of SaaS, with multiple clients sharing the same system resources,
necessitates a simplified methodology for evaluating and selecting this type of software
system. A standardized SaaS selection methodology based on normative rational
decision approaches reduces the risks associated with entrusting a third-party SaaS
vendor with the security, privacy, and accessibility of organizational information and
information processing.

Both Hollander (2000) and Starinsky (2003) emphasize that software acquisition
can affect the business and that the risk consequences of this acquisition must be
considered in the decision process by seeking out more information. Liang-Chuan and
Chorng-Shyong (2007) extend this observation further in their IS investment framework,
noting that although most IT acquisitions are risky endeavors, most of the available
frameworks for analyzing IT investments fail to consider the risk implications. Specific to
SaaS, Heiser (2010a) recommends a risk assessment preceding SaaS adoption as
useful in gauging a provider’s ability to meet confidentiality, integrity, and availability
requirements, particularly for organizations with sensitive data. A valid framework for
assessing software risk must be premised on, and expand, conventional software
selection and risk analysis practices.

The traditional risk analysis framework assumes that risk is measurable as the
probability that a failure event will occur. The first step in a conventional risk framework
involves identifying suitable or relevant risk elements and subsequently observing
occurrences of the risk element in the system to derive a predictive probability of future
impact of the risk element on the system. Implied in the risk analysis process is the
need to determine the relevance of potential risk elements to the system’s adequate
functioning or, in the case of IT acquisition, the relevance of the risk to the acquisition’s
outcome. Aven (2003) argues that the best method for determining the relevance of a
specific risk from among the pool of possible risks is to determine whether a correlation
exists between individual risk elements and the failure of the system. Engineers and
scientists typically rely on reliability analyses to determine the risk of each component’s
contributing to system failure (Corran & Witt, 1982). Koller (2005) notes that without this
concerted effort to define and accredit all relevant risks, it is difficult to determine risk
impact and probability in a normative decision approach.

The current research draws from a number of frameworks in the realms of


business management and IT, including the following:

• Liang-Chuan and Chorng-Shyong’s (2007) IT acquisition recommendations for


taking into account risk considerations

• Aven’s (2003) risk analysis framework, which recommends establishing risk


relevance

• Heiser’s (2010a) cloud risk factors

• The European Network and Information Security Agency’s (ENISA, 2009) cloud-
computing risk likelihood and impact risk assessment framework

• The Cloud Security Alliance’s (CSA, 2010) Consensus Assessment Initiative


Questionnaire (CAIQ) framework

• The FedRAMP (2010) cloud security assessment and authorization framework

• Risk concerns revealed in other notable cloud-related literature.

This research is premised on a synthesized theoretical framework that assumes that


risk relevance in SaaS acquisition is established by correlating successful
implementations of SaaS and awareness of specific risk elements during the decision
process (Bajaj et al., 2008; Bangert, 2008; Bielski, 2008; Blokdijk, 2008; Davies, 2008;
Orr, 2006; Pring, Desisto, & Bona, 2007). An additional assumption in this framework is
that decision-makers in successful adoption situations took risk mitigation measures
during the decision process after gaining awareness of risks, whereas decision-makers
in unsuccessful SaaS adoption situations were unaware of risks, which contributed to
acquisition failure. Figure 2.3 shows the process undertaken in this research to
determine the relevancy of the risk sub-dimension issues of security, business
continuity, and integration to successful SaaS adoption. The output from this relevancy
determination process served as input to a synthesized SaaS Cloud Risk Assessment
(S-CRA) framework. Figure 2.4 shows the final S-CRA framework, which requires using
the risk assessment questionnaire to evaluate and rate each potential SaaS solution
provider in the relevant risk areas of security, business continuity, and integration in
order to derive a composite risk score for each provider for use as the discriminating
selection factor.

The initial relevancy determination to validate the S-CRA framework conforms to


existing theories regarding the importance of ensuring risk significance to organizational
functions before integration into a risk analysis and risk management framework. Koller
(2005) states that defining relevant risks is a major step in the right direction toward
effective risk assessment. As noted in the decision theory literature, this relevancy can
only be determined through empirical observations of prior, similar outcomes that
validate each risk’s influence on the expected outcome. The relevant risk dimensions can
serve as input into a SaaS decision process in the form of a detailed questionnaire
concerning risk elements. Management decision-makers can use the questionnaire
responses from each SaaS provider in an objective evaluation exercise by applying
certainty rating and/or probability weight elements to select a provider based on optimal
utility, expressed as an overall rating score for each provider. Table 2.3 summarizes the
key constructs of the proposed S-CRA model, including the risk dimension and sub-
dimensions, software satisfaction, risk certainty, relevant risks, risk rating, weight factor,
and overall risk rating score.

Figure 2.3. Proposed risk relevancy determination process.

[Figure: Flow diagram. The adopting organization applies the risk assessment
questionnaire to each candidate SaaS provider (#1 through #x), performing a risk
assessment of the security, business continuity, and integration dimensions; applies a
rating and/or weight to each sub-dimension item; and selects the provider based on its
overall rating score in addressing the risk dimensions.]

Figure 2.4. SaaS Cloud Risk Assessment (S-CRA) framework.



Table 2.3
S-CRA: Summary of Key Conceptual Model Components
Construct Description

Risk Dimension Risk factors that negatively influence the SaaS adoption experience.

Risk Sub-dimension Subfactors in risk dimensions that describe specific and unique vulnerabilities.

Software Satisfaction The adopting organization’s level of approval regarding its experience with the
SaaS application.

Risk Certainty The adopting organization’s level of awareness of certain risks during the
evaluation phase.

Relevant Risks Risk dimensions deemed to be particularly relevant to SaaS adoption.

Risk Rating A subjective certainty rating used by the adopting organization to evaluate the
providers’ ability to address relevant risks.

Weight Factor An optional subjective percentage weight factor applied to each risk sub-dimension
item based on the significance of the item to the adopting organization.

Overall Rating Score Composite rating and/or weight score for each relevant risk sub-dimension item for
each provider considered. Used by the adopting organization as a final measure of
outcome to distinguish SaaS provider options.
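
To make the arithmetic behind the Overall Rating Score construct concrete, the
following is a minimal sketch; the dimension items, ratings, and weights are hypothetical
placeholders rather than values prescribed by the S-CRA framework.

    # Minimal sketch of an S-CRA overall rating score computation.
    # Item names, certainty ratings, and weight factors are hypothetical.

    # Certainty ratings (here on a 1-10 scale) assigned by the adopting
    # organization to one provider's questionnaire responses, grouped by
    # relevant risk dimension.
    ratings = {
        "security":            {"data encryption": 8, "access control": 7},
        "business continuity": {"backup policy": 9, "uptime guarantee": 6},
        "integration":         {"export APIs": 5},
    }

    # Optional weight factors expressing each item's significance.
    weights = {
        "data encryption": 1.0, "access control": 0.8,
        "backup policy": 0.9, "uptime guarantee": 0.7,
        "export APIs": 0.5,
    }

    def overall_rating_score(ratings, weights):
        """Composite rating/weight score across all sub-dimension items."""
        return sum(
            weights.get(item, 1.0) * rating
            for dimension in ratings.values()
            for item, rating in dimension.items()
        )

    print(overall_rating_score(ratings, weights))

Repeating the computation for each candidate provider yields the discriminating
selection factor described above: the provider with the strongest composite score
presents the most favorable risk profile.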

Conventional Software Selection Methods

Before delving into a discussion of the S-CRA framework proposed for this
research, it may be helpful to take a closer look at a few prominent software selection
methods and evaluate their shortcomings when applied to a SaaS adoption process.

Analytical network process and analytical hierarchy process.

The analytical hierarchy process (AHP) and analytical network process (ANP)
are both decision-making frameworks developed by Saaty (1980) to solve complex
decision problems involving multiple criteria, objectives, and goals. The frameworks
have been widely applied to decision-making in a variety of situations, ranging from


developing a supply chain management strategy to evaluating ERP software
alternatives (Meade & Sarkis, 1998; Razmi, Sangari, & Ghodsi, 2009). AHP and ANP
are distinguished by the existence of dependency among the criteria used in the
evaluation process. AHP relies on a simple hierarchy decision structure with no
dependency among criteria, whereas the more complex ANP framework relies on a
clustering dependency among criteria. Applying the AHP decision framework involves
structuring a decision hierarchically into smaller and independent sublevels to facilitate
analysis and understanding. At the top level is a statement of the decision goal. The
next level indicates criteria and priorities for each criterion stated as a subjective weight
factor or a score ranging from 1 to 9 by the decision-maker. The final level indicates the
alternatives and a priority or score for each alternative and allows the decision-maker to
easily select the best alternative based on its overall combined priority or score in all
criteria. ANP is used in more complex optimization decision scenarios, in which
dependency exists among various criteria, but the same methodology is applicable (Lee
& Kim, 2000). As a result of the clustering of interdependent criteria in a network format,
solving decision problems using ANP is markedly more complex and requires the use of
mathematical super-matrix formulas to derive an optimal alternative.

AHP and ANP are readily applied to many software selection processes and
naturally complement any process involving selection from among competing
alternatives. Ayagi and Ozdemir (2007), in their study on effective ERP software
selection, propose an approach that leverages the ANP/AHP framework. According to
the study, the first step when using AHP in an ERP software selection process is to
express requirements as a statement of goals. The next step involves outlining selection
dimensions, which might include system cost, vendor support, flexibility, functionality,
reliability, ease of use, and technical advantage. In the final step, the decision-maker
makes a pairwise comparison by ranking, on a scale of 1 to 9, his or her preference with
regard to the alternatives for each dimension. For example, the decision-maker would
rank alternative A versus alternative B on the 1-to-9 rating scale and do the same for
each alternative pairing. The AHP and ANP frameworks provide guidelines for
calculating the final priority score for each alternative and deriving the desirable
solution.
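
As an illustration of the final calculation step, the sketch below derives AHP priorities
for three hypothetical alternatives using the common column-normalization
approximation of the principal eigenvector; the pairwise judgments on the 1-to-9 scale
are invented for demonstration.

    # Minimal AHP sketch: derive priorities from a pairwise comparison
    # matrix via column normalization (an approximation of the principal
    # eigenvector method). Judgments are hypothetical.

    # pairwise[i][j] = how strongly alternative i is preferred over j.
    pairwise = [
        [1.0, 3.0, 5.0],   # A vs. A, A vs. B, A vs. C
        [1/3, 1.0, 2.0],   # B vs. A, B vs. B, B vs. C
        [1/5, 1/2, 1.0],   # C vs. A, C vs. B, C vs. C
    ]

    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]

    # Normalize each column, then average across each row.
    priorities = [
        sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
        for i in range(n)
    ]

    for name, priority in zip("ABC", priorities):
        print(f"Alternative {name}: priority {priority:.3f}")

Here alternative A would emerge with the highest priority (roughly 0.65), reflecting the
strong preferences expressed in the first row of judgments.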

AHP and ANP are both proven and effective frameworks for software and
technology selection in general, but they have a major shortcoming that makes them
impractical as SaaS decision tools: Both frameworks require the decision-maker to
perceive each criterion as competing against another and to make a judgment call of
one priority over another. However, in SaaS selection, the risk dimensions of security,
business continuity, and integration are usually regarded as equally significant (Heiser,
2010a). Forcing the decision-maker to determine which risk is more important than
another reduces the effectiveness of the SaaS evaluation process.

COTS software evaluation methods.

Hollander’s (2000) R2ISC (“risk-squared”) is a comprehensive evaluation
methodology premised on a life-cycle approach to COTS software selection. Risk-
squared gauges software adoptability using four factors: the software’s ability to meet
established requirements; its ability to be customized and easily implemented; the
ability of the vendor to support the software; and the total ownership cost of the
software. This methodology provides an in-depth list of standardized sub-
criteria for each factor. The risk-squared process is initiated by compiling requirements
in a request for proposal (RFP) and inviting potential vendors to respond to the RFP
with formal proposals in a format dictated by the RFP. The risk-squared method then
requires the decision-maker to rate, on a 10-point scale, each software vendor’s ability
to meet each requirement as described in the RFP and provide a priority rating for each
requirement based on business needs. Like the AHP and ANP frameworks, risk-
squared uses the individual ratings as input into a formula that ultimately derives a final
and distinct risk rating for each software option.

Despite its detailed guidelines aimed at reducing mistakes in software selection,


risk-squared is more suited for legacy COTS software selection and for organizations
that can incur the time and cost commitment to implement this approach. The
formalized decision-making required, along with some elements of the risk-squared


selection guidelines, can be adopted for a SaaS selection undertaking, but the approach
does not consider security requirements or overall outsourced software risks. Moreover,
the acquisition paradigm of modern cloud-based software, mainly SaaS, which involves
renting the software and using it via a web browser instead of buying, owning, and using
it on an organization’s internal IT infrastructure, does not necessitate a prolonged
evaluation approach, as suggested by the risk-squared method.

Procurement-oriented requirements engineering (PORE) is another selection


method applicable to COTS software selection, proposed in research by Maiden and
Ncube (1998). The PORE method draws from a variety of conventional software
selection techniques and decision approaches, including the feature analysis technique
for scoring the ability of evaluated systems to meet requirements and both ANP and
AHP selection methods (Kitchenham, Pickard, Linkman, & Jones, 2005). PORE
introduces three distinct templates for evaluating software systems under different
circumstances. Template 1 is helpful in compiling initial customer requirements and
filtering product alternatives based on information provided by suppliers. Template 2
provides the same aid as template 1 but suggests using test cases for individual
requirements based on supplier demonstrations. Template 3 is similar to the other two
but provides acquisition guidance based on information from customer-initiated product
research. The core value of PORE is its requirements-centric approach and the use of
requirement goals to direct the process of selecting and rejecting candidate systems.
The requirement goals include essential, non-essential, complex, and user
requirements. PORE is a four-step process, beginning with acquiring information about
requirements and complementary products. The last three PORE process steps mirror
those of conventional COTS software selection processes, as well as normative
decision processes, These steps include analyzing the information attained, using
ranking and scoring techniques to evaluate compliance with requirements, and
iteratively rejecting and narrowing down the list of compliant systems until a selection is
determined.

Although the PORE method is primarily targeted at COTS software selection, it is


generic enough to be adopted for a SaaS selection process for outsourced software.
The requirements goals could be supplemented with additional risk requirement goals.
Furthermore, PORE templates 1 and 2 could be useful in helping the SaaS decision-
maker organize his or her analysis of information from SaaS providers and develop test
system demonstrations to evaluate the software from a hands-on perspective. Although
PORE shares many similarities with the S-CRA framework proposed for this research, the
key differences between the two models are S-CRA’s risk-centrism versus PORE’s
requirements-centrism and the S-CRA model’s insistence on gauging the ongoing
adoption risk resulting from the longer-term relationship between the adopting
organization and the SaaS vendor, compared to the one-time purchase relationship
between the COTS vendor and the acquiring organization.

A final COTS selection methodology that merits review is the software package
implementation methodology (SPIM) introduced by Starinsky (2003). SPIM is a more
recent software selection methodology and presents a holistic approach that
incorporates many of the ideas introduced in earlier COTS selection methodologies,
including PORE and risk-squared. Like the risk-squared software selection method,
SPIM also suggests a formal initial solicitation process to communicate requirements
and attain feedback from potential software vendors. SPIM provides succinct guidelines
for devising a comprehensive RFP requirements document and soliciting responses to
the RFP from potential vendors. To evaluate proposals, SPIM advises using the
standard COTS evaluation process of scoring, rating, and weighting each proposal’s
response to each requirement. SPIM’s distinction lies in its unique system for rating and
weighting responses to requirements, allowing for more objectivity in the evaluation.
Responses are rated depending on whether the vendor provides a feature requirement
as standard (10), optional (5), custom (1), or not at all (0) in its software. The weight for
each feature is predetermined as essential (10), desirable (5), and future (1). Additional
unique elements of SPIM include recommendations for checking vendor references,
scoring vendor demonstrations in a manner similar to the requirements scoring, visiting
vendor sites, and negotiating requirements. The proposed software with the highest
composite score in all requirements becomes the optimal choice.
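
A minimal sketch of SPIM’s rating-and-weighting arithmetic follows; the feature
requirements and vendor responses are hypothetical, but the numeric scales mirror the
standard/optional/custom and essential/desirable/future values described above.

    # Minimal sketch of SPIM requirement scoring. Requirements and vendor
    # responses are hypothetical; scale values follow the SPIM scheme.

    PROVISION = {"standard": 10, "optional": 5, "custom": 1, "none": 0}
    WEIGHT = {"essential": 10, "desirable": 5, "future": 1}

    # Each requirement: (predetermined weight class, vendor's provision).
    vendor_responses = {
        "audit trail":    ("essential", "standard"),
        "mobile client":  ("desirable", "optional"),
        "multi-currency": ("future", "none"),
    }

    score = sum(
        WEIGHT[weight_class] * PROVISION[provision]
        for weight_class, provision in vendor_responses.values()
    )
    print(score)  # the vendor with the highest composite score is preferred

Whether weights multiply ratings or are combined some other way is an implementation
choice; the multiplication shown here is one straightforward reading of the scheme.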

SPIM and the risk-squared software selection method, both of which involve a
customer-driven, push-selection approach by soliciting interested vendors using an
RFP, stand in sharp contrast to the provider-driven, pull-selection approach natural to
SaaS selection. SaaS selection seldom involves using an RFP to solicit providers.
Instead, the provider’s web site typically provides a wealth of information on the
software, as well as opportunities for self-demonstrating the SaaS provision through a
trial offering. In SaaS selection, the onus is on the deciding organization to extract as
much information from providers as necessary to evaluate their offerings using non-RFP
information-gathering techniques that fast-track the SaaS evaluation and adoption
process (Longwood, 2009). The distinctions of push versus pull information attainment
between COTS selection methods and SaaS selection and the need for an expedited
SaaS selection process are worth highlighting as key to the argument that COTS
selection methods cannot be applied “as is” to SaaS selection. SPIM’s additional
recommendations to test before adoption, check references, make site visits, and
negotiate final terms with the SaaS provider are all prudent measures that can serve as
peripheral undertakings in the SaaS decision process.

At the core of the SaaS decision is the issue of risk in outsourcing critical
software functionality to systems owned by an independent provider and entrusting that
party to ensure the confidentiality, integrity, and accessibility of information generated
by the organization using the provider’s software. This issue of outsourcing risk is not
prevalent in COTS software selection because software function and information
ownership are both retained within the IS infrastructure of the acquiring organization. In
COTS selection, requirements are the focus because external risk becomes negligible
after acquisition. The software operates within the organization’s infrastructure; hence,
risks associated with confidentiality, integrity, and accessibility are not in the scope of
the initial evaluation process. In SaaS selection, external risks are always prevalent and
become integrated with requirements, necessitating a risk-requirements approach for
evaluation. As Figure 2.5 shows, the S-CRA framework can and does incorporate some
elements from COTS selection, such as the concept of systemic collection and analysis
of pertinent information (originally incorporated into COTS selection from normative
decision theories), but S-CRA is framed from a risk perspective to accommodate the
long-term prevalence of vulnerabilities in SaaS that could have negative consequences


for the adopting organization.

[Figure: side-by-side panels contrasting the COTS selection process with the SaaS
selection process.]

Figure 2.5. Comparative analysis of COTS and SaaS selection processes.

Decision-making, Risk, and SaaS

The consensus in decision theory literature is that risk analysis helps reduce
uncertainty in rational decision-making, and as long as uncertainty exists in decision-
making, evaluating risk will be a necessary component of the decision-making process.
Hofstede (1991) defines uncertainty as the perception of threat and anxiety due to
unknown, unstructured, or ambiguous risk factors. In contrast, Gopinath and Nyer
(2009) describe certainty as an individual’s level of confidence in the outcome of his or
her evaluation or actions. Shrivastava (1987) describes risk as a complex concept with
varying perceptions across social science and scientific disciplines. Psychologists
perceive risk as the potential for human exposure to mental or physical injury or
accident. Financial analysts associate uncertainty with the possibility of gain or loss and
use financial models to estimate that uncertainty as a probability. Scientific disciplines,
such as engineering and biology, regard the risk associated with uncertainty as a
physical element requiring quantitative analysis (Aven, 2003). To address uncertainty in
business, risk analysis and assessment are typically adopted and organized under the
umbrella of risk management (Koller, 2005). Probability and impact estimates


determined from the risk analysis provide valuable input into the decision process.

The definition of risk varies in literature on decision theory, but a common thread
is the expression of risk as a probability that an event will occur (Aven, 2003;
Damodaran, 2008; Holford, 2009). Some acknowledged attributes of risk include the
following: (1) It is a relevant event that can be positive and negative in consequence
(Koller, 2005); (2) it is an expression of the exposure to chance or probability of loss
(Aven, 2003); and (3) it is a quantifiable uncertainty (Damodaran, 2008). In terms of the
relationship between uncertainty and risk, uncertainty introduces elements of risk into
the decision process but does not contribute to the impact or probability of risk. For
SaaS acquisition, the uncertainty surrounding an organization’s adoption of outsourced
software technology and the potential disruptive impact that this action could have
necessitate consideration of risk elements in the software acquisition process. Reducing
uncertainty is only possible by attaining and processing more information about the
decision domain in a risk analysis initiative. Gilboa (2009) concurs in asserting that
rational decision-making requires gathering relevant information to reduce uncertainty.
Nevertheless, as noted by Holford (2009), businesses often resort to functional
decision-making without taking a closer look at the ambiguities of the decision-making
situation that may lead to increased risks and consequences.

The effort to determine risk elements relies on a straightforward risk analysis


process. This process begins with the identification of risk elements by assembling
subject-matter experts to discuss risks and uncertainties relevant to the decision domain
and respond to predetermined questions. This Delphi approach in building consensus
from experienced individuals can also be accomplished by attaining information from
credible secondary sources that describe potential risk elements relevant to the decision
domain (Nguyen, Marmier, & Gourc, 2010). Risk elements can take a variety of forms,
but efforts to organize risk elements into common themes have led to the following
categories: environmental, legal, technical, commercial, and financial (Koller, 2005;
Liang-Chuan & Chorng-Shyong, 2008). IT risk elements tend to fall into technical and
operational risk categories (Liang-Chuan & Chorng-Shyong, 2008). The next step in risk
analysis involves observing or studying empirical incidents of an event affecting the


system and recording these occurrences. The final step in risk analysis entails deriving
a subjective or objective probability of the risk event’s occurrence based on
computational analysis.
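
In its simplest objective form, this final step reduces to a frequency estimate:

    P(risk event) ≈ k / n

where k is the number of observed occurrences of the event and n is the total number
of comparable observations. Subjectivity enters when the decision-maker tempers this
frequency with judgment about how representative the prior observations are of the
current decision domain.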

The resulting risk probability estimate serves as a predictive factor of outcomes


in a rational decision process. This estimate allows the decision-maker to adhere to the
rational decision rule of maximizing expected utility by selecting outcomes with the
highest or lowest probability of occurrence, depending on the decision scenario. The
overall risk analysis exercise can also help the decision-maker by increasing awareness
of problem areas, the probability of success, consequences, and time and cost
constraints (Aven & Korte, 2003). The risk analysis is only part of the equation for
evaluating IT investment; this evaluation also requires consideration of cost-benefit
implications and competent managerial judgment (Aven, 2003). Cost-benefit analysis
looks at the financial implications of each decision option weighted against the benefits.
Managerial judgment entails scrutinizing available information based on relevant prior
experience, preestablished criteria, and organizational decision policies to derive the
rational decision that maximizes expected utility.

The decision process for IT acquisition, which includes SaaS adoption as a


subcategory, requires input regarding costs, benefits, and uncertainties associated with
the new technology. Nevertheless, risk assessment is typically not a predominant
concern in conventional software selection. As discussed earlier, COTS software
selection models incorporate formal methods of matching requirements with vendor
offerings, but costs and benefits remain the other key decision drivers. The adopting
organization retains control over access privileges and overall security in COTS
acquisition because the software runs within the organizational IT infrastructure. Risk is
a predominant issue in SaaS selection because in this scenario, potentially sensitive
data are entrusted to the provider, and SaaS customers need reassurances that their
data are secure and accessible while residing in the provider’s IS infrastructure. The
fact that the SaaS application resides on and is accessed via the open Internet creates
a plethora of security and continuity risks, including susceptibility to malware, hacking,
and other forms of intrusion, that could expose confidential data to outsiders or shut
down access to the SaaS web site. Another risk concern associated with SaaS adoption
is the multi-tenant structure of SaaS providers’ data centers. Multi-tenancy is a cloud-
computing service trait, meaning that all customers share resources from the same
pool. In a SaaS environment, the web site being accessed, the server used to host the
software, and the underlying data storage are shared by multiple customers, thus
creating the risk of an unauthorized co-tenant or unknown party gaining access to
another SaaS customer’s private data.

What are the specific SaaS adoption risks, and how can these be addressed in a
risk assessment framework? Risk assessment is necessary in the SaaS acquisition
process primarily to address the uncertainties associated with relying on another party’s
platform, IT systems, and internal policies and procedures for specific computing
functions. Although research shows that organizations conduct risk assessment for
COTS software acquisition, the methods are mainly used to gauge financial outcomes,
such as cost reductions or revenue generation, or are integrated in an audit process of
accounting and financial systems (Liang-Chuan & Chorng-Shyong, 2008; Rechtman,
2009). Partial answers to the questions of identifying and addressing prevailing SaaS
risks can be found in some existing proposals for assessing the risk of cloud computing
in general.

To properly vet cloud-computing risk factors of complexity, extensibility, and


accessibility, Heiser (2010a) suggests that organizations focus on assessing how well
the technical and functional components of a SaaS solution meet their needs. Risk
assessment can also help the organization determine whether or not a specific provider
is capable of meeting the organization’s security, integration, compliance, and
contractual requirements. Heiser (2010b) also identified the three most prevalent SaaS
risk assessment practices widely used by a variety of organizations:

• Sending questionnaires to providers that include both company-specific


questions and questions based on published industry standards, such as ISO
27000. About 55% of organizations use this method.

• Sending representatives to inspect providers’ facilities to determine whether


adequate physical and virtual security access controls are implemented to
protect the hardware and systems that comprise the SaaS infrastructure.
• Requesting standard certification documents from providers, including SAS 70
and ISO 27000. Both certifications are based on independent audits of providers’
financial viability and security practices.

ENISA’s (2009) framework for assessing cloud-computing risk presents an in-


depth—albeit specific to security—risk assessment model for evaluating cloud-based
services. The ENISA framework is derived from a survey of a panel of cloud-computing
experts, who provided insight into prevailing cloud risks, and an evaluation of cloud-
computing literature. The framework organizes cloud risks into three categories:
political, technical, and legal. Risk subcategories include loss of governance, lock-in,
compliance, multi-tenancy, termination or provider failure, provider acquisition, and
supply chain failure. A predetermined impact level of low, medium, or high, as well as
the probability of impact on a similar scale, is given for each subcategory or risk. The
framework also provides a detailed list of potential vulnerabilities applicable to each
subcategory of risk. For example, both the impact and probability of compliance risk in
the political risk category are rated as high, based on research noting the consequences
of vulnerabilities associated with non-compliance. Compliance vulnerabilities involving
lack of certification and lack of audit accreditation by the cloud service provider indicate
that the provider is not trustworthy and could expose the adopting organization to a
variety of security risks. ENISA (2009) recommends using the framework’s list of risk
categories, subcategories, and respective vulnerabilities to compile a questionnaire that
can serve as a tool for assessing the risk inherent in the services offered by competing
cloud service providers.
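
As a rough illustration of how such impact and likelihood ratings can be reduced to
comparable numbers, the sketch below maps the ordinal levels onto a numeric scale;
this mapping is an assumption of the sketch, not part of the ENISA framework.

    # Illustrative only: maps ENISA-style low/medium/high ratings to
    # numbers and combines them into one risk score per subcategory.

    LEVEL = {"low": 1, "medium": 2, "high": 3}

    # Each subcategory: (likelihood rating, impact rating). Hypothetical.
    subcategories = {
        "loss of governance":   ("high", "high"),
        "lock-in":              ("medium", "high"),
        "supply chain failure": ("low", "medium"),
    }

    for name, (likelihood, impact) in subcategories.items():
        print(f"{name}: risk score {LEVEL[likelihood] * LEVEL[impact]}")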

FedRAMP (2010), the framework of the federal Chief Information Officers (CIO)
Council for security assessment and authorization for cloud computing, is aimed at
standardizing evaluation of cloud services before adoption by government agencies.
FedRAMP is premised on the assumption that cloud-computing acquisition deserves
distinction as a risk-based decision in comparison to conventional IT acquisition, which
is considered a technology-based decision. FedRAMP is actually a composite of three


frameworks: Special Publication 800-53 of the National Institute of Standards and
Technology (NIST, 2010b), which outlines security controls for federal government IT
systems; the Federal Information Security Management Act (FISMA) IT security
monitoring requirements; and NIST’s (2010a) IT risk management framework outlined in
Special Publication 800-37. Like ENISA (2009), the FedRAMP framework is focused on
the security implications of adopting cloud services, but it also provides guidelines for
continually monitoring cloud service provision and launching a comprehensive risk
management initiative to identify the vulnerabilities and impact of using each cloud
service provider under consideration by government entities.

The monitoring and risk management components of FedRAMP (2010)


designate the cloud provider as the responsible party in a government/provider
arrangement. As such, the provider is required to submit frequent reports to the
subscribing agency on its monitoring and risk management activities. The requirements
baseline proposed in the FedRAMP (2010) framework is unique in providing a detailed
list of security requirements to which a cloud service provider must adhere to gain
eligibility as a servicer of government entities. As in the ENISA framework, the
requirements are organized into 17 control categories and numerous subcategories
based on NIST’s Special Publication 800-53 (2010b). The evaluating agency can use
the cloud-related control requirements as a guide for determining whether or not the
provider meets federally mandated IT security standards in such areas as access
control, system backup, incident handling, system maintenance, physical access,
transmission integrity, and spam protection. For example, the access control baseline
(AC-19) for mobile devices requires that the provider “define inspection and preventive
measures” for accommodating mobile devices, such as cell phones (FedRAMP, 2010).

The Cloud Security Alliance (CSA, 2010), a consortium of cloud service


providers, introduced a security risk assessment questionnaire in 2010 aimed at
assisting cloud customers in evaluating the security risks inherent in various flavors of
cloud service offerings, including SaaS, PaaS, and IaaS. The questionnaire, called the
Consensus Assessments Initiative Questionnaire (CAIQ), includes 100 questions aimed
at assessing vendor compliance in specific cloud risk factors and subfactors. Each
question requires a simple yes or no response from the provider and gauges whether or
not specific security risk factors are mitigated by the provider or whether the provider
can attest to measures taken to address specific risks by providing credible
documentation to the customer conducting the assessment.
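
Scoring such a questionnaire is straightforward; the sketch below, using invented
question identifiers and answers, simply tallies affirmative responses as a fraction of
the questions asked.

    # Illustrative tally of yes/no questionnaire responses. Question IDs
    # and answers are invented for demonstration.

    responses = {"CO-01": "yes", "CO-02": "yes", "DG-03": "no", "IS-07": "yes"}

    yes_count = sum(1 for answer in responses.values() if answer == "yes")
    print(f"{yes_count}/{len(responses)} controls attested "
          f"({100 * yes_count / len(responses):.0f}%)")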

Figure 2.6 shows a comparison between Heiser’s (2010a) cloud risk factors and
the risk factors proposed in the ENISA (2009), FedRAMP (2010), and CSA (2010) risk
assessment frameworks. FedRAMP (2010) has the most extensive list of risk factors of
the three comparison frameworks and includes unique control factors to address such
risks as IS documentation and mobile device support that are not covered in-depth in
the other cloud risk assessment frameworks. Figure 2.7 shows a sampling of actual and
potential questions derived from Heiser’s (2010a) cloud risk factors and from the three
assessment frameworks that cloud customers could use to evaluate provider risk.
Figure 2.6. Comparison of cloud risk factors, categories, and controls in Heiser’s cloud
risk factors and the ENISA, CSA, and FedRAMP cloud risk assessment frameworks.
Sample Cloud Risk Assessment Questions from Heiser’s Cloud Risk Factors
• Does the provider have a software notification policy in place for alerting or notifying customers
about software upgrades and potential downtime implications? (Extensibility Risk)
• Does the provider have security controls in place to monitor and log access to customer data?
(Accessibility Risk)
Sample Cloud Risk Assessment Questions from ENISA’s Framework
• Does the provider store your data in a known jurisdiction? (Legal Risk)
• Is your data isolated from other customers’ data? (Legal Risk)
• Does the provider have measures in place to prevent a malicious attack? (Technical Risk)
Sample Cloud Risk Assessment Questions from CSA’s Framework
• Do you provide tenants with documentation describing your Information Security Management
Program (ISMP)? (Information Security Risk)
• Are any of your data centers located in places that have a high probability/occurrence of high-
impact environmental risks (floods, tornadoes, earthquakes, hurricanes, etc.)?
(Resiliency/Business Continuity Risk)
Sample Cloud Risk Assessment Questions from FedRAMP’s Framework
• Does the provider have a plan in place for dealing with denial-of-service attacks? (System and
Communication Risk)
• Does the provider have controls in place to restrict physical access to the facilities where
information systems and client data reside? (Physical and Environmental Protection Risk)
Figure 2.7. Sample risk assessment questions based on Heiser’s cloud risk factors and
the ENISA, CSA, and FedRAMP cloud risk assessment frameworks.
Heiser's (2010b) survey sheds light on risk assessment methods used
specifically by organizations to assess SaaS risk, but the research also acknowledges
the risk relevance downside of using questionnaires. Heiser’s (2010a) three risk factors
of complexity, extensibility, and accessibility are limited in depth, scope, and practicality
for SaaS risk assessment. According to Heiser (2010a), complexity in cloud computing is
described as the extent of software code used by the provider in the SaaS application.
Not only is this risk factor difficult to measure, but it may not necessarily be relevant to
determining how reliable or functional a SaaS application is in meeting users’ risk
demands. Heiser (2010a) describes accessibility as vulnerability to outside attacks
given the web exposure of SaaS applications, but he provides only cursory suggestions
regarding the type of information to be gathered from providers to gauge this risk factor.
Heiser’s (2010a) extensibility risk factor relates to the upgrade risk associated with
cloud computing. The suggestion here is that software upgrades by the provider can
potentially cause security vulnerabilities. As with the complexity risk, decision-makers
are given limited information regarding the relevance of this risk factor to SaaS adoption
or succinct guidelines for assessing this risk.
The security risk categories and subcategories in ENISA’s (2009) cloud risk
assessment framework are broad and inclusive but not specific to SaaS. The framework
requires SaaS adopters to determine the relevance of each vulnerability and rephrase
these identified vulnerabilities into self-derived questions specific to their needs. This
approach imposes a burden on SaaS decision-makers to develop a tool of their own.
The challenge in leveraging ENISA’s risk categories to serve as input to a self-derived
SaaS risk assessment questionnaire is in determining which of the multitude of risk
subcategories and vulnerabilities identified in the framework are practical for evaluating
SaaS adoption.
FedRAMP's (2010) framework is fairly comprehensive and specifically tailored to
the stringent information security demands of sensitive government agencies, such as
the Department of Defense and the Department of Justice. Because the framework is
meant as an encompassing risk assessment standard for all federal agencies, the
requirements are quite stringent. The shortcomings of FedRAMP’s (2010) framework in
terms of its portability to SaaS risk evaluation are similar to those of the other
frameworks discussed. The broad scope of FedRAMP’s controls, the insistence that
providers adhere to federal monitoring and risk assessment requirements, and the
regularly scheduled federal cloud service reauthorization that providers must undergo
would be considered excessive by private-sector organizations that demand expedited
cloud evaluation and have fewer resources to commit to an in-depth cloud security risk
inquiry.
CSA’s (2010) CAIQ is concise and useful to the adopter in overall cloud risk
assessment, but not all the questions are relevant to SaaS adoption. CAIQ goes further
in providing questions pertinent to cloud computing, but as with ENISA's (2009)
framework, the SaaS decision-maker must use judgment in deciding which questions
merit inclusion in a SaaS risk assessment questionnaire to evaluate providers.
Despite the shortcomings in Heiser's (2010a) risk factors and ENISA's (2009),
FedRAMP’s (2010), and CSA’s (2010) frameworks, these approaches offer a good
starting point for understanding the collective risk concerns prevalent among private-
and public-sector cloud adopters. The factors and frameworks have a common element
of emphasizing the need to establish trust between the adopter and the cloud service
provider. The factors and frameworks also underscore the fact that differences in
perception remain regarding what risk factors are relevant to cloud service adoption. For
the SaaS adopter, the question of what factors, categories, or controls to leverage in
developing a list of SaaS risk assessment requirements or creating a questionnaire
specifically geared toward evaluating SaaS providers remains unanswered.
Nevertheless, these risk factors and frameworks can serve as a viable starting point to
developing a SaaS-specific risk assessment framework. To further rationalize the SaaS
adoption process, streamline SaaS provider evaluation, and reduce business risks, a
determination must be made as to which risk factors proposed in these various
frameworks are unique to SaaS adoption. The conceptual model of this research is
intended to do just that. The aim is to determine which specific risk factors are relevant
to software success and can facilitate an expedited evaluation and decision-making
process. An empirical analysis of prior SaaS adoption successes and the role of
suggested risk factors is necessary to achieve this objective.
Gauging Successful Software
Successful software adoption is another key component used in the conceptual
model of this research. Several factors that contribute to successful software adoption
are noted in the literature. The prevalence of clear requirements criteria that allow the
decision-maker to discriminate among various cloud products is one factor (Maiden &
Ncube, 1998; Ellis & Banerjea, 2007). A second is the degree to which the software
contributes to organizational productivity and profitability (D’Amico, 2005), and a third is
the users’ overall satisfaction with the system. Several studies on software acquisition
also show a strong linkage between successful software adoption and the level of
satisfaction experienced by the adopter (DeLone & McLean, 2003; McKinney, Yoon, &
Zahedi, 2002; Olson & Ives, 1981; Staples, Wong, & Seddon, 2002; Torkzadeh & Doll,
1999). Oliver & Swan (1989) define satisfaction as an emotional and evaluative
judgment regarding the ability of a product or service to meet perceived expectations
and desires. Satisfaction represents an emotive measure of the level of usability and
comfort experienced by the adopter.
DeLone and McLean’s (2003) IS success model lists the user satisfaction
dimension as one determinant of success in IT system adoption. The IS success model
proposes that system, information, and service quality can negatively or positively affect
user satisfaction. Although this model is derived from a qualitative study of a sample of
organizations adopting IS, it sheds light on a seemingly subjective yet essential element
of IT system adoption—user satisfaction. McKinney, Yoon, and Zahedi (2002) conclude
that user satisfaction is an appropriate measure of adoption success because it is a
consequence of various positive and negative experiences the adopter has had with an
IT system. Sang-Yong, Hee-Woong, and Gupta's (2009) research on open source software
(OSS) success empirically validates DeLone and McLean's (2003) IS success model; this
research, based on statistical analysis of data collected in a survey, determined that software
quality and available support service contribute to more than 50% of the variance in
user satisfaction. These findings on software success validate the construct for this
research, that is, that user satisfaction is a major and measurable contributing factor to
software and overall IT system adoption success.
Conceptual Model and Hypotheses
The first step in developing this risk-based SaaS evaluation model was to
determine the pertinent SaaS risk areas. As an underlying construct in its conceptual
model, this research adopted propositions made by Aven (2003) and Koller (2005):
Although risk analysis is indicative of a rational decision process, filtering the pool of
potential risks down to ones relevant to the decision domain is necessary to streamline
and expedite the decision process. Without this filtering exercise, risk analysis becomes
an overwhelming and inefficient effort that considers all risks without regard to their
significance to outcomes. The literature also shows that considering risk consequences
in software acquisition can influence the success or failure of the acquisition (Hollander,
2000; Starinsky, 2003; Liang-Chuan & Chorng-Shyong, 2008).
The final three top-level risk dimensions and associated sub-dimensions used in
this research represent a synthesis of cloud service risk factor suggestions from a
variety of sources, including cloud security risk factors noted by Heiser (2010a);
business continuity risk factors suggested in CSA’s (2010) cloud risk assessment
framework; the technical and political cloud risk categories noted in ENISA's framework
(2009); access, maintenance, contingency planning, and system and communication
protection controls and requirements described in the FedRAMP (2010) framework; and
other cloud service risks extracted from the literature. Figure 2.8 shows the three SaaS
risk dimensions of security, business continuity, and integration that form the basis for
SaaS risk relevance determination in this research. These risk dimensions represent the
common and primary risk concerns prevalent in the frameworks discussed and in the
literature on cloud computing. As Figure 2.8 also shows, each SaaS risk dimension
includes distinct sub-dimensions that individually contribute to the prevalence of
vulnerabilities associated with the higher-level risk dimension. Appendix VI provides a
more detailed description of each risk sub-dimension item shown in Figure 2.8.
Figure 2.8. Synthesized SaaS risk dimensions and sub-dimensions.
To gauge the relevance of each risk dimension to successful SaaS adoption, the
conceptual model used in this research was premised on finding an association
between the decision-maker's level of certainty of these risks during the selection
process and the subsequent level of satisfaction after the SaaS solution was adopted
and in use for a reasonable period of time. To validate the conceptual model and the
proposed S-CRA framework introduced in this research, three distinct hypotheses were
proposed, establishing the relevance of each risk dimension to SaaS adoption
satisfaction. Questions synthesized from the literature review and existing frameworks
were used to determine the relevancy of each risk dimension to the satisfaction
construct. If relevancy was established between the key SaaS adoption constructs
based on the quantitative approach used in this research and ensuing data analysis,
then the questions became part of the S-CRA questionnaire-based framework. It must
be noted that the questions associated with each risk sub-dimension were not meant to
elicit yes or no responses. Rather, each question was intended to encourage the SaaS
risk evaluator to reflect on the amount of due diligence exercised in protecting the
adopting organization from potential pitfalls associated with SaaS and cloud services in
general. The following paragraphs describe each risk dimension and the associated risk
relevance hypothesis.
Security risk dimension.
This research defines the security risk dimension as risks related to the SaaS
provider’s ability to ensure data and system confidentiality, integrity, and compliance
and prevent access by unauthorized individuals to sensitive information. This risk
dimension also evaluates the SaaS vendor’s provisions to encrypt communications,
validate credentials, and provide independent accreditations of its operations.
Government agencies may have stricter security requirements than private
organizations because the information generated from some agencies may pertain to
issues of national security. Organizations that are considering SaaS adoption and
engage in a rational decision process entailing gathering information about each
potential provider’s ability to address the security dimension and associated sub-
dimensions will likely experience a higher overall level of satisfaction with the SaaS
adoption. This increased satisfaction and, hence, adoption success are presumed to be
derived from security risk awareness and mitigation measures enacted by the
organization. To establish the relevance of the security risk dimension to SaaS adoption
success, this research proposed the following null and alternate hypotheses:
• Null Security Risk Hypothesis (Hos1): SaaS adoption success (AS) does not
depend on the decision-maker’s level of certainty (C) of the SaaS security (S)
risk dimension as defined by the S-CRA framework.
Hos1: AS ≠ (S)c
• Alternate Security Risk Hypothesis (Has1): SaaS adoption success (AS)
depends on the decision-maker's level of certainty (C) of the SaaS security (S)
risk dimension as defined by the S-CRA framework.
Has1: AS = (S)c
Business continuity risk dimension.
Business continuity as a risk dimension evaluates the flexibility, sustainability,
and quality of application support offered by the SaaS provider, enabling business
functions dependent on the SaaS platform to continue. Response time, recovery time,
and competency of support personnel are a few of the critical factors in evaluating SaaS
support. Other significant continuity factors include scalability, availability, and ability to
customize the SaaS application. The cost and convenience of SaaS often overshadow
the very real continuity risks faced by organizations that adopt this outsourced form of
computing. An organization adopting SaaS for multiple functions is exposed to the risk
of financial loss and operational disruption if any single-provider web-based SaaS
application is inaccessible or if Internet connectivity is down. Adopting organizations can
take steps to address business continuity risks to facilitate successful SaaS adoption,
but the critical first step is to gain awareness of each evaluated provider’s risk footprint
before adoption. In that regard, this research proposed the following null and alternate
hypotheses:
• Null Business Continuity Risk Hypothesis (Hobc1): SaaS adoption success
(AS) does not depend on the decision-maker's level of certainty (C) of the SaaS
business continuity (BC) risk dimension as defined by the S-CRA framework.
Hobc1: AS ≠ (BC)c
• Alternate Business Continuity Risk Hypothesis (Habc1): SaaS adoption
success (AS) depends on the decision-maker’s level of certainty (C) of the SaaS
business continuity (BC) risk dimension as defined by the S-CRA framework.
Habc1: AS = (BC)c
Integration risk dimension.
This risk area concerns the level of integration required for assimilating the
application into the organization's IS. It also covers such operational issues as ease of
use, compatibility, functionality, and reporting. In most situations, using SaaS requires
exchanging internally residing data with the external SaaS application and vice versa. In
a few adoption scenarios, the adopting organization could require a wholesale migration
of data stored and processed internally to the SaaS platform. In either case,
organizations must determine whether the SaaS capabilities allow for single-instance
information migration, frequent migration, or continual synchronization of information
between the provider and the organization. The risks associated with integrating an
adopted SaaS application can affect the organization's data quality and functioning and
increase costs. Appropriate consideration and negotiation with the provider based on
integration requirements can dramatically increase adoption success in terms of
assimilating a SaaS application into the organization's IT infrastructure. Therefore, this
research proposed the following null and alternate hypotheses:
• Null Integration Risk Hypothesis (Hoi1): SaaS adoption success (AS) does not
depend on the decision-maker’s level of certainty (C) of the SaaS integration (I)
risk dimension as defined by the S-CRA framework.
Hoi1: AS ≠ (I)c
• Alternate Integration Risk Hypothesis (Hai1): SaaS adoption success (AS)
depends on the decision-maker's level of certainty (C) of the SaaS integration (I)
risk dimension as defined by the S-CRA framework.
Hai1: AS = (I)c
Chapter 3
Research Method and Data Collection
Target Population
To test the hypotheses underlying the theoretical SaaS risk assessment
framework, an online survey was conducted of a random sample of IT decision-makers
who have adopted SaaS in their business operations. A typical respondent would have
already undergone a formal or informal SaaS evaluation process, selected a solution,
and begun using the online software for organizational functions. IT decision-makers
responsible for making the final software acquisition decision come from all levels of the
organizational hierarchy. Research on IT system acquisition shows that cost and extent
of cross-departmental or cross-functional use determine the level of authority required
for the final decision (Chary, 2007; Yamin & Sinkovics, 2010; Yeo & Ajam, 2010). The
suggestion here is that if the acquisition cost is marginal and usage is limited to a
specific department or a small sub-group in the organization, then the decision process
usually involves a single manager and a technical staff evaluating and selecting from
alternative IT systems. If the scope of the system being acquired has multi-departmental
implications, then group decision-making and upper-level managerial authority are
involved. IT system acquisition authority also varies depending on the size of the
organization. A smaller organization has a flatter decision structure and fewer levels of
authority, thus limiting the IT system evaluation and selection process to upper-level
management or a single decision-maker. Because SaaS adoption entails an ongoing
contractual arrangement with the provider and consideration of the security implications
of entrusting a third party with potentially confidential organizational data, the deciding
authority customarily has managerial oversight and is authorized to establish contracts
with outside parties based on organizational policies.
This research used a simple random probability sampling method to select a
sample from the population of IT decision-makers in private, government, nonprofit, and
academic organizations in the United States. A 2007 economic census indicated that
the number of employer firms in the United States totaled 14.5 million (U.S.
Census Bureau, 2007). A margin-of-error table indicates that a sample size of 384 is
necessary to achieve a 95% level of confidence and ±5% margin of error in
representing the population being studied (Saunders, Lewis, & Thornhill, 2007).
However, a smaller sample size of only 97 participants is necessary to achieve a 90%
level of confidence and ±10% margin of error. This research sought to achieve either a
95% or 90% level of confidence, based on ability to reach potential respondents given
the limited timeframe for conducting the survey and the feasibility of making contact with
respondents. An established rule of thumb is that a sample size of 30 (n = 30) is the
lowest limit for a representative sample that can be used in a statistical analysis to make
inferences about a population (Saunders, Lewis, & Thornhill, 2007).
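As an illustrative check (not part of the research procedure), such margin-of-error figures can be approximated with Cochran's sample-size formula for proportions at maximum variability (p = 0.5); published tables embed their own rounding and assumptions, so their figures can differ slightly from this first-principles sketch, and for a population of 14.5 million firms the finite-population correction is negligible:

from scipy.stats import norm

def cochran_n(confidence, margin, p=0.5):
    # z is the two-tailed critical value for the chosen confidence level.
    z = norm.ppf(1 - (1 - confidence) / 2)
    return (z ** 2) * p * (1 - p) / margin ** 2

print(round(cochran_n(0.95, 0.05)))  # 384 at 95% confidence, +/-5% margin
print(round(cochran_n(0.90, 0.10)))  # widening the margin sharply reduces n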
Research Method
The survey questionnaire was set up and hosted on the SurveyMonkey.com
SaaS-based web site, and an email with a description and link to the survey was
created to send to target respondents. DS3 DataVaulting/Terremark, an established
cloud services company, and Allcovered, Inc., an IT services company, both agreed to
distribute the survey via email to their pools of clients. Their clients consist of a variety
of organization types and IT decision-makers that subscribe to the cloud and IT services
offered by the two firms. Both organizations frequently send customer service surveys
to their respective clients and have developed efficient internal systems for sending
mass survey emails. The companies distributed the survey to a total of 951 customers,
who were given a two-week period to respond. A total of 121 respondents completed
the online SaaS survey questionnaire during the two-week period. After eliminating
several incomplete responses, the remaining 114 total responses—a 12% response
rate—were used for data analysis, to validate the construct, and to test the hypotheses
as they applied to this research.
Research Instrument
The survey instrument was a questionnaire used to determine the relevance of
certain risk dimensions to SaaS adoption success. The three prominent cloud risk
assessment frameworks and other literature on cloud and SaaS risk assessment were
reviewed to determine whether they presented suitable instruments to evaluate risk
relevancy. The ENISA (2009) framework is not a questionnaire per se, but it is a good
reference point for cloud vulnerabilities that merit evaluation in a questionnaire.
FedRAMP’s (2010) framework provides government entities with invaluable cloud
service requirements that could be rephrased as questions in a SaaS evaluation
exercise. CSA’s (2010) framework has a list of questions adequate for inclusion in a risk
relevancy questionnaire for this research. In addition, some direct questions on
evaluating SaaS risks are suggested by several authors in the literature (Marston,
Bandyopadhyay, Zhang, & Ghalsasi, 2011; Paquette, Jaeger, & Wilson, 2010;
Svantesson & Clarke, 2010). These questions, although pertinent, are presented as
best practices suggestions and not as wholly reliable instruments for determining SaaS
adoption success.
Given that no instrument currently exists to gauge the relevancy of certain risks
to SaaS adoption success, a custom questionnaire was developed to complement this
research. The questionnaire is a synthesis of questions directly suggested or implied in
the three frameworks discussed and other risk concerns explored in the literature
review. The unavailability of a proven SaaS risk questionnaire is not a surprise given the
infancy of this emerging technology and the lack of an industry- or private-sector-driven
initiative. Such an initiative would help achieve consensus on SaaS risk concerns and
contribute to the development of a standardized questionnaire for evaluating these
concerns. Further, at the time of this research, credible academic research on cloud
and SaaS computing risk was sparse. No single research report or framework provided
questions specific to SaaS assessment. However, many studies presented opinions
about best practices and provided advice to potential SaaS and cloud adopters about
engaging in a formal investigative inquiry to affirm the capabilities of the SaaS provider
and help reduce business exposure to risk.
Judgment to determine which questions to include in the final questionnaire was
based on the S-CRA conceptual model’s risk dimensions and sub-dimensions
specifically developed for this research. For example, if a question from the various
evaluated frameworks and cloud risk assessment best practices literature appeared
relevant to the data location sub-dimension of the security risk dimension, it was
included as a candidate question for that risk dimension and sub-dimension pairing. The
draft SaaS risk assessment questionnaire included 53 questions grouped into four
categories. General demographic questions were used to derive a demographic profile
of participants and included such topics as the type of organization the respondent was
affiliated with, his or her job function, whether the organization subscribed to SaaS,
whether the respondent was involved in the SaaS selection process, how long the
organization had used SaaS, and the respondent’s overall level of satisfaction with a
particular SaaS application used by his or her organization. Security risk questions
included those relevant to the security risk sub-dimensions and were designed to gauge
the respondent’s level of awareness of various SaaS security risks pertaining to the
provider during the selection process. Questions relating to business continuity and
integration issues associated with SaaS adoption made up the final two categories of
questions included in the SaaS risk assessment questionnaire. With the exception of
the final open-ended question, which asked for descriptive feedback on the
respondent’s experience in using the SaaS application, only closed, multiple-choice
responses were allowed for most questions. The original 53 questions were narrowed
down to 44 multiple-choice questions and 1 open-ended question after a pilot study was
conducted. Thirty-eight of the remaining questions were related to satisfaction or risk
certainty. The final survey questionnaire used for this research is included in Appendix
II.
Questionnaire Scales
A Likert scale was used to gauge the constructs of satisfaction and risk certainty
as described in the conceptual model developed for this research. Several research
studies in marketing and other social science disciplines have proven that satisfaction is
a measurable construct using a Likert-like scale. In his quantitative survey research on
an empirical linkage among satisfaction, expectation, and consumer pre-purchase and
post-purchase experiences, Oliver (1980) devised a 6-point Likert scale to gauge
consumer satisfaction level. Similarly, Spreng and Mackoy (1996) found that a 7-point
strongly agree/strongly disagree Likert scale proved adequate in their research
confirming a relationship between consumer satisfaction and perceived service quality.
For the purpose of this research, a 5-point Likert scale was used to obtain a
respondent’s measure of satisfaction with a specific SaaS adoption experience. Figure
3.1 shows the satisfaction Likert scale used in this research. A “very satisfied” rating
indicates that the SaaS application meets all expectations and was fully adopted with
few or no surprises. A “somewhat satisfied” rating indicates that a few problems arose
during adoption, but these imposed minimal risk to business processes. At the
dissatisfied end of the scale, a “very unsatisfied” rating implies that the adoption was a
complete failure, placing the business process and, possibly, the entire organization in
peril. A “somewhat unsatisfied” rating reveals that the SaaS solution adopted did not
meet most expectations but did allow the business process to function at a minimal
level. Whether the organization continues to use or has disavowed the application is
relevant only in the sense that the level of satisfaction supported the decision to keep or
cancel the subscription arrangement with the SaaS provider.
Very Unsatisfied (4) | Somewhat Unsatisfied (3) | Don't Know (0) | Somewhat Satisfied (2) | Very Satisfied (1)
Figure 3.1. Likert-like SaaS adoption satisfaction rating scale and explanation.
A Likert scale was also developed to measure the uncertainty construct adopted
from normative decision-making theory for this research. The literature review on
decision-making reveals that uncertainty comprises the unknown factors that contribute
directly to risks in decision outcomes (Damodaran, 2008; Gopinath & Nyer, 2009; Koller,
2005). Being cognizant or incognizant of these unknown factors during the SaaS
decision process is the underlying independent construct that forms part of the
argument for the relevancy of some SaaS risk dimensions to successful adoption. To
measure this level of cognizance by the SaaS adopter for each risk sub-dimension’s
question, a 5-point certainty Likert scale was used in the questionnaire. Figure 3.2
shows the certainty scale and varying response options.
Respondents were asked to rate their certainty regarding the information they
had related to each risk question. A “very certain” response indicates that during the
SaaS adoption decision-making process, the respondent had sufficient information
about the risk question and was certain about its implications. In contrast, a “very
uncertain” response implies that the respondent did not have adequate information
about the risk question and its implications. For example, a response of “very certain” to
the business continuity risk dimension and pricing/fees sub-dimension question “How
certain are you that the vendor imposes a penalty for early termination of your
subscription?” would indicate that the respondent was well aware that a penalty existed
or did not exist for early termination of the service during the provider evaluation
process. Although other social science research adopted a broader certainty Likert
scale that included such granular certainty responses as “totally certain” and “extremely
certain” or “not at all confident/extremely confident,” the abbreviated certainty scale
used in this research was necessary to achieve response brevity and quality of analysis
(Campbell, 1990; Walter, Mitchell, & Southwell, 1995; Wilson & Rapee, 2006). The
brevity logic is derived from the rational decision process, which recommends
conciseness in gathering and analyzing decision-relevant information. Furthermore, a
more granular certainty response option is unnecessary given that the respondents, to
some degree, either had information relating to the risk question or had no information
pertaining to a particular risk question.
Very Uncertain (4) | Uncertain (3) | No Answer/Does Not Apply (0) | Certain (2) | Very Certain (1)
Figure 3.2. Likert-like SaaS certainty rating scale and explanation.
Validity and Reliability
To establish the validity and reliability of the questionnaire instrument used in this
research, standard tests for content validity, reliability, and construct validity were
conducted.
Content validity.
Conclusions derived from an ineffective research instrument have the potential to
marginalize a study. Threats posed to the content validity of this research were
addressed through a comprehensive review of the literature on decision theory,
software selection frameworks, and risk assessment. The questionnaire used in this
research was also field-tested on a small sample of the target population of SaaS
adopters and a few cloud technology experts. The eight pilot test respondents were
contacted individually and asked to complete the survey to gauge its validity, length,
and clarity and whether legitimate inferences could be made from the survey results.
Respondents were also asked to provide narrative feedback on their experience in
participating in the pilot and whether or not they had difficulty in recalling their SaaS
adoption experiences, as required by the questionnaire. Most respondents, including
the cloud technology experts, raised concerns about the length and relevancy of several questions.
Based on the feedback provided from the pilot test, the questionnaire was narrowed
from 53 questions to 44. One security dimension question and four business continuity
and integration risk dimension questions were dropped from the final questionnaire after
the pilot test. Several questions were also reworded for clarity, and an average
timeframe of 10 minutes was identified as necessary to complete the survey. During the
process of screening pilot test participants, the eight SaaS adopters and the cloud
experts were asked if they were clients of either of the two cloud providers that
distributed the final survey. All pilot test participants were excluded from the final survey
distribution and, hence, the main data collected and used to validate the research
construct and to test reliability and the stated hypotheses.
Reliability.
To further establish the quality of the data collection instrument used in this
research, a reliability analysis was necessary to determine the reliability of the
instrument in producing consistent results. Reliability analysis is an estimate of internal
consistency and is derived from the proportion of systematic changes in an instrument’s
scale. These systematic changes are found in the correlation between the results
provided during different administrations of the instrument. If correlation is high between
results in different administrations, then the instrument is deemed consistent and
reliable (Trochim, 2006). Reliability analysis also answers questions about the credibility
of the research instrument, including whether the instrument will derive similar results if
repeatedly administered, whether other researchers will make the same conclusions
when using the same instrument, and whether transparency exists in how the results
were interpreted (Saunders, Lewis, & Thornhill, 2007).
Although reliability can only be estimated, not calculated, the two standard
estimates of reliability and, hence, internal consistency are Cronbach alpha and Fornell
and Larcker’s composite reliability (Fornell & Larcker, 1981; Trochim, 2006). Table 3.1
shows the Cronbach alpha coefficient for each of the risk sub-dimension constructs.
Each alpha measure is shown as substantially above the 0.7 benchmark, attesting to
the reliability of the instrument in measuring these constructs (Nunnally, 1978).
Composite factor reliability, which is considered a better measure of the internal
consistency of the scale used to measure a specific factor, is also shown in Table 3.1,
with all values above the minimum value recommended by Fornell and Larcker (1981)
of 0.7. The adequacy of the measures of Cronbach alpha and composite factor
reliability confirm the overall reliability of the questionnaire used in this research.
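For illustration, both estimates can be computed directly from item data and standardized loadings; in the sketch below the item-response matrix and the loadings are hypothetical and do not reproduce the survey data:

import numpy as np

def cronbach_alpha(items):
    """Cronbach alpha for a respondents-by-items matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def composite_reliability(loadings):
    # Fornell & Larcker (1981): (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))
    loadings = np.asarray(loadings, dtype=float)
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())

items = np.array([[4, 4, 3], [2, 3, 2], [4, 3, 4], [1, 2, 1], [3, 3, 3]])
print(cronbach_alpha(items))                  # internal consistency of the scale
print(composite_reliability([0.8, 0.75, 0.9]))  # hypothetical standardized loadings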
Table 3.1
Reliability Measures and Constructs
Risk Dimension/Sub-dimension Constructs | Cronbach Alpha | Composite Factor Reliability | Average Variance Extracted (AVE)
SEC-Access 0.90 0.915 0.862
SEC-Integrity/Confidentiality 0.90 0.794 0.726
SEC-Transmission 0.90 0.803 0.735
SEC-Data Location 0.90 0.808 0.740
SEC-Ownership 0.90 0.858 0.792
SEC-Compliance 0.90 0.853 0.786
BC - Testing 0.90 0.837 0.769
BC - Recovery 0.90 0.905 0.849
BC - Availability 0.90 0.912 0.847
BC - Scalability 0.90 0.828 0.760
BC – Upgrade Notification 0.90 0.872 0.809
BC - Support 0.90 0.861 0.795
BC - Pricing 0.90 0.890 0.830
BC - Termination 0.90 0.808 0.740
BC - Customization 0.90 0.972 0.948
BC - Training 0.90 0.916 0.863
BC - Documentation 0.90 0.845 0.778
BC - Provider Management 0.90 0.775 0.709
I - Usability 0.90 0.854 0.787
I - Compatibility 0.90 0.951 0.913
I - Functionality 0.90 0.912 0.857
Key:
SEC = Security risk dimension
BC = Business continuity risk dimension
I = Integration risk dimension
Construct validity.
Construct validity is used to describe how well hypothetical statements are
supported by the instrument and constructs used in a research study and aims to
determine how well construct measurements complement the proposed theories. For
example, this research proposed a cause-and-effect relationship between security risk
certainty and SaaS adoption success and introduced a satisfaction scale to measure
adoption success and a certainty scale to measure the security risk dimension and sub-
dimension items. The security construct is said to be validated if sufficient data exist to
show that the security risk construct proposed predicates the relationship between
SaaS adoption success and the security risk certainty dimension.
Convergent and discriminant validity are two prominent subtypes of construct
validity assessment. Construct validity is demonstrated by establishing evidence of both
convergent and discriminant validity (Chin, Gopal, & Salisbury, 1997). Convergent
validity indicates evidence of a pattern of correlation between measures of constructs
that are theorized to be related. It shows a convergence of similar measures that make
up a construct. Discriminant validity provides evidence of the exact opposite, that is, that
no pattern of correlation exists between measures of dissimilar constructs with no
theoretical relationship.
Convergent validity is considered sufficient if the factor loading for items is
greater than 0.5 (factor load > 0.5) and each construct has an average variance
extracted (AVE) of at least 0.5 (Wixom & Watson, 2001; Fornell & Larcker, 1981; Kim,
Ferrin, & Rao, 2007). Factor loading represents the extent of the relationship between
each questionnaire item and the primary risk dimension/sub-dimension construct. As
shown in Tables 3.1, 3.2, 3.3, and 3.4, all final factor loadings in this research exceeded
the standard 0.5 factor-loading and AVE thresholds. However, several items were
dropped from the initial analysis because their factor loadings were below the
acceptable threshold. Of the security risk construct items, one of the access sub-
dimension certainty measurement items and one of the segregation sub-dimension
certainty measurement items were dropped because of their respective low loading
factors in measuring security access and data segregation risks. Data recovery and
pricing risk sub-dimension certainty measurement items were removed from the business
continuity construct because of their low factor-loading values. Similarly, the reporting sub-
dimension of the integration risk construct was also deemed an inadequate measure
based on its low loading factor. A full list of component items removed based on
convergent validity analysis is included in Appendix IV. Nevertheless, Table 3.2 shows
that access risk is a valid high-value measure of security risk. Table 3.3 shows that
customization risk is a key construct item of the business continuity risk dimension
construct, and Table 3.4 highlights compatibility risk as a reliable measure of the
integration risk construct. The tables also show that the cumulative percent of variance
explicable by factors with eigenvalues greater than 1.0 is greater than 50% for these
constructs.
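Because AVE depends only on the standardized loadings, the convergent validity threshold can be verified with a one-line computation; as an illustration, the sketch below uses the three access-item loadings reported in Table 3.2:

import numpy as np

def average_variance_extracted(loadings):
    # AVE is the mean of the squared standardized loadings.
    loadings = np.asarray(loadings, dtype=float)
    return float((loadings ** 2).mean())

access_loadings = [0.799, 0.764, 0.670]  # from Table 3.2
ave = average_variance_extracted(access_loadings)
print(round(ave, 3), ave > 0.5)  # 0.557 True (exceeds the 0.5 threshold)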
Table 3.2
Factor Analysis for SaaS Security Risk (n = 114)
Construct | Certainty Measurement Item (Risk Sub-dimension) | Factor Loading | Eigenvalue > 1.0 | % of Explained Variance
Security Risk Dimension (SEC) Background Checks Certainty (Access) 0.799 3.77 34.25
Restricted Access Certainty (Access) 0.764 1.43 13.02
Breach Certainty (Access) 0.670 1.23 11.14
Intrusion Controls Certainty (Integrity/Confidentiality) 0.527
Web Browser Encryption Certainty (Transmission) 0.540
Data in Same Country Certainty (Location) 0.548
Data Ownership Certainty (Ownership) 0.627
SOX/HIPAA Compliance Certainty (Compliance) 0.647
Vendor SAS70 Report Certainty (Compliance) 0.590
Cumulative % of Variance Explained 58.41
Table 3.3
Factor Analysis for SaaS Business Continuity Risk (n = 114)
Construct | Measurement Item (Risk Sub-dimension Construct) | Factor Loading | Eigenvalue > 1.0 | % of Explained Variance
Business Continuity Risk Dimension (BC) Test Before Adoption Certainty (Testing) 0.592 6.38 31.88
Recovery of Lost Archive Data Certainty (Recovery) 0.721 1.56 7.79
Continuity: Uptime/Downtime Performance Certainty (Availability) 0.735 1.35 6.75
Scalability Certainty (Scalability) 0.577 1.10 5.48
Upgrade Notification Certainty (Upgrade) 0.654 1.00 5.02
Phone Support Certainty (Support) 0.573
Email Support Certainty (Support) 0.722
Web Ticket Support Certainty (Support) 0.608
Subscription Fees Certainty (Pricing) 0.712
Payment Terms Certainty (Pricing) 0.667
Early Termination Penalty Certainty (Termination) 0.642
Data Return on Cancellation Certainty (Termination) 0.548
Customization Allowed Certainty (Customization) 0.927
Customization Fee Certainty (Customization) 0.869
Software Training Certainty (Training) 0.711
Software Training Fee Certainty (Training) 0.778
Print/Electronic Documentation Certainty (Documentation) 0.605
Primary Contact Certainty (Provider Management) 0.502
Cumulative % of Variance Explained 56.91
Table 3.4
Factor Analysis for SaaS Integration Risk (n = 114)
Construct | Measurement Item (Risk Sub-dimension) | Factor Loading | Eigenvalue > 1.0 | % of Explained Variance
Integration Risk (I) Functional Requirements Outlined Before Selection Certainty (Functionality) 0.782 3.16 52.61
All Functional Requirements Met Certainty (Functionality) 0.689 1.03 17.02
Exchange Data with Internal Software Certainty (Compatibility) 0.816
Vendor Assistance in Data Transfer Certainty (Compatibility) 0.852
Easy Navigation Certainty (Usability) 0.620
Cumulative % of Variance Explained 69.63
To achieve adequate discriminant validity, the AVE for each construct must be
greater than the variance between each construct and other constructs in the theoretical
framework (Chin, 1998). This can be validated by checking whether the square root of
the construct’s AVE is higher than its correlation with other latent variable constructs.
Tables 3.5 through 3.7 show the correlation matrix for the security, business continuity,
and integration construct items with each AVE’s square root exceeding the off-diagonal
items. These calculations demonstrate adequate discriminant validity.
Table 3.5
Correlation Matrix of Latent Variables (Security Risk Construct)
Columns (left to right): SEC-Access | SEC-Integrity/Confidentiality | SEC-Transmission | SEC-Data Location | SEC-Ownership | SEC-Compliance
SEC-Access 0.89*
SEC-Integrity/Confidentiality 0.382 0.85
SEC-Transmission 0.105 0.321 0.86
SEC-Data Location 0.157 0.276 0.232 0.86
SEC-Ownership 0.071 0.212 0.306 0.486 0.89
SEC-Compliance 0.444 0.469 0.161 0.514 0.522 0.89
*Bolded diagonal values are the square roots of the AVE. Values below each AVE column
are lower than AVE, indicating discriminant validity.
Table 3.6
Correlation Matrix of Latent Variables (Business Continuity Risk Construct)
Columns (left to right): BC - Testing | BC - Recovery | BC - Availability | BC - Scalability | BC - Upgrade | BC - Support | BC - Pricing | BC - Termination | BC - Customization | BC - Training | BC - Documentation | BC - Provider Management
BC - Testing 0.88
BC - Recovery 0.178 0.92
BC - Availability 0.290 0.581 0.92
BC - Scalability 0.364 0.328 0.478 0.87
BC - Upgrade 0.454 0.257 0.429 0.543 0.90
BC - Support 0.859 0.427 0.523 0.857 0.662 0.89
BC - Pricing 0.663 0.172 0.149 0.541 0.800 0.596 0.91
BC - Termination 0.413 0.376 0.379 0.451 0.644 0.422 0.535 0.86
BC - Customization 0.760 0.408 0.651 0.437 0.530 0.631 0.665 0.529 0.97
BC - Training 0.476 0.296 0.364 0.450 0.601 0.369 0.613 0.482 0.537 0.93
BC - Documentation 0.262 0.115 0.121 0.186 0.331 0.205 0.260 0.061 0.212 0.37 0.88
BC - Provider Management 0.419 0.244 0.306 0.306 0.256 0.257 0.185 0.210 0.254 0.191 0.364 0.84
*Bolded diagonal values are the square roots of the AVE. Values below each AVE column are lower than AVE, indicating discriminant
validity.
Table 3.7
Correlation Matrix of Latent Variables (Integration Risk Construct)
Columns (left to right): I - Functionality | I - Compatibility | I - Usability
I - Functionality 0.93
I - Compatibility 0.596 0.96
I - Usability 0.556 0.414 0.89
*Bolded values are the square roots of the AVE. Values
below each AVE column are lower than AVE, indicating
discriminant validity.
Chapter 4
Data Analysis and Results
Introduction
The data resulting from the survey questionnaire were statistically analyzed in
three distinct steps. Descriptive analysis was used to derive a quantitative profile of the
respondents based on the responses provided using the nominal and ordinal
measurement scales. Additional factor analysis was employed to further reduce the
various construct items of the security, business continuity, and integration risk
dimensions for use in testing the three hypotheses. The final analysis involved testing
each of the hypotheses proposed by leveraging inferential statistical tests of two-tailed
significance and Spearman’s rho correlation coefficient.
The data were collected primarily using nominal and ordinal scales. The yes/no
responses were pre-coded with 1 and 2 values. Other nominal scales were used to
extract additional demographic data from respondents. For example, job function was
determined using a 7-option nominal scale with response options ranging from IT executive
to consultant. The ordinal satisfied/dissatisfied scale was pre-coded from 0 through 4,
with 1 representing very satisfied, 4 representing not very satisfied, and 0 representing
not applicable. The ordinal certain/uncertain scale was pre-coded from 0 through 4 in a
manner similar to the satisfied/dissatisfied scale. This pre-coding scheme allowed for
descriptive analysis to depict the responding population and inferential statistical
analysis to support or refute the hypotheses.
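The sketch below illustrates one plausible reading of this pre-coding scheme (the column names are hypothetical, and the satisfaction mapping follows the scale in Figure 3.1):

import pandas as pd

raw = pd.DataFrame({
    "uses_saas": ["Yes", "No", "Yes"],
    "satisfaction": ["Very Satisfied", "Somewhat Unsatisfied", "Somewhat Satisfied"],
})

yes_no = {"Yes": 1, "No": 2}  # yes/no responses pre-coded with 1 and 2
satisfaction = {  # 0-4 ordinal codes per Figure 3.1
    "Very Satisfied": 1, "Somewhat Satisfied": 2,
    "Somewhat Unsatisfied": 3, "Very Unsatisfied": 4, "Don't Know": 0,
}

coded = raw.assign(
    uses_saas=raw["uses_saas"].map(yes_no),
    satisfaction=raw["satisfaction"].map(satisfaction),
)
print(coded)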
Descriptive Analysis
The survey’s first six questions were designed to attain a demographic and
satisfaction profile of respondents using univariate analysis of the nominal measures. Of
the 114 responses to the question of organization type, a majority of 66.7% indicated
affiliation with a private corporation. Government entities accounted for 17.6% of
respondents’ organization type, while nonprofits accounted for 14%. Only two
respondents indicated academic institution as their organization type. In classifying their
job functions from the response options provided, the respondents showed more
dispersion. Nevertheless, the predominant selection was “other management” as job
function. Aggregating the job function responses into higher-level function categories of
top IT management and staff-level IT management showed that 17.8% of the
respondents were from top-level IT management, whereas 36.7% of respondents
functioned in other IT roles, such as network and system management. Non-IT
corporate management functions accounted for 5.4% of respondents.
In responding to the question of length of SaaS adoption, all respondents
indicated using SaaS for 6 years or less. A total of 48.7% had used SaaS for less than a
year, followed by 42.5% indicating a 1- to 3-year adoption timeframe. Continued use of
SaaS in their respective organizations was confirmed by 77.2% of the 114 respondents,
while 22.8% of respondents had relinquished use of the SaaS solution they were asked
to recall for the survey. Of the responses to the dependent variable construct question
of overall SaaS adoption satisfaction, 77.9% of respondents indicated a satisfaction
level of “somewhat satisfied” or “very satisfied” with their adoption experiences.
Conversely, 20.8% expressed some form of dissatisfaction with their adopted SaaS
solutions.
A descriptive analysis would not be complete without further inquiry into the
demographic profile of respondents as it relates to their level of satisfaction. A cross-
tabulation of the categorical variables of level of satisfaction to organization type shows
nonprofit organizations as representing the largest number of those who were “very
satisfied” with their SaaS adoption experiences, at 37.5%. On the opposite end of the
satisfaction scale, government agency respondents were the most dissatisfied group,
with 36.8% stating that they were not very satisfied in their SaaS adoption experiences.
The relatively high SaaS satisfaction among nonprofits may be due to such factors as
low cost incentives in SaaS offerings and comparably lower security requirements from
organizations in this category. Further
research is necessary to substantiate this assumption based on analysis of the data
collected. The typical stringent security standards for IT systems required by
government entities may also explain the high indications of dissatisfaction from
respondents representing this group.
Respondents from private corporations had the highest overall level of
satisfaction (combined ‘very satisfied’ and ‘somewhat satisfied’ responses) among the
four types of organizations and represented more than two-thirds of the total
respondents. Cloud providers have oriented their service offerings primarily to private
entities under the assumption that private-sector organizations are more willing to
overlook the risks of adoption if the technology offers palpable cost savings and the
benefits of rapid deployment, improved accessibility, scalability, and access to support
expertise (Bangert, 2008; Blokdijk, 2008). This focus on the part of providers may
explain high SaaS adoption and satisfaction rates among private corporations, but this
conclusion can be substantiated only with further research specific to private
corporations. Table 4.1 provides a cross-tabulation analysis summary for organization
type and satisfaction level.
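Row-percentage summaries such as Table 4.1 can be reproduced from the coded responses with a normalized cross-tabulation; the miniature data frame in this sketch is hypothetical:

import pandas as pd

df = pd.DataFrame({
    "org_type": ["Private", "Government", "Private", "Nonprofit", "Private"],
    "satisfaction": ["Very Satisfied", "Not Very Satisfied",
                     "Somewhat Satisfied", "Very Satisfied", "Somewhat Satisfied"],
})

# normalize="index" converts counts to row percentages, as in Table 4.1.
table = pd.crosstab(df["org_type"], df["satisfaction"], normalize="index") * 100
print(table.round(1))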
Table 4.1
Cross-Tabulation of Level of Satisfaction and Organization Type
Organization Type\Satisfaction Scale | Very Satisfied | Somewhat Satisfied | Not Very Satisfied | Not at All Satisfied | Count
Private Corporation 35.5% 46.1% 13.2% 5.2% 77
Government (Federal, State, or Local) 26.3% 31.6% 36.8% 5.3% 19
Nonprofit 37.5% 43.8% 18.7% 0.0% 16
Academic Institution 50.0% 50.0% 0.0% 0.0% 2
Level of satisfaction was highest among consultant respondents of the seven job
function response options, although this group accounted for only 3 of the 114
respondents. Coming in second in satisfaction level and accounting for 35.9% of
respondents were those in the “other management”—non-IT and not corporate
management—category. With 36.4% indicating very satisfied, respondents in the IT
director/manager job function were seemingly conciliatory in their sentiments toward
their respective experiences with SaaS. The data in Table 4.2 show IT executives
expressing the highest level of dissatisfaction in their experience with SaaS. Beyond the
risk factors, the only inference that can be drawn from the data showing some pattern of
dissatisfaction among top IT executives is that this group may have overlooked the risks
in SaaS that could emerge in a post-adoption scenario. Other explanatory factors could
include unforeseen resistance to change—a product of the organizational culture and
environment—that impeded full integration of the adopted system (Weick & Quinn,
1999). Underlying causal factors of dissatisfaction that are not risk-related could be
confirmed through further inquiry targeted at exploring these factors.
Table 4.2
Cross-Tabulation of Level of Satisfaction and Job Function
Job Function\Satisfaction Scale | Very Satisfied | Somewhat Satisfied | Not Very Satisfied | Not at All Satisfied | Count
IT Executive (CIO, CTO, CSO, VP) 11.1% 44.4% 33.3% 11.2% 11
IT Manager (Director/Manager) 36.4% 27.3% 27.3% 9.0% 12
Network/System Management 33.3% 47.6% 9.5% 9.6% 21
Corporate Management (CEO, COO, PRES, VP, GM) 33.3% 66.7% 0.0% 0.0% 6
Other Management 39.0% 43.9% 17.1% 0.0% 41
IT Staff 30.0% 45.0% 20.0% 5.0% 20
Consultant 66.7% 33.3% 0.0% 0.0% 3
Cross-tabulating nominal measures for length of adoption to level of satisfaction
revealed valuable insights about the association between SaaS adoption longevity and
satisfaction. As shown in Table 4.3, satisfaction was highest among respondents who
had used SaaS for between 4 and 6 years and lowest among those who had adopted
the technology for between 1 and 3 years; dissatisfaction was correspondingly highest
among this group. The satisfaction and dissatisfaction disparity between the
shorter- and longer-term adopters is potentially explicable from a non-risk perspective
by examining the evolution in the relationship between the adopter and the SaaS
provider over the lifespan of their arrangement. During the early adoption stage, the
adopter may be fairly optimistic about the new technology and its perceived value. As
the adoption timeframe progresses to the 2nd year and the organization’s functional
dependency on the SaaS solution increases, technical issues may evolve that challenge
the adoption experience and the overall relationship between the adopting organization
and the SaaS solution provider. The source of decreasing satisfaction with the SaaS
solution shown during the 2nd and 3rd years of adoption may lie in uncertainty about the
risk constructs outlined in this research, uncertainties that later manifest themselves as
security, business continuity, and integration issues. If these issues are not properly
addressed initially, dissatisfaction is amplified in the 2nd-year stage. Overall
dissatisfaction is shown to decrease thereafter, when the SaaS provider presumably
improves service quality and support and addresses security concerns in an effort to
retain customers.
Table 4.3
Cross-Tabulation of Level of Satisfaction and Length of Adoption
Length of Adoption\Satisfaction Scale | Very Satisfied | Somewhat Satisfied | Not Very Satisfied | Not at All Satisfied | Count
Less than 1 Year 38.2% 43.6% 12.7% 5.5% 55
1 to 3 Years 29.2% 41.7% 25.0% 4.2% 48
4 to 6 Years 40.0% 50.0% 10.0% 0.0% 11
More than 7 Years 0.0% 0.0% 0.0% 0.0% 0
The results seen in Table 4.4, comparing continued use of SaaS and level of
satisfaction, were as expected. More than 90% of respondents still using SaaS were
satisfied overall with their adoption experiences. In contrast, 72% of respondents no
longer using SaaS indicated dissatisfaction with the cloud-based SaaS solutions they
had adopted. Whether their satisfaction is the primary contributing factor to their
continued use of or disassociation with the SaaS solution remains a matter of speculation, but the
data reveal that satisfaction is an influential component in the decision to retain a SaaS
solution or terminate the service.
Table 4.4
Cross-Tabulation of Level of Satisfaction and Continued Usage
Continued Usage\Satisfaction Scale | Very Satisfied | Somewhat Satisfied | Not Very Satisfied | Not at All Satisfied | Count
Yes (Still Using) 42.0% 50.0% 6.8% 1.1% 88
No (Terminated Service) 8.0% 20.0% 56.0% 16.0% 26
Factor Analysis of Dimension Construct Items
Factor analysis was applied to further reduce the construct items of each risk
dimension. In chapter 3, confirmatory factor analysis was used to determine the validity
of the overall instrument in supporting the hypothetical statements. Here, further
exploratory factor and correlation analysis were used to ascertain whether each
construct item was adequately associated with the main construct factor it represented
and to determine the strength of interdependency between construct items representing
each of the risk dimensions of the S-CRA framework. These additional analyses were also necessary because the constructs used in this research were previously untested.
The security risk dimension measured the level of certainty of applicable security risks in a SaaS adoption initiative. The analysis in chapter 3 narrowed the construct items of security risk down to access, integrity/confidentiality, transmission, data location, ownership, and compliance. Component factor analysis with varimax rotation, together with correlation analysis, indicated the degree of intercorrelation among the six items of the security dimension construct. Items whose intercorrelations showed an insignificant relationship (p > .05) were discarded as unreliable construct items for the security risk dimension. The two remaining security risk construct items and their components were access (background check, restricted access, and breach sub-dimension items) and integrity/confidentiality (intrusion controls sub-dimension item). As shown in Table 4.5, the intercorrelations among the security risk construct item components ranged from .401 to .772, and all showed a significant relationship (p < .05). This is a strong indication that the remaining construct item components of the security risk dimension are not independent of one another.

Table 4.5
Correlation of Security Risk Dimension Construct Items (Spearman's rho)

Construct Item                                       (1)      (2)      (3)      (4)
(1) Background Check Certainty (Access)            1.000     .772     .440     .401
(2) Restricted Access Certainty (Access)            .772    1.000     .510     .419
(3) Breach Certainty (Access)                       .440     .510    1.000     .613
(4) Intrusion Controls Certainty
    (Integrity/Confidentiality)                     .401     .419     .613    1.000

Note. All correlations are significant (two-tailed p = .000); n = 114 for every pairing.
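The correlation matrix above can be computed directly from the item-level responses. The sketch below, with hypothetical file and column names, shows one way to do so with scipy; it illustrates the method rather than the study's original tooling.

```python
# A hedged sketch of the Spearman rank-correlation matrix in Table 4.5; the file
# and the four item column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

items = pd.read_csv("security_items.csv")[
    ["background_check", "restricted_access", "breach", "intrusion_controls"]
]

# With a 2-D input, spearmanr returns the full rho and p-value matrices at once.
rho, p = spearmanr(items)
print(pd.DataFrame(rho, index=items.columns, columns=items.columns).round(3))
print(pd.DataFrame(p, index=items.columns, columns=items.columns).round(3))
```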

To further support the interdependency of the security risk construct items, factor analysis indicated a Kaiser-Meyer-Olkin (KMO) score of .687 and a highly significant Bartlett's test of sphericity (approximate chi-square = 175.281, p = .000). The KMO score, a measure of the appropriateness of using factor analysis, is above the recommended .500 threshold and is an indication of adequate sample size (Kaiser, 1974). The Bartlett's test on the correlation matrix for the four construct item components suggests that the matrix is not an identity matrix. Table 4.6 shows that only a single component is extracted, with all four items loading into one factor with an eigenvalue greater than 1.0; this first component, security risk background check certainty (access construct item), explains 64.3% of total variance.

Table 4.6
Security Risk Dimension Reduction Factor Analysis

                                                 Initial      % of     Cumulative   Extraction SS   % of     Cumulative
Component (Construct Item)                      Eigenvalue   Variance       %          Loading     Variance      %
Background Check Certainty (Access) (1)            2.572      64.303     64.303         2.572       64.303     64.303
Restricted Access Certainty (Access) (2)            .818      20.441     84.743
Breach Certainty (Access) (3)                       .377       9.431     94.174
Intrusion Controls Certainty
(Integrity/Confidentiality) (4)                     .233       5.826    100.000
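The KMO and Bartlett statistics reported above can be reproduced with the open-source factor_analyzer package, as in the hedged sketch below; the package choice and input file name are assumptions, since the study reports SPSS-style output.

```python
# Sketch of the sampling-adequacy checks described above, using the third-party
# factor_analyzer package; file and column names are hypothetical.
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("security_items.csv")  # the four retained security items

chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)

# For the security items the study reports KMO = .687 and chi-square = 175.281.
print(f"Bartlett's test: chi2 = {chi_square:.3f}, p = {p_value:.3f}")
print(f"Overall KMO = {kmo_total:.3f}")
```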

Factor and correlation analysis of the business continuity risk construct items also revealed a need for construct item reduction. Intercorrelation analysis necessitated reducing the number of business continuity construct item components from 18 to the 5 shown in Table 4.7. The remaining business continuity construct items, which include scalability, testing, upgrade, support, and customization, all show correlation coefficients above .300, and all correlations are significant (p < .05), affirming the interdependency of the construct items. The KMO score for the correlation matrix of the five business continuity construct items stood at .802, and the Bartlett's test affirmed the matrix's high significance (approximate chi-square = 127.994, p = .000) and non-identity status. The factor analysis data provided in Table 4.8 show that the remaining business continuity components all loaded into the single scalability component, which had the only eigenvalue above 1 (2.642) and accounted for 52.831% of the variance in the factor.

Table 4.7
Correlation of Business Continuity Risk Dimension Construct Items (Spearman's rho)

Construct Item                                       (1)      (2)      (3)      (4)      (5)
(1) Scalability Certainty (Scalability)            1.000     .390     .543     .398     .379
(2) Test Before Adoption Certainty (Testing)        .390    1.000     .421     .374     .357
(3) Upgrade Notification Certainty (Upgrade)        .543     .421    1.000     .436     .510
(4) Phone Support Certainty (Support)               .398     .374     .436    1.000     .331
(5) Customization Fee Certainty (Customization)     .379     .357     .510     .331    1.000

Note. All correlations are significant at p < .05 (two-tailed); n = 114 for every pairing.

Table 4.8
Business Continuity Risk Dimension Reduction Factor Analysis

                                                 Initial      % of     Cumulative   Extraction SS   % of     Cumulative
Component (Construct Item)                      Eigenvalue   Variance       %          Loading     Variance      %
Scalability Certainty (Scalability) (1)            2.642      52.831     52.831         2.642       52.831     52.831
Test Before Adoption Certainty (Testing) (2)        .743      14.857     67.688
Upgrade Notification Certainty (Upgrade) (3)        .638      12.761     80.449
Phone Support Certainty (Support) (4)               .600      12.008     92.457
Customization Fee Certainty
(Customization) (5)                                 .377       7.543    100.000
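The component extraction in Tables 4.6 and 4.8 follows the Kaiser criterion of retaining components with eigenvalues above 1.0. A minimal sketch of that computation follows; the file and column names are hypothetical, and plain principal component extraction stands in for the study's exact SPSS procedure.

```python
# Kaiser-criterion sketch: eigenvalues of the item correlation matrix, keeping
# components with eigenvalues above 1.0. File and column names are hypothetical.
import numpy as np
import pandas as pd

items = pd.read_csv("business_continuity_items.csv")
corr = items.corr().to_numpy()  # item correlation matrix

eigenvalues = np.linalg.eigvalsh(corr)[::-1]          # sorted high to low
pct_variance = 100 * eigenvalues / eigenvalues.sum()  # % of variance per component
cumulative = pct_variance.cumsum()

for ev, pct, cum in zip(eigenvalues, pct_variance, cumulative):
    status = "retained" if ev > 1.0 else "dropped"
    print(f"eigenvalue {ev:6.3f}  {pct:6.3f}%  cumulative {cum:7.3f}%  ({status})")
```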

Integration risk construct items were intended to measure the level of certainty of risks relating to integrating SaaS into the organization. The proposed integration risk construct items of functionality, compatibility, and usability were proven valid in chapter 3. Output from the intercorrelation analysis, presented in Table 4.9, shows correlation coefficient values among the items ranging from .294 to .754, attesting to the reasonable strength of interdependency among the construct items. Additional factor analysis of adequacy and non-identity resulted in a KMO score of .703 and a significant Bartlett's test (approximate chi-square = 196.279, p = .000). As shown in Table 4.10, all factors loaded into the construct item labeled "functional requirements outlined before selection certainty," which contributed more than 56% of factor variance.

Table 4.9
Correlation of Integration Risk Dimension Construct Items (Spearman's rho)

Construct Item                                          (1)      (2)      (3)      (4)      (5)      (6)
(1) Functional Requirements Outlined Before
    Selection Certainty (Functionality)               1.000     .587     .294     .299     .340     .546
(2) All Functional Requirements Met Certainty
    (Functionality)                                    .587    1.000     .387     .452     .423     .428
(3) Exchange Data with Internal Software
    Certainty (Compatibility)                          .294     .387    1.000     .754     .402     .303
(4) Vendor Assistance in Data Transfer
    Certainty (Compatibility)                          .299     .452     .754    1.000     .486     .394
(5) Adequate Reporting Certainty (Usability)           .340     .423     .402     .486    1.000     .327
(6) Easy Navigation Certainty (Usability)              .546     .428     .303     .394     .327    1.000

Note. All correlations are significant at p < .05 (two-tailed); n = 114 for every pairing.

Table 4.10
Integration Risk Dimension Reduction Factor Analysis

                                                      Initial      % of     Cumulative   Extraction SS   % of     Cumulative
Component (Construct Item)                           Eigenvalue   Variance       %          Loading     Variance      %
Functional Requirements Outlined Before
Selection Certainty (Functionality) (1)                 2.833      56.668     56.668         2.833       56.668     56.668
All Functional Requirements Met Certainty
(Functionality) (2)                                      .994      19.884     76.552
Exchange Data with Internal Software
Certainty (Compatibility) (3)                            .566      11.316     87.868
Vendor Assistance in Data Transfer
Certainty (Compatibility) (4)                            .372       7.440     95.308
Easy Navigation Certainty (Usability) (6)                .235       4.692    100.000
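For completeness, the "component factor analysis with varimax rotation" named in the text could be approximated with the third-party factor_analyzer package, as in the hedged sketch below; the package choice and file name are assumptions. With a single extracted component, as in Table 4.10, rotation is a formality, but the call shape generalizes to multi-component solutions.

```python
# Hedged sketch of component factor analysis with varimax rotation; the data
# file is hypothetical and the package is an assumption, not the study's tool.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("integration_items.csv")  # the retained integration items

fa = FactorAnalyzer(n_factors=1, rotation="varimax", method="principal")
fa.fit(items)

print(fa.get_eigenvalues()[0])  # initial eigenvalues, as in Table 4.10
print(fa.loadings_)             # each item's loading on the extracted component
```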

Hypothesis Testing

To substantiate the SaaS risk assessment framework proposed in this research and to determine the relevancy of certain risks to SaaS adoption success, the three hypotheses stated in chapter 2 were tested. In this study, SaaS adoption success, measured as level of adoption satisfaction, served as the dependent variable in all three hypotheses. The risk dimension constructs of security, business continuity, and integration were the independent variables. The dependent variable was correlated with each corresponding independent variable to determine whether a significant correlation existed (p < .05), in which case the null hypothesis was rejected and the alternate hypothesis accepted, or whether no significant relationship existed (p > .05), in which case the null hypothesis was accepted in lieu of the alternate. Because the certainty and satisfaction scales used in this research were both ordinal, only non-parametric chi-square and correlation analyses were used to test each of the hypotheses.
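A minimal sketch of this decision rule, applied to one construct item, follows; the column names are hypothetical, and both scales are assumed to be coded as ordered integers.

```python
# Non-parametric tests as described above: a Pearson chi-square test on the
# cross-tabulated ordinal scales plus a Spearman rank correlation, rejecting
# the null hypothesis when p < .05. Column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency, spearmanr

df = pd.read_csv("responses.csv")

contingency = pd.crosstab(df["scalability_certainty"], df["satisfaction"])
chi2, p_chi, dof, _ = chi2_contingency(contingency)

rho, p_rho = spearmanr(df["scalability_certainty"], df["satisfaction"])

print(f"chi-square = {chi2:.3f}, p = {p_chi:.3f}, df = {dof}")
print(f"Spearman's rho = {rho:.3f}, p = {p_rho:.3f}")
print("reject H0" if p_rho < .05 else "fail to reject H0")
```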

Hypothesis 1: Security risk.

• Null Security Risk Hypothesis (Hos1): SaaS adoption success (AS) does not depend on the decision-maker's level of certainty (C) of the SaaS security (S) risk dimension as defined by the S-CRA framework. (Hos1: AS ≠ (S)c)

• Alternate Security Risk Hypothesis (Has1): SaaS adoption success (AS) depends on the decision-maker's level of certainty (C) of the SaaS security (S) risk dimension as defined by the S-CRA framework. (Has1: AS = (S)c)

The four security construct items retained from the earlier factor analysis were used to test for a significant relationship between level of satisfaction and the security risk dimension. As the test results in Table 4.11 show, the background check and restricted-access components of the access construct item have negligible correlations with the satisfaction construct, negating their relevance as construct items to the level-of-satisfaction construct; Pearson chi-square testing showed no significance for the restricted-access component (p > .05), and Spearman's rho correlation coefficient values were negligible for both construct items. Nevertheless, the fact that the remaining access and integrity/confidentiality risk construct items of breach and intrusion controls each has a significant correlation (p < .05) with level of satisfaction and a reasonable correlation coefficient (.287 and .297, respectively, at 9 degrees of freedom) is quantitative evidence to reject the null hypothesis, Hos1, and accept the alternate hypothesis, Has1: SaaS adoption success does depend on the decision-maker's level of certainty of prevalent security risks, particularly in terms of his or her awareness of the provider's security breach policy and intrusion control elements. This finding also validates the security dimension construct as relevant for inclusion in the S-CRA framework and the SaaS adoption decision process.

Table 4.11
Level of Satisfaction and Security Risk Certainty Hypothesis Test Analysis

                                                Pearson Chi-Square   Spearman's rho
Construct Item                                  (Sig., two-sided)     Correlation     df
Background Check Certainty (Access)                    .043              -.027          9
Restricted Access Certainty (Access)                   .217               .068          9
Breach Certainty (Access)                              .011               .287          9
Intrusion Control Certainty
(Integrity/Confidentiality)                            .000               .297          9

Hypothesis 2: Business continuity risk.

• Null Business Continuity Risk Hypothesis (Hobc1): SaaS adoption success (AS) does not depend on the decision-maker's level of certainty (C) of the SaaS business continuity (BC) risk dimension as defined by the S-CRA framework. (Hobc1: AS ≠ (BC)c)

• Alternate Business Continuity Risk Hypothesis (Habc1): SaaS adoption success (AS) depends on the decision-maker's level of certainty (C) of the SaaS business continuity (BC) risk dimension as defined by the S-CRA framework. (Habc1: AS = (BC)c)

An organization's operational continuity depends on the reliable functioning of its IT systems. The five business continuity construct items remaining after the earlier factor analysis were used to test the second null hypothesis proposed in this research: that adoption success does not depend on the decision-maker's level of certainty of business continuity components. The statistical test results, shown in Table 4.12, indicate that a highly significant correlation (p = .000) exists between SaaS adoption satisfaction and the decision-maker's certainty about the scalability of the SaaS solution, provisions allowing for testing before adoption, upgrade notification, reliable phone support, and hidden customization fees. Non-parametric testing also revealed that each of the independent business continuity construct items displayed a relatively high correlation to the level-of-satisfaction construct, ranging from .360 to .533. These two complementary tests indicate that SaaS business continuity certainty is a predictor of adoption success. The test results were sufficient to support rejecting the null hypothesis, Hobc1 (that business continuity risk certainty is not a factor in SaaS adoption success), and accepting the alternate hypothesis, Habc1 (that it is).

Table 4.12
Level of Satisfaction and Business Continuity Risk Certainty Hypothesis Test Analysis

                                                Pearson Chi-Square   Spearman's rho
Construct Item                                  (Sig., two-sided)     Correlation     df
Scalability Certainty (Scalability)                    .000               .533          9
Test Before Adoption Certainty (Testing)               .000               .375          9
Upgrade Notification Certainty (Upgrade)               .000               .433          9
Phone Support Certainty (Support)                      .000               .360          9
Customization Fee Certainty (Customization)            .000               .424         12

Hypothesis 3: Integration risk.

• Null Integration Risk Hypothesis (Hoi1): SaaS adoption success (AS) does not depend on the decision-maker's level of certainty (C) of the SaaS integration (I) risk dimension as defined by the S-CRA framework. (Hoi1: AS ≠ (I)c)

• Alternate Integration Risk Hypothesis (Hai1): SaaS adoption success (AS) depends on the decision-maker's level of certainty (C) of the SaaS integration (I) risk dimension as defined by the S-CRA framework. (Hai1: AS = (I)c)

To determine whether a significant relationship exists between integration risk certainty and level of satisfaction, which serves as a proxy for SaaS adoption success or failure, this research again relied on correlation and significance tests. Both functionality construct items were significantly related to level of satisfaction at p < .05, and both had Spearman's rho correlation coefficients above .500. The compatibility construct items also passed the significance and correlation tests, at 9 and 12 degrees of freedom. The test results for the remaining usability construct item, shown in Table 4.13, suggest acceptable correlation and significance between SaaS usability and level of satisfaction. The tests indicate that the construct items representing the higher-level integration risk construct are all significantly related to level of satisfaction with SaaS adoption. Given the hypothesis test results, the research concluded that integration risk certainty is a significant factor in SaaS adoption success and rejected the null hypothesis that integration risk certainty is irrelevant to SaaS adoption success.

Table 4.13
Level of Satisfaction and Integration Risk Certainty Hypothesis Test Analysis

                                                              Pearson Chi-Square   Spearman's rho
Construct Item                                                (Sig., two-sided)     Correlation     df
Functional Requirements Outlined Before
Selection Certainty (Functionality)                                  .000               .561          9
All Functional Requirements Met
Certainty (Functionality)                                            .000               .575          9
Exchange Data with Internal Software
Certainty (Compatibility)                                            .001               .448         12
Vendor Assistance in Data Transfer
Certainty (Compatibility)                                            .000               .451          9
Easy Navigation Certainty (Usability)                                .000               .470          9

Chapter 5

Conclusion

Introduction

The aim of this research was to develop a rational, risk-based decision framework for selecting SaaS cloud applications that managers can leverage to reduce
the risks associated with SaaS adoption and improve the chances that their adoption
will be successful. Other research has shown that a rational decision process applied to
IT resource selection can have significant benefits for the adopting organization. The
research of Wasserman et al. (2006) argues that the significance of their BRR
framework for selecting OSS is that it helps to reduce selection risk and provides a
trustworthy method for OSS evaluation. Hollander’s (2000) study shows that the R2ISC
COTS software selection methodology, relying on a disciplined approach of devising
requirements, soliciting bids through RFPs from providers, and screening options
quantitatively, can help reduce many of the problems that cause software adoption
projects to fail. Additional COTS selection frameworks, including Maiden and Ncube’s
(1998) template-based PORE model and Starinsky’s (2003) SPIM framework,
complement the R2ISC approach in relying on requirements, RFPs, and a rational,
quantitative evaluation of options before a decision is made on the best vendor. Liang-
Chuan and Chorng-Shyong (2008) support this disciplined approach and further
elaborate that financial risks can be readily ascertained and revenue benefits confirmed
through a formal COTS evaluation approach. Step-based software selection
approaches also endorse the rational software selection process as beneficial.
D'Amico's (2005) 10-step optimizing selection approach notes that applying a formal approach to software selection can reduce disruptive effects, such as low morale and productivity, subpar customer service, and loss of business opportunities, that can be introduced when integrating new software into the organization's functions. Ellis and Banerjea (2007) concur with this disruption-reduction argument, indicating that time invested in prudent software evaluation can reduce costs and the potential for user rejection.

Despite existing precedent establishing that a methodical and formal approach to software selection is the ideal path to a rational and optimal decision, SaaS and overall cloud selection is noted for its informality (Heiser, 2010b). Cloud-computing industry pundits and stakeholders have made significant strides in introducing formalized approaches focused on streamlining cloud resource selection. The ENISA (2009) cloud risk assessment framework makes a commendable effort to formalize cloud service selection, but its complexity makes it impractical. CSA's (2010) Consensus Assessments Initiative Questionnaire is a step in the right direction for formalizing the information-gathering facet of cloud risk assessment, but here as well, the scope of CSA's questions extends beyond SaaS assessment. Further, its reliance on the provider to ascertain risk mitigation and certainty is not an advisable due diligence strategy, given that providers may misrepresent certain risks to protect their business interests. The federal CIO Council's (2010) cloud security assessment framework is designed for government agencies and is too broad in scope and resource requirements for practical use in private industry. In a survey of 268 organizations worldwide, Heiser (2010b) found that most organizations use a questionnaire of some sort for evaluating SaaS, but he also noted that these questionnaires are usually compiled from a combination of standard frameworks, such as CSA's (2010) Consensus Assessments Initiative Questionnaire and ENISA's (2009) cloud risk assessment, and self-derived questions that the organizations determine are relevant. Despite these approaches and efforts, the dilemma in SaaS risk assessment is that no framework specific to SaaS evaluation exists in which certain risks are established as relevant to the assessment, as is the case with frameworks targeted at COTS or OSS assessment, where requirements, RFPs, and quantitative rating and scoring are established as the rational and credible approach.

Major Findings

This research introduced the S-CRA framework specifically for evaluating and selecting SaaS provisions from among competing options. To substantiate the model and determine the relevancy of security, business continuity, and integration risks to SaaS adoption success, the research relied on a quantitative approach to collecting and analyzing data. The survey instrument included questions compiled from a literature review and existing cloud assessment frameworks, such as CSA's (2010) Consensus Assessments Initiative Questionnaire, ENISA's (2009) risk factors framework, the CIO Council's (2010) cloud security assessment framework, and Heiser's (2010a) cloud risk dimensions. The online survey netted 114 participants from a variety of organization types, including government agencies, private corporations, nonprofits, and academic institutions. Factor analysis revealed that the instrument met all the criteria for validity and reliability and that the satisfaction and certainty scales used were adequate measures of their respective constructs. Descriptive analysis disclosed demographic details about the respondents. Correlation and significance analyses were used to test the three hypotheses, each of which posited a relationship between adoption success and one of the constructs of security, business continuity, and integration risk certainty.

Although the descriptive analysis shed light on the respondents from a variety of angles, an interesting fact emerged: the majority of respondents (66.7%) were from private corporations, whereas personnel from government entities accounted for only 17.1% of participants. Di Maio and Claps (2010) discussed this discrepancy in cloud adoption between public and private organizations as rooted in such factors as the struggle among government entities to describe the risk and value of cloud computing, to succinctly define cloud computing, and to resolve internal concerns about ownership and control of an outsourced infrastructure. Di Maio and Claps (2010) also indicated that government entities need a government-tailored framework for assessing cloud services that incorporates their heightened security concerns. The CIO Council's (2010) requirement-based cloud assessment framework can accommodate the cloud evaluation needs of government agencies that demand stricter security from cloud providers, but its extensive requirements may limit the number of cloud service providers able to meet government agencies' expectations. Private organizations may share many of the cloud security concerns of government agencies, but their comparatively limited IT compliance requirements and their more receptive attitude toward outside control of IT resources may help explain why cloud-based services, such as SaaS, have a stronger foothold in the private sector than among government agencies.

Despite the security risks and other adoption concerns, the high satisfaction level across organization types, job functions, and lengths of usage, as shown in the demographic analysis of the results from this research, is a testament to the value of SaaS as a reliable computing platform. The data reveal a satisfaction rate of more than 55% (combining "somewhat satisfied" and "very satisfied" responses) for each of the key demographic measures. Realizing that natural concerns about security, continuity, and integration exist among their prospective clients, cloud providers, including SaaS providers, may be making a concerted effort to proactively address these concerns. Nevertheless, the data show that more work is still needed to improve the satisfaction level for SaaS. This is particularly true among government organizations, which show an overall satisfaction rating of 57.9%; among IT executives, of whom only 55.5% indicated that they were somewhat or very satisfied; and among organizations that have surpassed the SaaS adoption honeymoon phase and reached the 3-year threshold, by which time the overall satisfaction rate has dropped by nearly 11 percentage points, from 81.8% to 70.9%.

The predictive effect of security risk certainty on SaaS adoption success was confirmed by this research. The findings indicate a significant relationship (p < .05) between the security risk certainty construct and the level-of-satisfaction construct used to represent adoption success. This confirms the first alternate hypothesis, Has1, and supports the place of security risk assessment in a rational SaaS risk assessment framework. Several research studies on cloud computing identified security risk as perhaps the most significant risk component of cloud computing in general (Heiser, 2010a; Paquette, Jaeger, & Wilson, 2010; Subashini & Kavitha, 2010; Zissis & Lekkas, in press). Both ENISA's (2009) cloud risk assessment framework and CSA's (2010) Consensus Assessments Initiative Questionnaire are specifically targeted at evaluating cloud security risks, with the assumption that the confidentiality of information and the liability resulting from provider infrastructure failure are the top concerns relating to adopting cloud-based services. Heiser's (2010a) cloud evaluation framework goes so far as to suggest that security risk assessment should be the single deciding factor for migrating sensitive data to a cloud-based service.

Nevertheless, the research data also revealed that several security risk construct items presumed significant to SaaS success were not. Surprisingly, several security concerns were determined to be irrelevant to successful SaaS adoption. These include reassurance that the SaaS provider can produce a valid audit report of its operations (SAS70 report), contradicting a recommendation by Heiser (2010b); certainty about whether data are stored in country; data ownership issues; concerns about encryption of data transmitted via the open Internet medium; and concerns that each client's data are kept separate in the multi-tenant environment that typifies the underlying SaaS infrastructure. These results suggest that these specific security concerns are not directly related to the primary cloud security issues of confidentiality and infrastructure reliability highlighted by the standard cloud risk assessment frameworks. The research nevertheless confirms confidentiality and infrastructure reliability as top security issues, showing that certainty about data access and controls and about the policies put in place by the SaaS provider to enhance data integrity and confidentiality are the key security risk predictors of successful SaaS adoption.

Evidence from the analysis of the research data established business continuity risk certainty as a relevant factor in SaaS adoption success. Tests of the second hypothesis confirm that business continuity certainty is a significant (p < .05) predictor of SaaS level of satisfaction and merits inclusion in a risk assessment framework, supporting the alternate hypothesis, Habc1. Business continuity risks, as applied to SaaS, encompass the disruptive aspects of adopting this new technology. As with security risk certainty, only a few of the original business continuity risk construct items showed relevance to SaaS adoption satisfaction.

The requirement for business continuity resiliency testing before adoption, recommended in CSA's (2010) Consensus Assessments Initiative Questionnaire, is confirmed by the research data as having a strong influence on subsequent satisfaction level. The issue raised in ENISA's (2009) cloud risk assessment framework, that the flexibility or scalability of the SaaS solution is a valid risk factor, was also affirmed by the research data, which showed scalability as one of the legitimate business continuity risk factors contributing to SaaS adoption success. The fact that scalability showed the highest correlation to level of satisfaction among the business continuity risk items indicates that organizations value the dynamic flexibility of SaaS in seamlessly accommodating potential growth in data processing and data storage.

Although phone support was shown to be a significant business continuity risk factor, upgrade notification and customization were the second- and third-highest business continuity factors contributing to level of satisfaction. This finding suggests that the satisfaction and operational efficiency of the adopting organization also depend on scrutinizing the SaaS provider's notification practices regarding disruptive software upgrades and carefully evaluating the provider's capacity to allow software customization. Although Fabbi (2010) suggests that the availability of support services is a main influence on customer satisfaction, the research results confirmed only phone support as a determining factor. Email and web-based support certainty, as business continuity risk items, were shown to be insignificant (p > .05) to level of satisfaction. On the subject of preferring phone support over virtual support, Maoz et al. (2010) note that phone support has six times more impact in resolving a customer's problem than email or web-based support. This observation helps explain the research result indicating that SaaS adopters considered phone support to be the only relevant factor among the support certainty items of the business continuity risk construct.

A two-tailed significance test and a correlation test confirm that integration risk certainty is not only significant to level of satisfaction but also positively correlated with it as it pertains to SaaS adoption. This rejection of the null hypothesis, Hoi1, in favor of the alternate hypothesis, Hai1, suggests that integration risk concerns, such as the ability of the adopted SaaS solution to continually meet functional requirements, its compatibility with internal systems, and its ease of use, are relevant to determining adopters' eventual satisfaction with the software. Unlike COTS software, which requires a one-time integration into the organization's computing environment, SaaS naturally involves an ongoing exchange of data between the provider's systems and the adopter's. The adopter of a SaaS email solution enjoys the convenience of access to email from any computer with Internet connectivity anywhere in the world, as well as the flexibility of paying for only the amount of storage and the number of email accounts used. However, integration becomes a major bottleneck if the SaaS email solution is incompatible with the mobile devices used by the organization's staff or if certain features in the solution limit the efficient exchange of data among systems within the organization. Heiser (2010a) regards integration risk as growing in significance in cloud risk evaluation, on par with security and business continuity risks, especially as the use of cloud-based services increases and the need to integrate on-premises and cloud-based business applications to exchange data seamlessly gains importance. The findings of this research confirm integration risk certainty as a relevant factor for adopters to consider when conducting a risk assessment of SaaS provisions prior to selection.

The three questions introduced in chapter 2 of this research were adequately answered through the literature review, the data collection and analysis, and the confirmation of the S-CRA framework as a credible tool for evaluating the risk profiles of competing SaaS providers. The first question tasked this research with determining the legitimate risk factors associated with SaaS adoption. The literature revealed that many risk factors were being proposed as accompanying SaaS and cloud adoption. Some of these risk factors were misconceptions emerging from unsubstantiated fears and mistrust of the outsourced, multi-tenant model of cloud computing and SaaS. However, the recurring concerns of security, business continuity, and integration were prevalent in frameworks proposed by credible IT sources and institutions. The research adopted several of these recurring risk concerns as worthy factors for SaaS risk assessment and set out to substantiate their legitimacy with a quantitative study.

The second research question, concerning the influence of risk awareness on SaaS success or failure, was answered through a robust statistical analysis of the survey data collected. The simple answer to this second question is that the decision-maker's awareness of certain risks does ultimately contribute to satisfaction with the overall SaaS adoption experience. Speculating on this causal relationship, one could infer that being aware of adoption risks beforehand helps the SaaS decision-maker select solutions that entail minimal risk, develop a strategy for mitigating observed risks, or both.

The final research question sought a solution that links mitigation with the SaaS decision process. The S-CRA framework was introduced as the answer to this puzzle. The framework instills formality, objectivity, and rationality into the SaaS decision process by requiring the adopter to pose targeted questions in each risk dimension to each SaaS provider, rate the certainty level for each response, apply a weight to each dimension as necessary, and, finally, derive a numerical score for each competing SaaS solution, as illustrated in the sketch below. The S-CRA framework not only complements the rational decision models that exist for selecting other types of software but also serves as a normative decision model that enables the SaaS decision-maker to maximize the value of the decision to the organization. The final S-CRA framework questions and a sample rating scheme are included in Appendix V.
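The following is an illustrative sketch, not the published Appendix V scheme, of the scoring mechanics just described; all questions, ratings, and weights shown are invented for illustration.

```python
# Illustrative S-CRA scoring mechanics: rate certainty per question, average
# within each risk dimension, weight the dimensions, and derive one score per
# candidate provider. All ratings and weights below are hypothetical.

weights = {"security": 0.40, "business_continuity": 0.35, "integration": 0.25}

# Certainty ratings for one candidate on a 1 (very uncertain) to 4 (very certain) scale.
provider_a = {
    "security": [4, 3, 4],             # e.g., breach policy, intrusion controls
    "business_continuity": [3, 4, 2],  # e.g., scalability, testing, upgrade notice
    "integration": [4, 4, 3],          # e.g., functionality, compatibility, usability
}

def scra_score(ratings, dimension_weights):
    """Weighted mean of the per-dimension average certainty ratings."""
    return sum(
        dimension_weights[dim] * (sum(vals) / len(vals))
        for dim, vals in ratings.items()
    )

print(f"Provider A S-CRA score: {scra_score(provider_a, weights):.2f}")
```

A higher score indicates greater overall certainty about the candidate's risk profile, allowing competing SaaS solutions to be ranked on a common numerical basis.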

Limitations of Research Design

The methodology used in this research and the S-CRA framework have a few inherent limitations. The first is the use of a questionnaire as the primary instrument for data collection. Rea and Parker (2005) note several disadvantages of web-based, email-distributed surveys: the practice may entail a self-selection bias by excluding from participation those who are uncomfortable with email and web browsing, and participants may not follow instructions correctly because of limited personal contact with the researchers. Furthermore, although the security, business continuity, and integration risk dimensions used as primary constructs were synthesized from the many risks identified in the literature, other risk concerns that may or may not prove relevant to SaaS selection were deliberately excluded to improve the manageability of the research. Thus, the questionnaire used may not anticipate all the wide-ranging and potentially relevant SaaS risk concerns. The implication is that another researcher could identify other SaaS risk concerns and derive an entirely different set of research questions, in addition to using a different methodology for collecting the data. It is possible to repeat this study using more risk dimensions and a larger representative sample, but the selection of additional SaaS dimensions would need to be scrutinized, and the research design would need to accommodate participation by both phone and email.
The S-CRA framework's limitations derive largely from the limitations of the research model that feeds the framework. The relevant risk dimensions determined from the research analysis and the corresponding certainty questions form the basis of the framework. Although several risk evaluation questions were found during the analysis to be irrelevant to their respective constructs, the onus remains on the adopting organization to determine the necessity of including these questions in its SaaS risk assessment exercises or of creating additional questions that address its unique requirements. Somewhat ironically, a major limitation of the S-CRA framework is that the scope of the risk assessment questions may not be comprehensive enough to accommodate all organizations and may require that organizations modify the framework's evaluation questions as they see fit. Nevertheless, the framework is flexible enough to support extension or reduction of the number and scope of the original evaluation questions.

Managerial Implications

With SaaS and its underlying cloud-computing technology still in their infancy, this research fills some of the prevailing gaps between theory and practice with regard to evaluating the risks of SaaS technology. The efficacy of managerial decision-making for IT investments is at the core of this research. From a theoretical perspective, the research findings suggest that if managerial decision-makers are aware of the risks beforehand, their adoption experience will likely be uneventful, and the organization can successfully leverage all the cost and efficiency benefits inherent in SaaS technology. The findings also suggest that the key to meeting the requirement for risk awareness is information: a risk assessment exercise is only as good as the information attained. As discussed in chapter 2, a rational and unbiased decision process entails establishing goals, gathering relevant information, processing that information, and making an informed decision. The SaaS selection process must embrace the rational decision process to be effective. As in any technology acquisition, uncertainty exists in SaaS selection, but this research provides a theoretical foundation that sheds light on these ambiguities and describes them from a risk-concern perspective comprehensible to nontechnical and technical managers alike.

The practical implications of the S-CRA framework introduced in this research are that it provides managerial IT decision-makers with a standardized tool for evaluating SaaS providers and that it contributes empirical insight into the risk factors that influence SaaS adoption success or failure. Heiser (2010b) notes that some existing SaaS risk assessment questionnaires based on established frameworks, such as CSA's (2010) Consensus Assessments Initiative Questionnaire and ENISA's (2009) cloud risk assessment framework, require the provider to complete the questionnaire. This practice can result in misleading information and a distorted risk profile, as well as questionnaire fatigue on the part of the provider, who must complete a multitude of the same questionnaires individually for each customer. The S-CRA framework, in contrast, requires that the managerial decision-maker actively interview the provider and review documents pertaining to the SaaS cloud service to fully understand the information provided and to clarify relevant risk concerns. The quantitative rating and scoring method used in the S-CRA framework greatly improves the objectivity of the SaaS risk assessment in comparison to other standardized frameworks that require narrative, subjective responses to risk-discerning questions. Also, unlike the prevailing standardized cloud risk assessment frameworks, S-CRA is tailored to SaaS risks. The S-CRA framework is by no means foolproof, but its practicality and focus allow it to fill an existing void in SaaS selection. Combining this framework with conventional return-on-investment analysis and conventional requirements assessment allows for determination of total cost of ownership and helps balance the SaaS consumer/provider scale that is currently skewed to the provider's advantage.

Risk assessment is a crucial component of risk management in an efficiently functioning organization. It helps the organization identify elements of risk impact and probability in all operations and investment undertakings. A credible risk assessment framework, such as S-CRA, is of value only if it serves as input into a comprehensive strategy of managing identified risks through mitigation. The service-level agreement (SLA) remains the primary vehicle for minimizing the disruption of business operations stemming from SaaS adoption failures. The SLA is a standard contract outlining the terms of the engagement between the SaaS adopter and the provider. SaaS providers typically issue template contracts to customers with language limiting the provider's liability and excluding provisions that may be of concern to the adopting organization (Mauer & Bona, 2007). As a complement to a SaaS risk assessment effort, the adopter will find it beneficial to scrutinize and negotiate SaaS contracts to address security, continuity, and integration risk concerns, such as service performance, disaster recovery, data security and privacy, pricing, and service termination. Although a risk-based SaaS assessment approach is the underlying theme of this research, the manager/decision-maker must also note that this approach is not designed to give the "correct" answer in a decision scenario; rather, as noted by Ersdal and Aven (2008) in their work on risk-based decision approaches, it is designed to clarify the consequences and uncertainties associated with the SaaS adoption decision and to serve as one of the technology acquisition tools available to IT decision-makers.

Implications for Future Research

Future research is needed to assess the qualitative implications of the S-CRA framework and to gauge its impact, in a SaaS adoption scenario, on rationalizing the decision process and reducing uncertainty. The question of whether the S-CRA framework will hold up in real-world conditions merits exploration. A qualitative research study to complement the findings in this research would require an introspective case study approach of observing and interviewing organizations that have adopted the S-CRA framework as their standard SaaS risk assessment tool. Such a research narrative could provide invaluable insight into the flexibility of the framework in accommodating different organization types and different SaaS selection requirements. It would also lead to a better understanding of the role played by substitute or complementary software evaluation approaches in the SaaS evaluation process. Research by Heiser (2010b) identified other SaaS assessment approaches, including making site visits, reviewing provider literature, requiring standard certification (such as SAS70), and conducting a financial cost-benefit analysis. These additional non-risk approaches to evaluating SaaS deserve recognition and validation as possible alternatives to S-CRA. Despite strong empirical evidence in this research supporting the relevance of the security, business continuity, and integration constructs to SaaS adoption success, other risk dimensions, such as the operations management risk concern proposed in ENISA's (2009) framework, could easily be substituted for any one of the risk dimensions identified in this research. A qualitative research study would not only confirm the viability of the S-CRA framework alongside other prominent frameworks but also showcase marginal SaaS selection approaches that address the non-risk issues associated with adopting cloud-based services in an attempt to maximize the benefits of SaaS.

Given that SaaS is a subset of a broader class of cloud-based services, another desirable future research effort would involve determining the relevant risks pertinent to other cloud-based services, namely IaaS and PaaS. As discussed in chapter 1, the IaaS utility computing model mirrors SaaS but is distinct in that IaaS customers rent needed computing equipment, including servers, hardware, and storage, to support their organizations' IT infrastructures. PaaS is leveraged by an organization to host web sites and other customized applications owned by the organization. Heiser (2010a) notes that SaaS, IaaS, and PaaS all share similar risk concerns of security and business continuity. Nevertheless, the relevance of these and other potential risk concerns unique to IaaS and PaaS cloud-based services can be substantiated only through a targeted research undertaking, either to determine the need for a new and distinct risk assessment framework or to support the applicability of the S-CRA framework proposed in this research, or of other standardized frameworks, for evaluating IaaS and PaaS.

References

Aven, T. (2003). Foundations of risk analysis: A knowledge and decision-oriented perspective. West Sussex: John Wiley & Sons.
Aven, T., & Korte, J. (2003). On the use of risk and decision analysis to support
decision-making. Reliability Engineering and System Safety, 79(3), 289-299.
Ayag, Z., & Ozdemir, G. (2007). An intelligent approach to ERP software selection
through fuzzy ANP. International Journal of Production Research, 45(10), 2169-
2194.
Bajaj, A., Bradley, W. E., & Cravens, K. S. (2008). SAAS: Integrating systems analysis
with accounting and strategy for ex ante evaluation of IS investments. Journal of
Information Systems, 22(1), 97-124.
Bangert, M. (2008). Software on demand. Quality, 47(8), 32-33.
Bielski, L. (2008). The case for e-Project management. ABA Banking
Journal, 100(5), 47-48.
Bittman, T. (2010, November). Private cloud computing: An essential overview. Gartner Research.
Blokdijk, G. (2008). SaaS 100 success secrets: How companies successfully buy,
manage, host and deliver software as a service (SaaS). United States: Emereo
Pty Ltd.
Busenitz, L. W., & Barney, J. B. (1997). Differences between entrepreneurs and
managers in large organizations: Biases and heuristics in strategic decision-
making. Journal of Business Venturing, 12(1), 9-30.
Campbell, J. D. (1990). Self-esteem and clarity of the self-concept. Journal of
Personality and Social Psychology, 59, 538-549.
Chary, M. (2007). Public organizations in the age of globalization and technology. Public
Organization Review, 7(2), 181.
Chin, W. W. (1998). The partial least squares approach to structural equation modeling. In G. A. Marcoulides (Ed.), Modern methods for business research (pp. 295-336). Mahwah, NJ: Lawrence Erlbaum Associates.
Chin, W. W., Gopal, A., & Salisbury, W. D. (1997). Advancing the theory of adaptive
structuration: The development of a scale to measure faithfulness of
appropriation. Information Systems Research, 8(4), 342–367.
Cloud Security Alliance (CSA). (2010). Consensus Assessments Initiative
Questionnaire. Retrieved from http://www.cloudsecurityalliance.org/cm.html

Corran, E. R., & Witt, H. H. (1982). Reliability analysis techniques for the design
engineer. Reliability Engineering, 3(1), 47-57.
Creating the cumulus. (2008, October). Economist. Retrieved from
http://www.economist.com/node/12411908
Cummings, L. (1998). The scientific reductionism of relevance theory: The lesson from logical positivism. Journal of Pragmatics, 29(1), 1-12.
D’Amico, V. (2005). 10 easy steps to the right business software. Consulting to
Management, 16(2), 47-53.
Damodaran, A. (2008). Strategic risk taking: A framework for risk management.
Pennsylvania: Wharton School Publishing.
D'Andrea, G. (2006). Tools for effective decision-making. The Case Manager, 17(1), 43-
59.
Davies, J. (2008, July). SaaS impact on enterprise feedback management. Gartner
Research.
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information
systems success: A ten-year update. Journal of Management Information
Systems, 19(4), 9-30.
Department of Labor, Bureau of Labor Statistics (2008). U.S. telecommunications
report, Q1 2008. Retrieved from http://www.bls.gov/
Desisto, R. P., Paquet, R., & Pring, B. (2007, June). Hybrid SaaS: Questions and
answers. Gartner Research.
Desisto, R. P., & Pring, B. (2010, May). Essential SaaS overview and 2010 guide to
SaaS research. Gartner Research.
Di Maio, A. D., & Claps, M. (2010, May). Government in the cloud. Gartner Research.
Donston, D. (2008). Shaklee cleans up with SaaS. eWeek. Retrieved from
http://www.eweek.com/c/a/Virtualization/Shaklee-Cleans-Up-with-SAAS/1/
Ellis, D. B., & Banerjea, D. K. (2007). Successful software selection. Quality, 46(8), 44-
46, 48.
Ersdal, G., & Aven, T. (2008) Risk informed decision-making and its ethical basis.
Reliability Engineering and System Safety, 93(2), 197-205.
European Network and Information Security Agency (ENISA). (2009). Cloud computing
risk assessment. Retrieved from
http://www.enisa.europa.eu/act/rm/files/deliverables/cloud-computing-risk-
assessment/?searchterm=cloud%20computing
eWeek (2008, April). Software as a service survey. Retrieved from www.eweek.com

Fabbi, M. (2010, November). Case study: Reducing cloud service support cost, speed is
important. Gartner Research.
Federal Risk and Authorization Management Program (FedRAMP) (2010). Proposed security assessment and authorization for U.S. government cloud computing. Retrieved from https://info.apps.gov/sites/default/files/Proposed-Security-Assessment-and-Authorization-for-Cloud-Computing.pdf
Fornell, C., & Larcker, D. (1981). Evaluating structural equations models with
observable variables and measurement error. Journal of Marketing Research,
18, 39-50.
Gilboa, I. (2009). Theory of decision under uncertainty. New York: Cambridge University Press.
Gopinath, M., & Nyer, P.U. (2009). The effect of public commitment on resistance to
persuasion: The influence of attitude certainty, issue importance, susceptibility to
normative influence, preference for consistency and source proximity.
International Journal of Research in Marketing, 26(1), 60-68.
Gutnik, L.A., Hakimzada, A. F., Yoskowitz, N. A., & Patel, V. L. (2006). The role of
emotion in decision-making: A cognitive neuroeconomic approach towards
understanding sexual risk behavior. Journal of Biomedical Informatics, 39(6),
720-736.
Hall, D. J., & Davis, R. A. (2007). Engaging multiple perspectives: A value-based
decision-making model. Decision Support Systems, 43(4), 1588-1604.
Havenstein, H. (2008). Google adds a weapon in its battle to kill Windows.
Computerworld. Retrieved from
http://www.computerworld.com/action/article.do?command=viewArticleBasic&arti
cleId=9114004
Hayes, B. (2008, July). Cloud computing. Communications of the ACM, 51(7), 9-11.
Heiser, J. (2010a, March). Analyzing the risk dimensions of cloud and SaaS computing.
Gartner Research.
Heiser, J. (2010b, April). Survey results: Assessment practices for cloud, SaaS and
partner risks. Gartner Research.
Hofstede, G. (1991). Cultures and organizations: Software of the mind. London:
McGraw-Hill.
Holford, W. (2009). Risk, knowledge and contradiction: An alternative and
transdisciplinary view as to how risk is induced. Futures, 41(7), 455.
Hollander, N. (2000). A guide to software package evaluation and selection: The R2ISC
method. New York: AMACOM.

Hurst, P. M., & Siegel, S. (1956). Prediction of decisions from a higher ordered metric
scale of utility. Journal of Experimental Psychology, 52(2), 138-144.
Info-Tech Research Group. (2006, September). SaaS: What it is and why you should
care. Retrieved from http://www.infotech.com/
Olson, M. H., & Ives, B. (1981). User involvement in system design: An empirical test of alternative approaches. Information & Management, 4(4), 183.
Janis, I. L. (1989). Crucial decisions: Leadership in policymaking and crisis management. New York: Free Press.
Jeffrey, R. C. (1983). The logic of decision (2nd ed.). Chicago: University of Chicago
Press.
Jeffrey, R. (2004). Subjective probability: The real thing. Cambridge: Cambridge
University Press.
Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31-36.

Khattab, A., Aldehayyat, J., & Stein, W. (2010). Informing country risk assessment in
international business. International Journal of Business and Management, 5(7),
54-62.
Kim, D., Ferrin, D. L., & Rao, H. R. (2007). A trust-based consumer decision-making
model in electronic commerce: The role of trust, perceived risk, and their
antecedents. Decision Support Systems, 44, 544-564.
Kitchenham, B. A., Pickard, L., Linkman, S., & Jones, P. (2005). A framework for
evaluating a software bidding model. Information and Software Technology,
47(11), 747-760.
Koller, G. (2005). Risk assessment and decision-making in business and industry: A
practical guide. Cambridge: Cambridge University Press; Boca Raton, FL:
Chapman & Hall/CRC.

Krill, P. (2008). Cloud computing, Web 2.0 trends emphasized. Infoworld. Retrieved
from http://www.infoworld.com/d/developer-world/cloud-computing-web-20-
trends-emphasized-075
Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago and London:
University of Chicago Press.

Lamarre, E., & Pergler, M. (2010). Risk: Seeing around the corners. McKinsey
Quarterly, (1), 102-106. Retrieved from Business Source Complete database.

Lee, J. W., & Kim, S. H. (2000). Using analytic network process and goal programming
for interdependent information system project selection. Computers and
Operations Research, 27(4), 367-382.
Liang-Chuan, W., & Chorng-Shyong, O. (2008). Management of information technology
investment: A framework based on a real options and mean–variance theory
perspective. Technovation, 28, 122-134.
Longwood, J. (2009, June). Evaluating, selecting, and managing cloud service
providers. Gartner Research.
Maiden, N. A. & Ncube, C. (1998, March/April). Acquiring COTS software selection
requirements. IEEE Software, 15(2), 46-56.
Maoz, M., Collins, K., Alvarez, G., Thompson, E., Fletcher C., Jacobs, J., Woods, J., &
Davies, J. (2010, August). Q&A: Customer service for 2011 and the Gartner
CRM research team. Gartner Research.
Marston, S., Li, Z., Bandyopadhyay, S., Zhang, J., & Ghalsasi, A. (2011). Cloud
computing: The business perspective. Decision Support Systems, 51(1), 176-
189.
Mauer, W., & Bona, A. (2007, August). Best practices for negotiating software as a
service contract. Gartner Research.
McKinney, V., Yoon, K., & Zahedi, F. (2002). The measurement of Web-customer
satisfaction: An expectation and disconfirmation approach. Information Systems
Research, 13(3), 296-315.
McNee, W. S. (2007). SaaS 2.0. Journal of Digital Asset Management, 3(4), 209-214.
Meade, L., & Sarkis, J. (1998). Strategic analysis of logistics and supply chain
management systems using the analytical network process. Transportation
Research Part E: Logistics and Transportation Review, 34(3), 201-215.
Mertz, S. A., Eid, T., Eschinger, C., Huang H. H., Pang, C., & Pring, B. (2008,
September). Market trends: Software as a service, worldwide, 2007-2012.
Gartner Research.
Miller, M. (2008). Cloud computing: Web-based applications that change the way you
work and collaborate online. Indianapolis, IN: Que.
Moltzen, E. F. (2008). Customer relationship management: Intuit goes SaaS with
QuickBooks. CRN: CRNTech, 21, 10.
National Institute of Standards and Technology (NIST) (2010a, February). Guide for
applying the risk management framework to federal information systems
(Special Publication 800-37). Retrieved from
http://csrc.nist.gov/publications/nistpubs/800-37-rev1/sp800-37-rev1-final.pdf
National Institute of Standards and Technology (NIST) (2010b, August). Recommended
security controls for federal information systems and organizations (Special
Publication 800-53). Retrieved from http://csrc.nist.gov/publications/nistpubs/800-
53-Rev3/sp800-53-rev3-final_updated-errata_05-01-2010.pdf
Nguyen, T., Marmier, F., & Gourc, D. (in press). A decision-making tool to maximize
chances of meeting project commitments. International Journal of Production
Economics.
Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of
satisfaction decisions. Journal of Marketing Research, 17(4), 460-469.
Oliver, R. L., & Swan, J. E. (1989). Consumer perceptions of interpersonal equity and
satisfaction. Journal of Marketing, 53(2), 21.
Orr, B. (2006). SaaS just may be the end of software as we know it. ABA Banking
Journal, 98(8), 51-52.
Paquette, S., Jaeger, P. T., & Wilson, S. C. (2010). Identifying the security risks
associated with governmental use of cloud computing. Government Information
Quarterly, 27(3), 245-253.
Peterson, M. (2009). An introduction to decision theory. Cambridge: Cambridge
University Press.
Razmi, J., Sangari, M. S., & Ghodsi, R. (2009). Developing a practical framework for
ERP readiness assessment using fuzzy analytic network process. Advances in
Engineering Software, 40(11), 1168-1178.
Rea, M. S., & Parker, R. A. (2005). Designing and conducting survey research: A
comprehensive guide (3rd ed.). San Francisco: Wiley Imprint.
Rechtman, Y. (2009). Evaluating software risk as part of a financial audit. The CPA
Journal, 79(6), 68-71.
Reneke, J. A. (2009). A game theory formulation of decision making under conditions of
uncertainty and risk. Nonlinear Analysis: Theory, Methods, and Applications,
71(12), e1239-e1246.

Saaty, T.L. (1980). The analytic hierarchy process. New York: McGraw-Hill.

Sang-Yong, T. L., Hee-Woong, K., & Gupta, S. (2009). Measuring open source software
success. Omega, 37(2), 426-438.

Saunders, M., Lewis, P., & Thornhill, A. (2007). Research methods for business
students (4th ed.). London: Prentice Hall Financial Times.

Shrivastava, B. P. (1987). Anatomy of a crisis. Cambridge, MA: Ballinger Publishing Co.


Simon, H. A. (1972). Theories of bounded rationality. In C. B. McGuire & R. Radner
(Eds.), Decision and organization (Chapter 8). Amsterdam: North-Holland
Publishing Company.
SnapLogic. (2008, August). Integrating SaaS applications: A SnapLogic white paper.
Retrieved from http://www.snaplogic.com
Spreng, R. A., & Mackoy, R. D. (1996). An empirical examination of a model of
perceived service quality and satisfaction. Journal of Retailing, 72(2), 201-214.
Staples, D. S., Wong, I., & Seddon, P. B. (2002). Having expectations of information
systems benefits that match received benefits: Does it really matter? Information
& Management, 40(1), 115-131.

Starinsky, R. W. (2003). Maximizing business performance through software packages:
Best practices for justification, selection and implementation. New York: CRC
Press.

Subashini, S., & Kavitha, V. (2010). A survey on security issues in service delivery
models of cloud computing. Journal of Network and Computer Applications,
34(1), 1-11.
Svantesson, D., & Clarke, R. (2010). Privacy and consumer risks in cloud computing.
Computer Law and Security Review, 26(4), 391-397.
Tasa, K., & Whyte, G. (2005). Collective efficacy and vigilant problem solving in group
decision making: A non-linear model. Organizational Behavior and Human
Decision Processes, 96(2), 119-129.
Torkzadeh, G., & Doll, W.J. (1999). The development of a tool for measuring the
perceived impact of information technology on work. Omega International Journal
of Management, 27(1), 327-339.
Trochim, W. M. K. (2006). Research methods knowledge base. Retrieved from
http://www.socialresearchmethods.net/kb/index.php
U.S. Census Bureau (2007). 2007 economic census. Retrieved from
http://www.census.gov/econ/census07/
Van Ginkel, W. P., & Van Knippenberg, D. (2008). Group information elaboration and
group decision-making: The role of shared task representations. Organizational
Behavior and Human Decision Processes, 105(1), 82-97.
Walter, S. D., Mitchell, A., & Southwell, D. (1995). Use of certainty of opinion data to
enhance clinical decision-making. Journal of Clinical Epidemiology, 48(7), 897-
902.

Wasserman, A., Murugan, P., & Chan, C. (2006). Business readiness rating for open
source: A proposed open standard to facilitate assessment and adoption of open
source software. Retrieved from http://www.openbrr.org
Weick, K., & Quinn, R. (1999). Organizational change and development. Annual Review
of Psychology, 50, 361-386.
Weil, N. (2008a, November). How fast is the road to SaaS: Vendor would make it easier
to migrate apps to hosted model. CIO, 22(4), 8.
Weil, N. (2008b, October). SaaS and the IT staff: As software-as-a-service offerings
expand, IT jobs will change. Here's what the shift may mean to IT
departments. CIO, 22(4), 12.
Weiss, A. (2007). Computing in the clouds. NetWorker, 11(4), 16-25.
Where the cloud meets the ground. (2008, October). Economist. Retrieved from
http://www.economist.com/research/articlesbysubject/displaystory.cfm?subjectid
=348981&story_id=E1_TNQTTJND
Wilson, J. K., & Rapee, R. M. (2006). Self-concept certainty in social phobia. Behaviour
Research and Therapy, 44(1), 113-136.
Wixom, B. H., & Watson, H. J. (2001). An empirical investigation of the factors affecting
data warehousing success. MIS Quarterly, 25(1), 17-41.
Worthen, B. (2008, September 23). Overuse of the term “cloud computing” clouds
meaning of the tech buzz phrase. Wall Street Journal. Retrieved from
http://online.wsj.com/article/SB122214259441966713.html
Yamin, M., & Sinkovics, R. (2010). ICT deployment and resource-based power in
multinational enterprise futures. Futures, 42(9), 952.
Yeo, R. K., & Ajam, M. Y. (2010). Technological development and challenges in
strategizing organizational change. International Journal of Organizational
Analysis, 18(3), 295-320.
Zellman, E., Kaye-Blake, W., & Abell, W. (2010). Identifying consumer decision-making
strategies using alternative methods. Qualitative Market Research, 13(3), 271-
286.
Zeng, J., An, M., & Smith, N. J. (2007). Application of fuzzy logic decision making
methodology to construction project risk management. International Journal of
Project Management, 25(1), 589-600.
Zissis, D., & Lekkas, D. (in press). Addressing cloud computing security issues. Future
Generation Computer Systems.

Appendix I: Institutional Review Board (IRB) Survey Approval Form


[Signed IRB survey approval form (three scanned pages).]

Appendix II: Survey Instrument (with Consent Form)

Study Title: Survey of Online Software Selection and Risk Assessment (Software-As-a-Service)

Principal Investigator: Lionel Bernard

Faculty Advisor: Dr. Monica Bolesta

Thank you for taking the time to complete this survey on online software selection and risk assessment.
The aim of this survey is to find out whether awareness of the risks associated with adopting
subscription-based online software in your organization can lead to a better experience in using the
software and a better relationship with the software vendor.

Completing this survey will take no more than 15 minutes of your time. The survey is completed
anonymously, and all data collected in this study will be kept confidential. Your responses will not be
passed on to any third parties and will only be used for academic research.

There are no anticipated risks to participating in this study. There may be no direct benefit to you other
than the sense of contributing to knowledge in this particular area.

If you would like further information about the study please contact me (Lionel Bernard) at
lbernard@arc.gov or call me at 202.731.8402.

This study has been reviewed and approved by the Human Subjects Review Board, University of
Maryland University College. If you have concerns about ethical aspects of the study, please contact
Dr. John Aje, Graduate School Representative, University of Maryland University College.

I have read and understand the above information. I agree to participate in this study.

o Yes, I agree.

QGx = General Demographic Questions
QRx = Security Risk Questions
QBx = Business Continuity Risk Questions
QIx = Integration Risk Questions
QOE = Open-Ended Question

QG1: Which of the following best describes your organization?



o Private Corporation (1)

o Government (Federal, State or Local) (2)

o Non-Profit (3)

QG2: What is your job function within your organization?

o IT Executive (CIO, CTO, CSO, VP) (1)

o IT Manager (Director/Manager) (2)

o Network/System Management (3)

o Corporate Management (CEO, COO, PRES, VP, GM) (4)

o Other Corporate Management (5)

o IT Staff (6)

o Consultant (7)

QG3: Does your organization subscribe to any online-based (pay-as-you-go) software that you
primarily use via a web browser and that is owned by an outside vendor?

(If yes, go to QG4. If no, exit survey and thank the respondent for completing.)

o Yes (1)

o No (2)

QG4: If yes (to question #3), can you recall how long it has been since your organization began
subscribing to online-based (pay-as-you-go) software?

o Less than 1 year (1)

o 1 to 3 years (2)

o 4 to 6 years (3)

o 7 years or more (4)

QG5: Were you involved in the evaluation and selection process for any of the online software
that your organization uses? (If yes, go to QG6; if no, end survey.)

o Yes (1)

o No (2)

Please respond to the following question by thinking about one (1) of the online software
applications used in your organization that you were involved in selecting, that you are
familiar with, and about which you can give your honest opinion.

QG6: In your opinion, how satisfied are you or your organization overall with this online-based
software that your organization is using or has used in the past?

o Very Satisfied (1)

o Somewhat Satisfied (2)

o Not Very Satisfied (3)

o Not At All Satisfied (4)

o Don’t Know (0)

QG7: Is your organization still using this software?

o Yes (1)

o No (2)

Please respond to the following questions by thinking about the online software, and the vendor
who owns it, that your organization subscribes to. Think about how well you know the online
software and the vendor, and respond based on the information you have or can remember.
Please answer each question honestly and as best as you can, indicating how certain or
uncertain you were about each specific item related to the online software or the vendor.

QR8: How certain are you that the vendor conducts security checks on their staff?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QR9: How certain are you that the vendor has controls in place to restrict access to your data by
their staff?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QR10: How certain are you that the vendor has policies in place about informing you regarding a
security breach that allowed someone to get access to your data?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QR11: How certain are you that the vendor has internal controls in place to prevent intrusion,
phishing, and malware attacks against your data?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QR12: How certain are you that the online software requires a username and password (or some
other form of authentication) to gain access to the online software?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QR13: How certain are you that the vendor will allow you to investigate your usage logs and
access records?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QR14: How certain are you that the vendor encrypts the communication whenever someone in
your organization uses a web browser to access the online software?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QR15: How certain are you that the online software stores your data in the same country where
your organization is a legal entity?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QR16: How certain are you about who owns your data that is stored in the online software?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QR17: How certain are you that the vendor enables you to retain your Sarbanes-Oxley (SOX)
and/or HIPAA compliance (if applicable)?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QR18: How certain are you that the vendor furnished or can furnish a recent Statement on
Auditing Standards No. 70 (SAS 70) report?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB19: How certain are you that your organization tested the online software before adopting it?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB20: How certain are you that the vendor has a disaster contingency plan?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB21: How certain are you that the vendor is able to recover your lost or archived data at your
request?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB22: How certain are you about the uptime and downtime performance requirements for this
online software?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB23: How certain are you that the online software can scale to accommodate an increase in
usage or storage volume by your organization?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB24: How certain are you that the vendor has a policy for notifying you before an upgrade to the
online software?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB25: How certain are you that the vendor provides support by phone?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB26: How certain are you that the vendor provides support by email?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB27: How certain are you that the vendor provides support by a web-based trouble ticket
system?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB28: How certain are you about the subscription rates/fees charged by the online software
vendor?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB29: How certain are you about the contract payment terms with the vendor (e.g., monthly,
quarterly, or annually)?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB30: How certain are you about whether or not the vendor can increase subscription fees at any
time or only when the contract is renewed?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB31: How certain are you that the vendor imposes a penalty for early termination of your
subscription?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB32: How certain are you that the vendor will return your data if your organization cancels its
subscription to the online software?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB33: How certain are you about whether or not the vendor allows for customizing the online
software?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB34: How certain are you about whether or not the vendor charges customization fees?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB35: How certain are you about whether or not the vendor provides training on using the online
software?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB36: How certain are you about whether or not the vendor charges training fees?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB37: How certain are you that the vendor has provided documentation for the online software in
electronic and/or print format?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QB38: How certain are you about whether or not someone in your organization is the primary
person responsible for communicating with the vendor?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QI39: How certain are you that a list of functional requirements was outlined by your organization
before selecting the online software?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QI40: How certain are you that the online software meets all or most of your functional
requirements?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QI41: How certain are you that the online software can work/communicate with other software
used by your organization, if required (e.g., exchange data)?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QI42: How certain are you that the online software vendor will assist in transferring your data from
your in-house system to the online software, if necessary?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QI43: How certain are you that the online software has adequate reporting functionality to meet
your organization’s needs?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QI44: How certain are you that the online software is easy to navigate and use based on your
experience with the online software?

o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not At All Certain (4)   o This Does Not Apply (0)

QOE45: Please provide any feedback and comments about your experience in completing this
survey, and/or your experience in using online (pay-as-you-go) software in your organization,
that you think would be helpful to this research.

Appendix III: Survey Distribution Approval Emails

From: Eliot Ware [mailto:eware@allcovered.com]


Sent: Tuesday, November 03, 2009 11:33 AM
To: Bernard, Lionel
Cc: Marc Howard
Subject: RE: Survey Request.

Mr. Bernard –

You are quite welcome. Please consider this formal approval to go ahead with this survey to our clients.

Regards,

Eliot

From: Bernard, Lionel [mailto:lbernard@arc.gov]


Sent: Tuesday, November 03, 2009 11:31 AM
To: Eliot Ware
Cc: Marc Howard
Subject: Survey Request.

Mr. Ware,

Thanks for agreeing to forward my SAAS survey to AllCovered clients. Please respond to this email by stating your approval to
go ahead with this survey of your clients. I simply need this formal approval as documentation for the University of Maryland’s
research board.

Thanks again,

Lionel Bernard

From: Bernard, Lionel


Sent: Wednesday, November 04, 2009 2:40 PM
To: cwatkins@ds3datavaulting.com
Subject: SAAS Survey

Hi Craig,

Thanks for agreeing to forward my doctoral survey to DS3 clients via email. Below is a link to the survey. My research is on SaaS
risk assessment. Once you start the survey it will give you a full explanation of the research. Please review the survey and give
any feedback you can. I’m looking to have it sent out during the week of November 16. When you are ready to send it out I’ll
send you an introductory email to use when sending.

https://www.surveymonkey.com/s.aspx?sm=372M8xMSPZVDDMbeIC29Vw_3d_3d

Thanks,

Lionel Bernard

Appendix IV: Construct Items Removed Based on Factor Analysis

Each entry lists the certainty measurement item, its risk sub-dimension (in parentheses), and the
corresponding certainty question.

Security Risk Dimension (SEC)

Username/Password Certainty (Access): How certain are you that the online software requires a
username and password (or some other form of authentication) to gain access to the online software?

Investigate Log/Access Certainty (Segregation): How certain are you that the vendor will allow you to
investigate your usage logs and access records?

Web Browser Encryption Certainty (Transmission): How certain are you that the vendor encrypts the
communication whenever someone in your organization uses a web browser to access the online
software?

Data in Same Country Certainty (Location): How certain are you that the online software stores your
data in the same country where your organization is a legal entity?

Data Ownership Certainty (Ownership): How certain are you about who owns your data that is stored
in the online software?

SOX/HIPAA Compliance Certainty (Compliance): How certain are you that the vendor enables you to
retain your Sarbanes-Oxley (SOX) and/or HIPAA compliance (if applicable)?

Vendor SAS70 Report Certainty (Compliance): How certain are you that the vendor furnished or can
furnish a recent Statement on Auditing Standards No. 70 (SAS 70) report?

Business Continuity Dimension (BC)

Vendor Disaster Plan Certainty (Recovery): How certain are you that the vendor has a disaster
contingency plan?

Recover Lost Archive Data Certainty (Recovery): How certain are you that the vendor is able to recover
your lost or archived data at your request?

Uptime/Downtime Performance Certainty (Availability): How certain are you about the uptime and
downtime performance requirements for this online software?

Email Support Certainty (Support): How certain are you that the vendor provides support by email?

Web Ticket Support Certainty (Support): How certain are you that the vendor provides support by a
web-based trouble ticket system?

Subscription Fees Certainty (Pricing): How certain are you about the subscription rates/fees charged
by the online software vendor?

Payment Terms Certainty (Pricing): How certain are you about the contract payment terms with the
online software vendor (e.g., monthly, quarterly, or annually)?

Subscription Fee Increase Certainty (Pricing): How certain are you about whether or not the vendor
can increase subscription fees at any time or only when the contract is renewed?

Early Termination Penalty Certainty (Termination): How certain are you that the vendor imposes a
penalty for early termination of your subscription?

Data Return on Cancellation Certainty (Termination): How certain are you that the vendor will return
your data if your organization cancels its subscription to the online software?

Customization Allowed Certainty (Customization): How certain are you about whether or not the
vendor allows for customizing the online software?

Software Training Certainty (Training): How certain are you about whether or not the vendor provides
training on using the online software?

Software Training Fee Certainty (Training): How certain are you about whether or not the vendor
charges a training fee?

Print/Electronic Documentation Certainty (Documentation): How certain are you that the vendor has
provided documentation for the online software in electronic and/or print format?

Primary Contact Certainty (Provider Management): How certain are you about whether or not someone
in your organization is the primary person responsible for communicating with the vendor?

Integration Risk Dimension (I)

Adequate Report Certainty (Reporting): How certain are you that the online software has adequate
reporting functionality to meet your organization's needs?
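
For readers who want to reproduce this style of item screening, a minimal sketch follows. The
three-factor model, the 0.40 loading cutoff, the simulated responses, and all names below are
illustrative assumptions, not the procedure documented in this study:

# Hypothetical illustration of flagging survey items with weak factor
# loadings; the cutoff and factor count are common conventions, assumed here.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(120, 10)).astype(float)  # 120 respondents, 10 items rated 1-4
item_names = [f"Q{i}" for i in range(1, 11)]

fa = FactorAnalysis(n_components=3).fit(responses)
loadings = fa.components_.T                # shape: (items, factors)
weak = [name for name, row in zip(item_names, loadings)
        if np.abs(row).max() < 0.40]       # items that load strongly on no factor
print("Candidates for removal:", weak)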

Appendix V: S-CRA Framework Questions and Rating/Weight Sample

Each entry lists the sub-dimension item, the certainty question, the sample certainty rating
(1 = Very Certain, 4 = Very Uncertain, 0 = Not Applicable), and the subjective, optional weight
(1-10), with the weighted value in parentheses.

Security Risk Dimension Questions and Sample Certainty Rating Weights

Access: How certain are you that the vendor conducts security background checks on their staff?
Rating: 2; Weight: 10 (20)

Access: How certain are you that the vendor has controls in place to restrict access to your data
by their staff? Rating: 1; Weight: 10 (10)

Access: How certain are you that the vendor has policies in place about informing you regarding a
security breach that allows someone to get access to your data? Rating: 3; Weight: 10 (30)

Integrity/Confidentiality: How certain are you that the vendor has internal controls in place to
prevent intrusion, phishing, and/or malware attacks against your data? Rating: 3; Weight: 10 (30)

Sample SaaS Provider Security Risk Score: 9/16 (56%); Weight: 40 (90)

Business Continuity Risk Dimension Questions and Sample Certainty Rating Weights

Scalability: How certain are you that the online software can scale to accommodate an increase in
usage or storage volume by your organization? Rating: 2; Weight: 10 (20)

Testing: How certain are you that your organization tested the online software before adopting it?
Rating: 1; Weight: 10 (10)

Upgrade: How certain are you that the vendor has a policy for notifying you before an upgrade to
the online software? Rating: 3; Weight: 10 (30)

Support: How certain are you that the vendor provides support by phone? Rating: 1; Weight: 10 (10)

Customization: How certain are you about whether or not the vendor allows for customizing the
online software? Rating: 3; Weight: 10 (30)

Sample SaaS Provider Business Continuity Risk Score: 10/20 (50%); Weight: 50 (100)

Integration Risk Dimension Questions and Sample Certainty Rating Weights

Functionality: How certain are you that a list of functional requirements was outlined by your
organization before selecting the online software? Rating: 1; Weight: 10 (20)

Functionality: How certain are you that the online software meets all or most of your functional
requirements? Rating: 1; Weight: 10 (10)

Compatibility: How certain are you that the online software can work/communicate with other
software used by your organization, if required (e.g., exchange data)? Rating: 1; Weight: 10 (40)

Compatibility: How certain are you that the online software vendor will assist in transferring your
data from your in-house system to the online software, if necessary? Rating: 2; Weight: 10 (30)

Usability: How certain are you that the online software is easy to navigate and use based on your
experience with the online software? Rating: 2; Weight: 10 (20)

Sample SaaS Provider Integration Risk Score: 7/20 (35%); Weight: 50 (120)
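
The framework does not prescribe an implementation, but the sample scoring above is mechanical
enough to automate. The sketch below is a minimal illustration, assuming the conventions visible
in the samples: ratings of 1-4 with 0 excluded as not applicable, a dimension score equal to the
sum of ratings over four times the number of applicable items, and (as in the Security and
Business Continuity samples) a parenthesized weighted value of rating times weight. All names
are illustrative, not part of the published framework.

# Minimal illustration of the S-CRA sample scoring above; names and
# structure are assumptions, not part of the framework itself.
from dataclasses import dataclass

@dataclass
class Item:
    question: str
    rating: int       # 1 = very certain ... 4 = very uncertain; 0 = not applicable
    weight: int = 10  # subjective, optional, 1-10

def dimension_score(items):
    """Return (raw sum, maximum possible, percent) for one risk dimension,
    skipping items marked not applicable (rating == 0)."""
    applicable = [i for i in items if i.rating != 0]
    raw = sum(i.rating for i in applicable)
    maximum = 4 * len(applicable)
    return raw, maximum, round(100 * raw / maximum)

# The Security sample from the table above:
security = [
    Item("Vendor staff background checks", 2),
    Item("Controls restricting staff access to data", 1),
    Item("Breach notification policies", 3),
    Item("Intrusion/phishing/malware controls", 3),
]
print(dimension_score(security))  # (9, 16, 56)

Under this reading, a higher percentage indicates greater uncertainty about the provider, and
therefore greater risk exposure, for that dimension.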



Appendix VI: Explanation of Risk Sub-Dimension Items in S-CRA Framework

Security Risk Sub-Dimension Items

Access: Risk concerns regarding who has access to data, the type of access, and provisions to
prevent unauthorized access.

Integrity/Confidentiality: Risk concerns regarding the privacy and protection of data while it is
stored, retrieved, and transferred.

Transmission: Risk concerns regarding encrypting and safeguarding data while it is being
transmitted electronically over the Internet.

Data Location: Risk concerns regarding where data is physically stored (in-country or
out-of-country) and whether data is protected under local laws.

Data Segregation: Risk concerns regarding multi-tenancy provisions to ensure that each SaaS
client's data and usage are kept separate from those of other clients.

Ownership: Risk concerns regarding establishing ownership of data on service termination.

Compliance: Risk concerns regarding the client's ability to meet certain legal reporting and
operational requirements and the provider's ability to meet accreditation requirements.

Business Continuity Risk Sub-Dimension Items

Availability: Risk concerns regarding performance reliability of the SaaS solution.

Recovery: Risk concerns regarding the provider's recoverability in the event of a disaster and its
contingency plans.

Scalability: Risk concerns regarding the flexibility of the SaaS solution to accommodate increases
in usage and storage.

Documentation: Risk concerns regarding availability of succinct electronic and/or print
documentation of the SaaS solution for reference by clients.

Training: Risk concerns regarding availability of training for clients on using the SaaS solution.

Testing: Risk concerns regarding provisions allowing for client testing of the SaaS solution.

Upgrade: Risk concerns regarding the client's receiving timely notifications of system upgrades
and upgrades being conducted during non-peak usage hours.

Support: Risk concerns regarding availability of phone, email, and/or web trouble ticket support
and qualified support staff.

Pricing: Risk concerns regarding penalties, subscription price increases, and pricing related to
training, customization, and other tie-in services.

Provider Management: Risk concerns regarding client personnel responsible for managing the
service relationship with the provider.

Customization: Risk concerns regarding the level and type of customization allowed by the
provider and the availability of tools for making the customization.

Integration Risk Sub-Dimension Items

Usability: Risk concerns regarding ease of navigation and user-friendliness of the SaaS solution.

Compatibility: Risk concerns regarding the ability of the SaaS solution to integrate with internal
systems if necessary and the provider's ability to support integration.

Functionality: Risk concerns regarding the ability of the SaaS solution to meet established
functional requirements set by the client.

Reporting: Risk concerns regarding availability of adequate reporting functions in the SaaS
solution.
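
The taxonomy above also maps naturally onto a simple data structure. A sketch follows, assuming
Python and purely illustrative names; the dimension and sub-dimension labels are taken from this
appendix, while the structure and function are assumptions:

# Illustrative encoding of the S-CRA risk taxonomy; labels come from
# Appendix VI, the structure and helper function are assumed.
S_CRA_TAXONOMY = {
    "Security": ["Access", "Integrity/Confidentiality", "Transmission",
                 "Data Location", "Data Segregation", "Ownership", "Compliance"],
    "Business Continuity": ["Availability", "Recovery", "Scalability",
                            "Documentation", "Training", "Testing", "Upgrade",
                            "Support", "Pricing", "Provider Management",
                            "Customization"],
    "Integration": ["Usability", "Compatibility", "Functionality", "Reporting"],
}

def validate_item(dimension: str, sub_dimension: str) -> bool:
    """Check that a questionnaire item falls inside the published taxonomy."""
    return sub_dimension in S_CRA_TAXONOMY.get(dimension, [])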
