Lionel Bernard
A Thesis
Submitted to the
Graduate Faculty
of
University of Maryland University College
in Partial Fulfillment of
the Requirements for the Degree
of
Doctor of Management
UMI 3482543
Copyright 2011 by ProQuest LLC.
RISK ASSESSMENT FRAMEWORK FOR EVALUATING SAAS i
Abstract
Table of Contents
List of Tables
1.1. Results from Survey of Current SaaS Selection Methods (n = 252) ............................... 18
3.3. Factor Analysis for SaaS Business Continuity Risk (n = 114) ........................................ 67
3.6. Correlation Matrix of Latent Variables (Business Continuity Risk Construct) ................. 68
4.11. Level of Satisfaction and Security Risk Certainty Hypothesis Test Analysis .................. 81
4.13. Level of Satisfaction and Integration Risk Certainty Hypothesis Test Analysis............... 83
List of Figures
2.6. Comparison of cloud risk factors, categories, and controls in Heiser’s cloud risk factors
and the ENISA, CSA, and FedRAMP cloud risk assessment frameworks ..................... 46
2.7. Sample risk assessment questions based on Heiser’s cloud risk factors and the ENISA,
CSA, and FedRAMP cloud risk assessment frameworks ............................................... 47
3.1. Likert-like SaaS adoption satisfaction rating scale and explanation ............................... 60
Chapter 1
Introduction
Overview
What Is SaaS?
Given that the web browser is the primary medium used by subscribing
organizations to access SaaS applications, SaaS is poised to dramatically expand the
status of the web browser from its original role as the window to the Internet to the
standard operating system for computers. Google, one of the pioneers in SaaS
provision, is banking on the web browser’s expanded role as the gatekeeper to the
SaaS domain. Google introduced its heralded Chrome web browser in 2008 not only as
a means of supporting its SaaS offerings, such as Gmail and Google Apps, an online
word-processing and spreadsheet suite, but also as a strategy to position
the Chrome browser as the predominant tool for accessing online applications. Unlike
Microsoft’s Internet Explorer (IE) and Mozilla’s Firefox browsers, Chrome was designed
specifically to optimize the performance of online applications, offering faster response
times, a friendlier interface, and richer features (Havenstein, 2008). As the use of online applications
increases, a free web browser that facilitates SaaS access and use could monopolize
the SaaS distribution channel by forcing providers to adjust their applications to
accommodate this browser. Microsoft embarked on a similar path more than a decade
ago. Sensing the growing availability and usage of the Internet, Microsoft purposely
linked its free IE browser to its Windows operating system (OS), thus annihilating its
main competitor, Netscape, and allowing Microsoft to leverage IE’s subsequent
widespread use to push its own content to Internet users. Google’s Trojan horse
strategy to benefit from the SaaS trend appears to be similar.
For most organizations, the adoption of SaaS is not a sudden leap of faith but,
rather, a creeping deployment entailing gradual integration and replacement of in-house
applications. This cautious approach to SaaS selection and integration stems from both
latent reservations about entrusting vital applications to an outside provider and the
perception that SaaS transitioning involves a fair amount of complexity. Once an
organization comes to understand the benefits of SaaS during its evaluation process,
these fears tend to be outweighed by the technology’s demonstrable economic value. As a litmus test, most
SaaS has evolved and matured as a technology to the point where it is now
growing in acceptance and adoption and becoming the platform of choice for many
organizations. McNee (2007) notes this evolutionary phenomenon and describes a
distinction between SaaS 1.0 and SaaS 2.0. SaaS 1.0 includes applications that
emphasize functionality and cost effectiveness but are limited in configurability.
Organizations gravitate to commoditized SaaS 1.0 applications because they are niche-
oriented and inexpensive, can be deployed quickly, and have a low total cost of
ownership (TCO). In contrast, the SaaS 2.0 applications described by McNee (2007),
which began to emerge only in 2005, are much broader in scope and offer organizations
greater flexibility in configuration and integration. Whereas SaaS 1.0 applications are
focused on a specific need, such as video conferencing, SaaS 2.0 offerings combine a
variety of high-end business functions, such as ERP and human resource management
(HRM), into a single integrated SaaS package.
The evolution of SaaS has been described as taking place in four distinct phases
of maturity. Miller (2008) notes that during the 1990s and at its first maturity level, SaaS
was embodied by the idea of the application service provider (ASP). ASP entailed a
SaaS adoption by both small and large companies appears to have increased as
the technology has matured and become more credible. One estimate places SaaS
spending as of 2008 at 17% of software budgets for large companies (more than 1,000
employees), approximately 11% for mid-sized companies (more than 100 employees),
and about 26% for small companies (fewer than 100 employees). These numbers represent a
proportional increase from only 5% of software spending 3 years earlier among
companies of all sizes (SnapLogic, 2008). Just as the advent of the Internet ignited a
movement toward this medium as a repository of digitized information of all types, SaaS
is slowly luring companies and individuals alike to engage in such online activities as
storing photos and videos; using free or subscription-based web mail services; and
using online applications, such as Salesforce.com to manage sales contacts and
Elephantdrive.com to store documents. Despite its already impressive growth, SaaS
promises to grab an even bigger piece of the software spending pie of organizations’ IT
budgets. Surveys and insights by prominent IT trade publications, research
organizations, and many IT pundits (Donston, 2008; Orr, 2007; SnapLogic,
2008; eWeek, 2008; Weil, 2008a) conclude that between 70% and 84% of organizations,
both large and small, are currently considering adopting SaaS. Another research team
forecasts that SaaS growth will climb to 56% of the software market by 2011, with a
compound annual growth rate (CAGR) for the industry of 28% through 2012 (Mertz et
al., 2008). This projected growth rate will dramatically outpace conventional shrink-
wrapped software growth and far exceed the software industry’s CAGR of 11% for the
same time period (Mertz et al., 2008).
The merits of this rapid growth are readily apparent and serve to further increase
the appeal of SaaS, but as is the case in any paradigm shift, SaaS growth will also
prove to be disruptive to existing software systems, IT infrastructure, and IT staff in
organizations that have adopted or are considering adopting this technology. In
highlighting this disruptive tendency, Weil (2008b) notes that widespread adoption of
cloud computing services, including SaaS, not only causes organizations to reduce IT
staff and infrastructure—thus forcing some IT professionals to consider career
alternatives—but also shifts the high demand for IT resources, such as hardware and
labor, from non-IT organizations to cloud service providers.
Several factors have supported and continue to support the high growth of SaaS.
One of these is the advent of high-speed Internet in the form of broadband. The
expansion of Internet speed and the decreasing cost of Internet access over the last few
decades have been key elements in the growth of SaaS. Some estimates indicate that
broadband access in the form of cable modem, DSL, and fixed wireless had increased
to 69% of the total U.S. population by 2007; this trend is expected to further increase to
71% by 2012 (Department of Labor, Bureau of Labor Statistics, 2008). This means that
two of every three individuals in the United States now have access to high-speed
Internet in the workplace or at home. Not surprisingly, existing and new computer
companies will continue to develop software service models that piggyback on the
Internet as a delivery medium. Cell phones and other mobile devices with high-speed
Internet access are also quickly becoming feasible platforms for SaaS delivery.
Beyond shrinking broadband costs, other cost and benefit incentives also drive
organizational IT decision-makers toward SaaS adoption. SaaS has evolved beyond a
mere computing trend. It helps to improve computing efficiency and the overall bottom
line of adopting organizations. In terms of cost, SaaS offers a strong incentive in
Bajaj, Bradley, and Cravens (2008) note that some software benefits may be
“intangible and non-financial.” SaaS offers credible non-financial benefits, but opinions
vary on the extent and types of benefits inherent in SaaS; a summary of financial and
non-financial benefits is shown in Figure 1.3. Bangert (2008) emphasizes the high
degree of scalability, ease of accessibility, and low startup costs of SaaS, whereas
Blokdijk (2008) points out its cost predictability and low risk factors. Bielski (2008)
suggests that the anywhere availability of SaaS can improve coordination and
collaboration among team members. Orr (2006) describes additional tangible benefits of
SaaS, including the fact that SaaS enables users to generate and dispense information
at any time and from any location, that it is less disruptive than the installation of
traditional software, that it promises a quick adoption timeframe, and that it offers a
lower cost of ownership. Pring et al. (2007) also suggest that because SaaS
applications are usually built on open standards, they are easier to integrate with
existing in-house or other SaaS applications in an organization’s operating environment.
The financial and non-financial benefits of SaaS have an uneven and subjective
impact in different SaaS categories and in varying organizations. For example, web
conferencing, which is the leading SaaS growth category, is popular among
organizations because it offers a cost-effective and convenient alternative to traveling
for face-to-face meetings. In contrast, ERP SaaS applications, although just as cost
effective and beneficial as any other SaaS offering, are experiencing slower adoption
largely because of the integration complexity and perceived security risks of these
applications.
SaaS providers. SaaS providers are quickly emerging as the predominant IT employers
and purchasers of computing hardware and services.
channels supported, whereas larger non-SaaS vendors who are actively transitioning
their products to SaaS cover most of the feedback management requirements, including
telephone and snail-mail surveys. When these large vendors complete the transition to
SaaS, their more extensive platforms may well attract survey customers away from
smaller vendors.
Because access to subscriber software is accomplished via the Internet and the
subscriber is essentially blind to the implementation at the provider’s facility, the term
“cloud computing” is also used to refer to SaaS applications. However, cloud computing
encompasses a much broader concept of network-based computing. According to Miller
(2008), the term ”cloud” implies a significant number of high- and low-end
interconnected computers charged with collectively processing, storing, and delivering
information to clients. No one knows for certain where the concept or the phrase “cloud
computing” originated. It may have evolved from a standard practice among IT
professionals of depicting the Internet medium in network diagrams using the image of a
cloud. The online retailer Amazon.com is frequently credited with pioneering cloud-
computing technology when it introduced its Elastic Compute Cloud (EC2) service in
2006, enabling customers to store and run online applications on its data center server
farms (Worthem, 2008).
Cloud computing is the underlying massive computing power that makes SaaS
possible. As shown in Figure 1.4, the SaaS paradigm is merely a subset of available
cloud-computing concepts that include service-oriented architecture (SOA), web
services, the platform-as-a-service (PaaS) concept, the infrastructure-as-a-service
(IaaS) concept, Web 2.0, private and public clouds, and on-demand computing (Miller,
2008). Data centers and high-speed Internet connectivity form the foundation of cloud
computing and, hence, SaaS. SOA, which is intended for software developers,
describes the underlying software structure of SaaS applications. SOA provides a
model requiring that software be developed with discrete functions that are exposed as
web services to an open environment through a standardized addressing scheme that is
published and used by a subscribing entity to perform a specific task, all via the web.
Web services are the end result of an SOA implementation, enabling organizations to
essentially “rent” software functionality piecemeal and integrate that functionality into
custom in-house applications. Ariba, a web services vendor, provides a set of
procurement web services that can be integrated into an in-house business application
to evaluate and calculate raw material costs from a variety of suppliers. Although SOA
has made some inroads with major application vendors that have SOA-enabled their
product offerings, it has achieved greater acceptance in mashup software development.
For example, developers are now using Google Apps web services to integrate
mapping features in their custom business applications (“Creating the Cumulus,” 2008).
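The SOA pattern described above, a discrete function exposed as a web service at a standardized, published address, can be sketched in a few lines of Python. The endpoint, the `quote_cost` function, and the unit price below are hypothetical illustrations, not a reproduction of Ariba’s actual procurement services:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

def quote_cost(quantity: int) -> float:
    # A discrete business function: price a quantity of raw material.
    # The 12.5 unit price is an arbitrary illustration.
    return round(quantity * 12.5, 2)

class QuoteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The function is exposed at a standardized address: /quote?qty=N
        if self.path.startswith("/quote"):
            qty = int(self.path.split("qty=")[1])
            payload = json.dumps({"quantity": qty, "cost": quote_cost(qty)})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload.encode())
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # suppress per-request logging

# Publish the service on an ephemeral local port ...
server = HTTPServer(("127.0.0.1", 0), QuoteHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# ... and let a "subscriber" rent the function over HTTP.
url = f"http://127.0.0.1:{server.server_port}/quote?qty=100"
result = json.loads(urlopen(url).read())
server.shutdown()
print(result)  # {'quantity': 100, 'cost': 1250.0}
```

A subscribing application needs to know only the published address and the response format; the implementation behind the endpoint remains the provider’s concern, which is precisely the decoupling that SOA promises.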
PaaS and IaaS, two emerging cloud-based technologies, are being hailed as the
next phase of cloud-computing offerings that organizations can leverage to improve
their IT systems and reduce the costs associated with establishing and maintaining
these systems (Krill, 2008). With PaaS, an organization no longer has to procure the
hardware and network access required to host its mission-critical web applications. As
with SaaS, PaaS vendors provide storage space and an accessibility platform for lease to
organizations to host applications that can be accessed remotely through the Internet or
through a secure channel between the organization and the PaaS provider. PaaS
providers typically have data center facilities with stringent security parameters to
safeguard physical and virtual access to their client data. The cost structure of PaaS
offerings is similar to SaaS from a per-usage perspective, but the usage component in
PaaS is the amount of storage space required to store business applications and the
amount of bandwidth required to access those applications. IaaS extends the PaaS
concept a bit further, enabling organizations to rent not only storage space and
bandwidth to store and access their applications but also the entire support
infrastructure, including servers, connectivity, data center space, software, firewalls,
access control, and remote connectivity devices.
The key difference between PaaS and IaaS is the level of control provided to
adopters of these cloud-based services. PaaS limits control over leased cloud
resources to security access and storage allocation, whereas IaaS grants the leasing
organization more granular control over security, configuration, access, and equipment
(Miller, 2008). The amount of control and the level of flexibility available to the cloud
service subscriber have led to a categorization of cloud services as either public or
private (Bittman, 2010). Public cloud services entail a dependency on a pay-as-you-go,
provider-controlled framework, such as SaaS, IaaS, and PaaS. Public clouds embody a
strategy of outsourcing IT systems without owning them; they enable
organizations to leverage the competency, support, and maintenance of seasoned IT
providers at a reasonable financial cost but at the expense of marginalized control. In
contrast, private clouds represent a strategy of leveraging external IT resources without
sacrificing control. For example, an international organization that builds a cloud-like
infrastructure in-house but limits its use and tenancy to geographically dispersed
subsidiaries is regarded as providing a private cloud service (Bittman, 2010).
Disadvantages of SaaS
Despite the promising future of SaaS, the technology has some significant
downsides that, if left unaddressed, could negate the excitement among potential
organizational adopters. These disadvantages include the need for reliable Internet
access, the fact that SaaS may be slower than in-house systems, limited feature
offerings for some functions, and the potential for breaches of security (Miller, 2008).
Other SaaS shortcomings that have been suggested by IT industry insiders include the
following (Pring, Desisto, & Bona, 2007):
A recent eWeek survey of 252 chief information officers (CIOs) from a variety of
organizations illustrates the empirical impact of SaaS risks on organizations that have
already adopted SaaS applications (eWeek, 2008). According to the survey, the
principal negative experience reported by nearly half of the organizations represented
was that Internet downtime rendered SaaS applications inaccessible to the
organizations’ staff and reduced productivity. A close second in experienced risks was
the realization that SaaS applications were not as customizable as the organizations
had expected or as conventional software. Other significant risks experienced by the
survey participants in smaller numbers included high long-term costs, interface and
feature changes without prior notification, and increased information insecurity.
Research Purpose
At some point in the near future, nearly all business managers will be forced to
consider adopting SaaS as an IT solution because of the obvious cost and convenience
benefits that come with the technology. However, the typical decision process used by
management considers only the attractive incentives of using SaaS. The risks of this
new technology are largely ignored or not evaluated properly by IT decision-makers.
Risk assessment during the SaaS evaluation phase of the adoption process is
critical to ensuring information system sustainability and reducing uncertainty.
Risk assessment is widely used as a management tool; the process involves scanning
the environment, testing, or reviewing information sources to establish trustworthiness,
uncover threats, and inform the decision-making process. A key component of
organizational risk management, risk assessment includes such diverse areas as
commercial and political analysis prior to international investment, business continuity
planning, and evaluation of supply chain disruption vulnerabilities (Khattab, Aldehayyat,
& Stein, 2010; Lamarre & Pergler, 2010).
Management can turn to a variety of IS selection methods that may aid in the
SaaS selection process, but these methods are suited for adopting specific IS solutions
and may not address some of the unique requirements of a SaaS system. For example,
the business readiness rating (BRR), an open-source software (OSS) evaluation
method, proposes rating categories for OSS, including functionality, operational
characteristics, support, documentation, development process, and software attributes
(Wasserman, Murugan, & Chan, 2006). However, these categories exclude some
critical SaaS risk concerns, such as security, business continuity, and confidentiality.
Lacking a framework for SaaS evaluation, IT decision-makers are resorting to informal
measures in their evaluation of SaaS, measures that may not provide a comprehensive
picture of the risks, costs, benefits, and pitfalls of SaaS adoption. Eagerness to realize
the perceived value and necessity of SaaS may be the reason for this informality in
SaaS selection among organizations. Table 1.1 highlights a few of the most popular
evaluation practices used by IT decision-makers when considering SaaS adoption,
including speaking to current users, scanning trade publications and web sites,
checking references, attending seminars and webinars, and using search engines
(eWeek, 2008). The intended outcome of the current research is to develop a tool that
enables managers to more formally and reliably analyze the risks of available SaaS
choices before deciding on the best option.
Table 1.1
Results from Survey of Current SaaS Selection Methods (n = 252)
Selection Method                                   % of Respondents
Other                                              3%
Research Questions
impressive sales pitches by SaaS providers, the potential financial and non-financial
benefits, the provider references, the appealing sales webinars, and the articles in IT
trade journals espousing the merits of cloud computing and view SaaS adoption as an
engaging decision-making process vital to sustaining and improving organizational
operations. Given that SaaS is still an emerging technology, no proven methodology
exists for selecting the right options. Existing cloud evaluation guidelines and
frameworks are too broad in scope and advocate for evaluating cloud risk factors that
are sometimes irrelevant to SaaS selection. Legacy software selection methods
targeted at COTS selection and other methods targeted at OSS selection fall short by
omitting some steps that are unique to the SaaS selection process, including conducting
risk assessment and considering integration requirements.
The inherent risks and ownership model of SaaS lead this research to raise
several necessary pre-adoption questions concerning the overall viability of the SaaS
paradigm for supporting mission-critical organizational IT functions. These questions
include the following:
framework for SaaS adoption. Chapter 2 reviews existing decision-making and risk
assessment literature to support the argument for a rational risk-based approach to
SaaS selection. Chapter 2 also discusses conventional software and cloud-specific
selection approaches and presents a synthesized conceptual model and hypothesis for
this research that accommodates the unique characteristics of SaaS. Chapter 3
presents the methodology used in the research and gives an overview of the data
collection approach. Chapter 4 delves into the crux of the research with a
comprehensive analysis of the data collected and a determination of whether or not the
empirical facts support the underlying hypothesis. The final chapter explores the
implications of the findings on management decision-making as they pertain to online
software adoption, discusses limitations of this research, and highlights potential areas
for further study on this topic.
Chapter 2
Introduction
SaaS selection is essentially a decision process with the aim of maximizing the
value of the software solution adopted by the organization. Well-functioning
organizations typically use the decision process in a wide variety of business scenarios
to analyze and determine the most suitable solution from available options. Regardless
of whether the decision scenario involves developing a new product, selecting a new
supplier, or expanding internationally, the internal decision process is leveraged to
ensure consistency and to realize the organizational goal of maximizing value from
expenditure of revenue. The importance of consistency and completeness in the
decision-making process is supported by empirical research confirming that a prudent
decision process reduces irrational and erratic decision-making and improves outcomes
(Aven & Korte, 2003; Busenitz & Barney, 1997; Tasa & Whyte, 2005).
A vital component of any decision process is the information that serves as input
to the process. Information brings rationality to the decision process and allows for a
decision that takes relevant factors into account. Information can also positively and
negatively influence cost, success, timeliness, and risk outcomes (Zeng, An, & Smith,
2007). Reaching an informed decision after establishing a clear goal requires
concerted effort to gather and process information and to select an action option in a
systematic manner. Figure 2.1 depicts a rational decision process that
is applicable to most decisions made in efficiently functioning organizations. A key
benefit of a rational decision-making process is that a different decision-maker can
replicate the process and derive the same outcome, thereby auditing the credibility of
the original decision.
Having gathered adequate information, the next challenge for the decision-maker
is processing that information in a formalized manner using one or more of the
axiomatic decision models available. Normative decision literature presents a host of
decision models to choose from in processing decision-relevant information. The
underlying objective of normative decision models is to provide pragmatic methods that
allow decision-makers to make rational decisions based on a reality that must conform
to the model (Gilboa, 2009; Peterson, 2009). Essentially, normative models tell
decision-makers how to make decisions using available information.
Among the popular axiomatic decision models used for processing decision-
relevant information are the decision matrix, Bayes’s theorem, and game theory. The
common thread among these classical models is the concept of acts, states, and
outcomes (Gilboa, 2009). Acts are actions or behavior undertaken by the decision-
maker or a third party over whom the decision-maker has direct control. The decision
matrix in Table 2.1 shows an example of the acts in a software selection decision
process as buying, building, or—taking a more recent approach—outsourcing the
software to a SaaS provider. Non-action, that is, not buying the software, is considered
an equally significant act in the decision matrix. In contrast to acts, states are
conditional elements of the decision matrix over which the decision-makers cannot exert
any degree of control. States in a similar software decision process would include
functional aspects of the software, cost, and—in the case of outsourced software—risks
associated with adopting the software. The final component of the normative decision
model is the rational outcome of each intersecting act and state. Outcomes are a
consequence of the actions of the decision-maker given the prevailing conditions. The
outcome stated in the decision matrix is primarily based on a combination of the
expectation of the decision-maker, the nature of the state, and a probability factor
relating to the chances that the expected outcome will be realized. Outcomes can be
expressed in descriptive form or quantitatively.
Referring to the software decision matrix example in Table 2.1, buying software
package A will cost the organization $10,000 (the lowest purchase cost outcome), but
the software has only 50% of the desired functionality. The decision-maker has rated
the probability that security and other pertinent risks will impede the software buying
decision as low. The other factor used to express outcome in the decision model is the
probability that an outcome will occur. Probability may be expressed as a best guess
estimate of the likelihood of an outcome or calculated with an empirical approach using
the frequency of past observations of outcomes given similar conditions and actions
(Jeffrey, 1983). Outcome probability may be derived either objectively or subjectively.
The decision-maker relies on personal intuition, experience, and judgment in subjective
probability statements, whereas an objective approach relies on mathematical estimates
of probability based on the observed frequency of prior experiences.
Table 2.1
Software Selection Decision Matrix
Table 2.2
Software Selection Decision Matrix with Probability-Weighted Factors
The outcomes derived from the normative decision model, given specified
actions, states, and outcomes, are the impetus of normative decision-making in a
business environment. The outcomes allow for objective comparison of the value of
each action/state pairing and provide guidance on appropriate action given the decision
domain. At this point, the decision-maker is capable of selecting the option from among
the derived outcomes that expresses the highest value based on the decision-maker’s
expectations. With the selection of a specific action based on the optimal outcome to
realize the expected business value, the business decision cycle is complete. The focus
then shifts to the requirement on the part of the decision-maker to plan and implement
the action strategy and frequently monitor the application to gauge the effectiveness of
the decision in benefiting the organization. In most major organizational decisions
involving financial, time, and labor asset commitment, the decision process of gathering
and processing relevant information and selecting the optimal outcome using
established rational decision methods enables the organization to maximize the
expected utility of outcomes by reducing the costs and risks associated with unfavorable
outcomes and increasing the benefits associated with the optimal outcome. This well-
tested and axiomatic approach to rational decision-making has evolved to become the
standard in organizational settings (Peterson, 2009).
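As a rough sketch of this process, the Python snippet below builds a small decision matrix of acts and states, weights each outcome’s utility by its probability, and selects the act with the highest expected value. All of the acts, utilities, and probabilities are invented for illustration; they do not reproduce the figures in Table 2.1 or Table 2.2:

```python
# Hypothetical decision matrix: acts (buy, build, saas) crossed with
# states (functionality, cost, risk). Each cell holds a pair:
# (utility of the outcome, probability that the utility is realized).
matrix = {
    "buy":   {"functionality": (50, 0.9), "cost": (-10, 1.0), "risk": (-5, 0.2)},
    "build": {"functionality": (90, 0.6), "cost": (-40, 1.0), "risk": (-5, 0.3)},
    "saas":  {"functionality": (70, 0.8), "cost": (-15, 1.0), "risk": (-20, 0.4)},
}

def expected_value(outcomes):
    # Probability-weighted sum of utilities across all states for one act.
    return sum(utility * prob for utility, prob in outcomes.values())

scores = {act: expected_value(outcomes) for act, outcomes in matrix.items()}
best_act = max(scores, key=scores.get)
print(best_act)  # → buy
```

Negative utilities model costs and risk exposure; the act chosen is the one that maximizes expected utility, which is exactly the selection rule the normative model prescribes.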
suggests that the decision-maker will stop gathering information at some point and
commit to the option presenting the best value (Zellman, Kaye-Blake, & Abell, 2010).
Bayes’s theorem is one of the more widely used classical theories in business
decision-making (Aven & Korte, 2003; Damodaran, 2008; Reneke, 2009). The core
value in Bayes’s theorem is in determining the probability that an expected utility will be
realized. This probability is factored with the utility value to derive the outcomes for a
given action/state pairing in the decision matrix. The theorem, illustrated in Figure 2.2,
states that the conditional probability of a utility X given state Y is the product of the
prior probability of the utility (X) and the probability of the state (Y) given the utility (X),
divided by the probability of the state (Y). For example, the software selection decision matrix
discussed earlier includes as a weighted factor the corresponding probability of the risk
state. However, if the utility of a specific state of risk (i.e., security) is used to evaluate
the software options and the probability of the security risk of an action option ranges
from low to high for each observable risk, then the probability of the utility for the risk
state would depend on the historically observed probability of security incidents occurring on
any particular day with other adopters who bought, built, or leased the software and the
probability that the incident posed a palpable risk to adopters. To fully realize the value
of a decision analysis based on Bayes’s theorem, the decision-maker will need to
adequately assign probabilities and utilities to outcomes.
P(X|Y) = P(Y|X) × P(X) / P(Y), where P(X|Y) is the conditional probability of utility X (outcome) given the occurrence of event state Y.
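To make the calculation concrete, here is a minimal numeric sketch of Bayes's theorem applied to the security-risk example above; all probability values are invented for illustration and do not come from this research:

```python
# Bayes's theorem: P(X|Y) = P(Y|X) * P(X) / P(Y)
# Illustrative (assumed) quantities:
#   X = the expected utility is realized (no damaging security incident)
#   Y = a security incident is observed among past adopters

p_x = 0.80              # prior probability the utility is realized
p_y_given_x = 0.10      # incidents observed even when the utility is realized
p_y_given_not_x = 0.60  # incidents observed when the utility is not realized

# Total probability of observing an incident, P(Y), by the law of total probability
p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)

# Posterior probability that the utility is realized, given an observed incident
p_x_given_y = p_y_given_x * p_x / p_y
print(round(p_x_given_y, 3))  # 0.4
```

Observing an incident lowers the decision-maker's probability that the utility will be realized from 0.80 to 0.40, which is exactly the kind of belief revision the decision matrix factors in.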
Both Hollander (2000) and Starinsky (2003) emphasize that software acquisition
can affect the business and that the risk consequences of this acquisition must be
considered in the decision process by seeking out more information. Liang-Chuan and
Chorng-Shyong (2007) extend this observation further in their IS investment framework,
noting that although most IT acquisitions are risky endeavors, most of the available
frameworks for analyzing IT investments fail to consider the risk implications. Specific to
SaaS, Heiser (2010a) recommends a risk assessment preceding SaaS adoption as useful in gauging provider ability to meet confidentiality, integrity, and availability requirements, particularly for organizations with sensitive data. A valid framework for assessing software risk must both build on and expand conventional software selection and risk analysis practices.
The traditional risk analysis framework assumes that risk is measurable as the
probability that a failure event will occur. The first step in a conventional risk framework
involves identifying suitable or relevant risk elements and subsequently observing
occurrences of the risk element in the system to derive a predictive probability of future
impact of the risk element on the system. Implied in the risk analysis process is the
need to determine the relevance of potential risk elements to the system’s adequate
functioning or, in the case of IT acquisition, the relevance of the risk to the acquisition’s
outcome. Aven (2003) argues that the best method for determining the relevance of a
specific risk from among the pool of possible risks is to determine whether a correlation
exists between individual risk elements and the failure of the system. Engineers and
scientists typically rely on reliability analyses to determine the risk of each component’s
contributing to system failure (Corran & Witt, 1982). Koller (2005) notes that without this
concerted effort to define and accredit all relevant risks, it is difficult to determine risk
impact and probability in a normative decision approach.
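One way to operationalize the correlation test that Aven (2003) describes is the phi coefficient on a 2×2 table of risk-element occurrence versus system failure; the counts below are invented for illustration:

```python
# Phi coefficient for a 2x2 contingency table (illustrative counts only):
#                     system failed   no failure
# risk observed            a              b
# risk not observed        c              d
a = 12  # risk element observed, system failed
b = 8   # risk element observed, no failure
c = 3   # risk element not observed, system failed
d = 27  # risk element not observed, no failure

# phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d))
phi = (a * d - b * c) / ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5

# phi near +1 indicates a strong association between the risk element and
# system failure (i.e., the element is relevant); phi near 0, irrelevance.
print(round(phi, 2))
```

A clearly positive phi would justify keeping the element in the pool of relevant risks; a value near zero would justify filtering it out before the normative decision analysis.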
• The European Network and Information Security Agency’s (ENISA, 2007) cloud-
computing risk likelihood and impact risk assessment framework
utility, expressed as an overall rating score for each provider. Table 2.3 summarizes the
key constructs of the proposed S-CRA model, including the risk dimension and sub-
dimensions, software satisfaction, risk certainty, relevant risks, risk rating, weight factor,
and overall risk rating score.
(Figure: in the S-CRA model, a risk assessment for each dimension, from the security dimension through the integration dimension, is applied to every SaaS provider under consideration, Provider #1 through Provider #x.)
Table 2.3
S-CRA: Summary of Key Conceptual Model Components

Risk Dimension: Risk factors that negatively influence the SaaS adoption experience.
Risk Sub-dimension: Subfactors in risk dimensions that describe specific and unique vulnerabilities.
Software Satisfaction: The adopting organization's level of approval regarding its experience with the SaaS application.
Risk Certainty: The adopting organization's level of awareness of certain risks during the evaluation phase.
Risk Rating: A subjective certainty rating used by the adopting organization to evaluate the providers' ability to address relevant risks.
Weight Factor: An optional subjective percentage weight factor applied to each risk sub-dimension item based on the significance of the item to the adopting organization.
Overall Rating Score: Composite rating and/or weight score for each relevant risk sub-dimension item for each provider considered; used by the adopting organization as a final measure of outcome to distinguish SaaS provider options.
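As a hedged sketch of how the rating, weight factor, and overall rating score constructs in Table 2.3 could combine, consider the following; the sub-dimension items, ratings, and weights are invented for illustration and are not prescribed by the framework:

```python
# Hypothetical S-CRA scoring sketch: each relevant risk sub-dimension item
# receives a subjective rating, optionally multiplied by a percentage weight
# reflecting the item's significance, and the products are summed per provider.

weights = {"data_location": 0.40, "encryption": 0.35, "support_sla": 0.25}

providers = {
    "Provider 1": {"data_location": 4, "encryption": 5, "support_sla": 3},
    "Provider 2": {"data_location": 5, "encryption": 3, "support_sla": 4},
}

def overall_score(ratings, weights):
    # Composite rating-and-weight score for one provider
    return sum(ratings[item] * w for item, w in weights.items())

for name, ratings in providers.items():
    print(name, round(overall_score(ratings, weights), 2))
```

The provider with the highest composite score becomes the distinguishing final measure of outcome, per the Overall Rating Score construct.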
Before delving into a discussion of the S-CRA framework proposed for this
research, it may be helpful to take a closer look at a few prominent software selection
methods and evaluate their shortcomings when applied to a SaaS adoption process.
The analytical hierarchy process (AHP) and analytical network process (ANP)
are both decision-making frameworks developed by Saaty (1980) to solve complex
decision problems involving multiple criteria, objectives, and goals. The frameworks
AHP and ANP are readily applied to many software selection processes and
naturally complement any process involving selection from among competing
alternatives. Ayağ and Özdemir (2007), in their study on effective ERP software selection, propose an approach that leverages the ANP/AHP framework. According to
the study, the first step when using AHP in an ERP software selection process is to
express requirements as a statement of goals. The next step involves outlining selection
dimensions, which might include system cost, vendor support, flexibility, functionality,
reliability, ease of use, and technical advantage. In the final step, the decision-maker
makes a pairwise comparison by ranking, on a scale of 1 to 9, his or her preference with
regard to the alternatives for each dimension. For example, the decision-maker would
rank alternative A versus alternative B on the 1-to-9 rating scale and do the same for
each alternative pairing. The AHP and ANP frameworks provide guidelines for
calculating the final priority score for each alternative and deriving the desirable
solution.
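The pairwise-comparison step described above can be sketched as follows; the judgment values are invented, and the priorities are approximated by the common column-normalization method rather than Saaty's exact eigenvector calculation:

```python
# Minimal AHP sketch for one selection dimension with three alternatives.
# a[i][j] = how strongly alternative i is preferred over j on the 1-9 scale;
# the matrix is reciprocal, so a[j][i] = 1 / a[i][j].
a = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

n = len(a)
# Normalize each column, then average across each row to approximate priorities
col_sums = [sum(a[i][j] for i in range(n)) for j in range(n)]
priorities = [sum(a[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

print([round(p, 3) for p in priorities])  # highest priority = preferred alternative
```

The priorities sum to 1, and the alternative with the highest priority on this dimension would feed into the final composite score across all dimensions.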
AHP and ANP are both proven and effective frameworks for software and
technology selection in general, but they have a major shortcoming that makes them
impractical as SaaS decision tools: Both frameworks require the decision-maker to
perceive each criterion as competing against another and to make a judgment call of
one priority over another. However, in SaaS selection, the risk dimensions of security,
business continuity, and integration are usually regarded as equally significant (Heiser,
2010a). Forcing the decision-maker to determine which risk is more important than
another reduces the effectiveness of the SaaS evaluation process.
A final COTS selection methodology that merits review is the software package
implementation methodology (SPIM) introduced by Starinsky (2003). SPIM is a more
recent software selection methodology and presents a holistic approach that
incorporates many of the ideas introduced in earlier COTS selection methodologies,
including PORE and risk-squared. Like the risk-squared software selection method,
SPIM also suggests a formal initial solicitation process to communicate requirements
and attain feedback from potential software vendors. SPIM provides succinct guidelines
for devising a comprehensive RFP requirements document and soliciting responses to
the RFP from potential vendors. To evaluate proposals, SPIM advises using the
standard COTS evaluation process of scoring, rating, and weighting each proposal’s
response to each requirement. SPIM’s distinction lies in its unique system for rating and
weighting responses to requirements, allowing for more objectivity in the evaluation.
Responses are rated depending on whether the vendor provides a feature requirement
as standard (10), optional (5), custom (1), or not at all (0) in its software. The weight for
each feature is predetermined as essential (10), desirable (5), and future (1). Additional
unique elements of SPIM include recommendations for checking vendor references,
scoring vendor demonstrations in a manner similar to the requirements scoring, visiting
vendor sites, and negotiating requirements. The proposed software with the highest
composite score in all requirements becomes the optimal choice.
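SPIM's rating-and-weighting scheme described above can be sketched as follows; the rating and weight values come from the text, but the feature list and vendor responses are hypothetical:

```python
# SPIM scoring sketch: feature provision ratings and requirement weights
# as described by Starinsky (2003); the requirements data are invented.

RATING = {"standard": 10, "optional": 5, "custom": 1, "none": 0}
WEIGHT = {"essential": 10, "desirable": 5, "future": 1}

# Each requirement: (weight class, how each vendor provides the feature)
requirements = {
    "audit_trail":   ("essential", {"Vendor A": "standard", "Vendor B": "optional"}),
    "mobile_client": ("desirable", {"Vendor A": "optional", "Vendor B": "standard"}),
    "api_export":    ("future",    {"Vendor A": "none",     "Vendor B": "optional"}),
}

scores = {}
for feature, (weight_class, responses) in requirements.items():
    for vendor, provision in responses.items():
        scores[vendor] = scores.get(vendor, 0) + RATING[provision] * WEIGHT[weight_class]

print(scores)  # the vendor with the highest composite score is the optimal choice
```

Because weights are predetermined per requirement rather than judged pairwise, this scheme avoids the forced prioritization that makes AHP/ANP awkward for SaaS risk dimensions.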
SPIM and the risk-squared software selection method, both of which involve a
customer-driven, push-selection approach by soliciting interested vendors using an
RFP, stand in sharp contrast to the provider-driven, pull-selection approach natural to
SaaS selection. SaaS selection seldom involves using an RFP to solicit providers.
Instead, the provider’s web site typically provides a wealth of information on the
software, as well as opportunities for self-demonstrating the SaaS provision through a
trial offering. In SaaS selection, the onus is on the deciding organization to extract as
much information from providers as necessary to evaluate their offerings using non-RFP
information-gathering techniques that fast-track the SaaS evaluation and adoption
process (Longwood, 2009). The distinctions of push versus pull information attainment
between COTS selection methods and SaaS selection and the need for an expedited
SaaS selection process are worth highlighting as key to the argument that COTS
selection methods cannot be applied “as is” to SaaS selection. SPIM’s additional
recommendations to test before adoption, check references, make site visits, and
negotiate final terms with the SaaS provider are all prudent measures that can serve as
peripheral undertakings in the SaaS decision process.
At the core of the SaaS decision is the issue of risk in outsourcing critical
software functionality to systems owned by an independent provider and entrusting that
party to ensure the confidentiality, integrity, and accessibility of information generated
by the organization using the provider’s software. This issue of outsourcing risk is not
prevalent in COTS software selection because software function and information
ownership are both retained within the IS infrastructure of the acquiring organization. In
COTS selection, requirements are the focus because external risk becomes negligible
after acquisition. The software operates within the organization’s infrastructure; hence,
risks associated with confidentiality, integrity, and accessibility are not in the scope of
the initial evaluation process. In SaaS selection, external risks are always prevalent and
become integrated with requirements, necessitating a risk-requirements approach for
evaluation. As Figure 2.5 shows, the S-CRA framework can and does incorporate some
elements from COTS selection, such as the concept of systemic collection and analysis
of pertinent information (originally incorporated into COTS selection from normative
decision theories), but S-CRA is framed from a risk perspective to accommodate the external risks inherent in SaaS adoption.
The consensus in decision theory literature is that risk analysis helps reduce
uncertainty in rational decision-making, and as long as uncertainty exists in decision-
making, evaluating risk will be a necessary component of the decision-making process.
Hofstede (1991) defines uncertainty as the perception of threat and anxiety due to
unknown, unstructured, or ambiguous risk factors. In contrast, Gopinath and Nyer (2009) describe certainty as an individual's level of confidence in the outcome of his or her evaluation or actions. Shrivastava (1987) describes risk as a complex concept with
varying perceptions across social science and scientific disciplines. Psychologists
perceive risk as the potential for human exposure to mental or physical injury or
accident. Financial analysts associate uncertainty with the possibility of gain or loss and
use financial models to estimate that uncertainty as a probability. Scientific disciplines,
such as engineering and biology, regard the risk associated with uncertainty as a
physical element requiring quantitative analysis (Aven, 2003). To address uncertainty in
business, risk analysis and assessment are typically adopted and organized under the broader umbrella of risk management.
The definition of risk varies in literature on decision theory, but a common thread
is the expression of risk as a probability that an event will occur (Aven, 2003;
Damodaran, 2008; Holford, 2009). Some acknowledged attributes of risk include the
following: (1) It is a relevant event that can be positive and negative in consequence
(Koller, 2005); (2) it is an expression of the exposure to chance or probability of loss
(Aven, 2003); and (3) it is a quantifiable uncertainty (Damodaran, 2008). In terms of the
relationship between uncertainty and risk, uncertainty introduces elements of risk into
the decision process but does not contribute to the impact or probability of risk. For
SaaS acquisition, the uncertainty surrounding an organization’s adoption of outsourced
software technology and the potential disruptive impact that this action could have
necessitate consideration of risk elements in the software acquisition process. Reducing
uncertainty is only possible by attaining and processing more information about the
decision domain in a risk analysis initiative. Gilboa (2009) concurs in asserting that
rational decision-making requires gathering relevant information to reduce uncertainty.
Nevertheless, as noted by Holford (2009), businesses often resort to functional
decision-making without taking a closer look at the ambiguities of the decision-making
situation that may lead to increased risks and consequences.
and other forms of intrusion that could expose confidential data to outsiders or shut
down access to the SaaS web site. Another risk concern associated with SaaS adoption
is the multi-tenant structure of SaaS providers’ data centers. Multi-tenancy is a cloud-
computing service trait, meaning that all customers share resources from the same
pool. In a SaaS environment, the web site being accessed, the server used to host the
software, and the underlying data storage are shared by multiple customers, thus
creating the risk of an unauthorized co-tenant or unknown party gaining access to
another SaaS customer’s private data.
What are the specific SaaS adoption risks, and how can these be addressed in a
risk assessment framework? Risk assessment is necessary in the SaaS acquisition
process primarily to address the uncertainties associated with relying on another party’s
platform, IT systems, and internal policies and procedures for specific computing
functions. Although research shows that organizations conduct risk assessment for
COTS software acquisition, the methods are mainly used to gauge financial outcomes,
such as cost reductions or revenue generation, or are integrated in an audit process of
accounting and financial systems (Liang-Chuan & Chorng-Shyong, 2008; Rechtman,
2009). Partial answers to the questions of identifying and addressing prevailing SaaS
risks can be found in some existing proposals for assessing the risk of cloud computing
in general.
FedRAMP (2010), the framework of the federal Chief Information Officers (CIO)
Council for security assessment and authorization for cloud computing, is aimed at
standardizing evaluation of cloud services before adoption by government agencies.
FedRAMP is premised on the assumption that cloud-computing acquisition deserves
distinction as a risk-based decision in comparison to conventional IT acquisition, which
at assessing vendor compliance in specific cloud risk factors and subfactors. Each question requires a simple yes or no response from the provider and gauges whether specific security risk factors are mitigated by the provider or whether the provider can attest to measures taken to address specific risks by supplying credible documentation to the customer conducting the assessment.
Figure 2.6 shows a comparison between Heiser’s (2010a) cloud risk factors and
the risk factors proposed in the ENISA (2009), FedRAMP (2010), and CSA (2010) risk
assessment frameworks. FedRAMP (2010) has the most extensive list of risk factors of
the three comparison frameworks and includes unique control factors to address such
risks as IS documentation and mobile device support that are not covered in-depth in
the other cloud risk assessment frameworks. Figure 2.7 shows a sampling of actual and
potential questions derived from Heiser’s (2010a) cloud risk factors and from the three
assessment frameworks that cloud customers could use to evaluate provider risk.
Figure 2.6. Comparison of cloud risk factors, categories, and controls in Heiser’s cloud
risk factors and the ENISA, CSA, and FedRAMP cloud risk assessment frameworks.
Sample Cloud Risk Assessment Questions from Heiser’s Cloud Risk Factors
• Does the provider have a software notification policy in place for alerting or notifying customers
about software upgrades and potential downtime implications? (Extensibility Risk)
• Does the provider have security controls in place to monitor and log access to customer data?
(Accessibility Risk)
Sample Cloud Risk Assessment Questions from ENISA’s Framework
• Does the provider store your data in a known jurisdiction? (Legal Risk)
• Is your data isolated from other customers’ data? (Legal Risk)
• Does the provider have measures in place to prevent a malicious attack? (Technical Risk)
Sample Cloud Risk Assessment Questions from CSA’s Framework
• Do you provide tenants with documentation describing your Information Security Management Program (ISMP)? (Information Security Risk)
• Are any of your data centers located in places that have a high probability/occurrence of high-
impact environmental risks (floods, tornadoes, earthquakes, hurricanes, etc.)?
(Resiliency/Business Continuity Risk)
Sample Cloud Risk Assessment Questions from FedRAMP’s Framework
• Does the provider have a plan in place for dealing with denial-of-service attacks? (System and
Communication Risk)
• Does the provider have controls in place to restrict physical access to the facilities where
information systems and client data reside? (Physical and Environmental Protection Risk)
Figure 2.7. Sample risk assessment questions based on Heiser’s cloud risk factors and
the ENISA, CSA, and FedRAMP cloud risk assessment frameworks.
The security risk categories and subcategories in ENISA’s (2009) cloud risk
assessment framework are broad and inclusive but not specific to SaaS. The framework
requires SaaS adopters to determine the relevance of each vulnerability and rephrase
these identified vulnerabilities into self-derived questions specific to their needs. This
approach imposes a burden on SaaS decision-makers to develop a tool of their own.
The challenge in leveraging ENISA’s risk categories to serve as input to a self-derived
SaaS risk assessment questionnaire is in determining which of the multitude of risk
subcategories and vulnerabilities identified in the framework are practical for evaluating
SaaS adoption.
CSA’s (2010) CAIQ is concise and useful to the adopter in overall cloud risk
assessment, but not all the questions are relevant to SaaS adoption. CAIQ goes further in providing questions pertinent to cloud computing, but as with ENISA's (2009)
framework, the SaaS decision-maker must use judgment in deciding which questions
merit inclusion in a SaaS risk assessment questionnaire to evaluate providers.
Despite the shortcomings in Heiser's (2010b) risk factors and ENISA's (2009),
FedRAMP’s (2010), and CSA’s (2010) frameworks, these approaches offer a good
starting point for understanding the collective risk concerns prevalent among private-
and public-sector cloud adopters. The factors and frameworks have a common element
of emphasizing the need to establish trust between the adopter and the cloud service
provider. The factors and frameworks also underscore the fact that differences in
perception remain regarding what risk factors are relevant to cloud service adoption. For
the SaaS adopter, the question of what factors, categories, or controls to leverage in
developing a list of SaaS risk assessment requirements or creating a questionnaire
specifically geared toward evaluating SaaS providers remains unanswered.
Nevertheless, these risk factors and frameworks can serve as a viable starting point to
developing a SaaS-specific risk assessment framework. To further rationalize the SaaS
adoption process, streamline SaaS provider evaluation, and reduce business risks, a
determination must be made as to which risk factors proposed in these various
frameworks are unique to SaaS adoption. The conceptual model of this research is
intended to do just that. The aim is to determine which specific risk factors are relevant
to software success and can facilitate an expedited evaluation and decision-making
process. An empirical analysis of prior SaaS adoption successes and the role of
suggested risk factors is necessary to achieve this objective.
Ncube, 1998; Ellis & Banerjea, 2007). A second is the degree to which the software
contributes to organizational productivity and profitability (D’Amico, 2005), and a third is
the users’ overall satisfaction with the system. Several studies on software acquisition
also show a strong linkage between successful software adoption and the level of
satisfaction experienced by the adopter (DeLone & McLean, 2003; McKinney, Yoon, &
Zahedi, 2002; Olson & Ives, 1981; Staples, Wong, & Seddon, 2002; Torkzadeh & Doll,
1999). Oliver and Swan (1989) define satisfaction as an emotional and evaluative judgment regarding the ability of a product or service to meet perceived expectations
and desires. Satisfaction represents an emotive measure of the level of usability and
comfort experienced by the adopter.
DeLone and McLean’s (2003) IS success model lists the user satisfaction
dimension as one determinant of success in IT system adoption. The IS success model
proposes that system, information, and service quality can negatively or positively affect
user satisfaction. Although this model is derived from a qualitative study of a sample of
organizations adopting IS, it sheds light on a seemingly subjective yet essential element
of IT system adoption: user satisfaction. McKinney, Yoon, and Zahedi (2002) conclude
that user satisfaction is an appropriate measure of adoption success because it is a
consequence of various positive and negative experiences the adopter has had with an
IT system. Sang-Yong, Hee-Woong, and Gupta’s (2009) research on OSS success
empirically validates DeLone and McLean’s (2003) IS success model; this research,
based on statistical analysis of data collected in a survey, determined that software
quality and available support service contribute to more than 50% of the variance in
user satisfaction. These findings on software success validate the construct for this
research, that is, that user satisfaction is a major and measurable contributing factor to
software and overall IT system adoption success.
The first step in developing this risk-based SaaS evaluation model was to
determine the pertinent SaaS risk areas. As an underlying construct in its conceptual
model, this research adopted propositions made by Aven (2003) and Koller (2005):
Although risk analysis is indicative of a rational decision process, filtering the pool of
potential risks down to ones relevant to the decision domain is necessary to streamline
and expedite the decision process. Without this filtering exercise, risk analysis becomes
an overwhelming and inefficient effort that considers all risks without regard to their
significance to outcomes. The literature also shows that considering risk consequences
in software acquisition can influence the success or failure of the acquisition (Hollander,
2000; Starinsky, 2003; Liang-Chuan & Chorng-Shyong, 2008).
The final three top-level risk dimensions and associated sub-dimensions used in
this research represent a synthesis of cloud service risk factor suggestions from a
variety of sources, including cloud security risk factors noted by Heiser (2010a);
business continuity risk factors suggested in CSA’s (2010) cloud risk assessment
framework; the technical and political cloud risk categories noted in ENISA's (2009) framework; access, maintenance, contingency planning, and system and communication
protection controls and requirements described in the FedRAMP (2010) framework; and
other cloud service risks extracted from the literature. Figure 2.8 shows the three SaaS
risk dimensions of security, business continuity, and integration that form the basis for
SaaS risk relevance determination in this research. These risk dimensions represent the
common and primary risk concerns prevalent in the frameworks discussed and in the
literature on cloud computing. As Figure 2.8 also shows, each SaaS risk dimension
includes distinct sub-dimensions that individually contribute to the prevalence of
vulnerabilities associated with the higher-level risk dimension. Appendix VI provides a
more detailed description of each risk sub-dimension item shown in Figure 2.8.
To gauge the relevance of each risk dimension to successful SaaS adoption, the
conceptual model used in this research was premised on finding an association
between the decision-maker's level of certainty of these risks during the selection
process and the subsequent level of satisfaction after the SaaS solution was adopted
and in use for a reasonable period of time. To validate the conceptual model and the
proposed S-CRA framework introduced in this research, three distinct hypotheses were
proposed, establishing the relevance of each risk dimension to SaaS adoption
satisfaction. Questions synthesized from the literature review and existing frameworks
were used to determine the relevancy of each risk dimension to the satisfaction
construct. If relevancy was established between the key SaaS adoption constructs
based on the quantitative approach used in this research and ensuing data analysis,
then the questions became part of the S-CRA questionnaire-based framework. It must
be noted that the questions associated with each risk sub-dimension were not meant to
elicit yes or no responses. Rather, each question was intended to encourage the SaaS
risk evaluator to reflect on the amount of due diligence exercised in protecting the
adopting organization from potential pitfalls associated with SaaS and cloud services in
general. The following paragraphs describe each risk dimension and the associated risk
relevance hypothesis.
This research defines the security risk dimension as risks related to the SaaS
provider’s ability to ensure data and system confidentiality, integrity, and compliance
and prevent access by unauthorized individuals to sensitive information. This risk
dimension also evaluates the SaaS vendor’s provisions to encrypt communications,
validate credentials, and provide independent accreditations of its operations.
Government agencies may have stricter security requirements than private
organizations because the information generated from some agencies may pertain to
issues of national security. Organizations that are considering SaaS adoption and
engage in a rational decision process entailing gathering information about each
potential provider’s ability to address the security dimension and associated sub-
dimensions will likely experience a higher overall level of satisfaction with the SaaS
adoption. This increased satisfaction and, hence, adoption success are presumed to be
derived from security risk awareness and mitigation measures enacted by the
organization. To establish the relevance of the security risk dimension to SaaS adoption
success, this research proposed the following null and alternate hypotheses:
• Null Security Risk Hypothesis (Hos1): SaaS adoption success (AS) does not
depend on the decision-maker’s level of certainty (C) of the SaaS security (S)
risk dimension as defined by the S-CRA framework.
Hos1: AS ≠ (S)c
Has1: AS = (S)c
functions dependent on the SaaS platform to continue. Response time, recovery time,
and competency of support personnel are a few of the critical factors in evaluating SaaS
support. Other significant continuity factors include scalability, availability, and ability to
customize the SaaS application. The cost and convenience of SaaS often overshadow
the very real continuity risks faced by organizations that adopt this outsourced form of
computing. An organization adopting SaaS for multiple functions is exposed to the risk
of financial loss and operational disruption if any single-provider web-based SaaS
application is inaccessible or if Internet connectivity is down. Adopting organizations can
take steps to address business continuity risks to facilitate successful SaaS adoption,
but the critical first step is to gain awareness of each evaluated provider’s risk footprint
before adoption. In that regard, this research proposed the following null and alternate
hypotheses:
Hobc1: AS ≠ (BC)c
Habc1: AS = (BC)c
This risk area concerns the level of integration required for assimilating the
application into the organization’s IS. It also covers such operational issues as ease of
use, compatibility, functionality, and reporting. In most situations, using SaaS requires
exchanging internally residing data with the external SaaS application and vice versa. In
a few adoption scenarios, the adopting organization could require a wholesale migration
of data stored and processed internally to the SaaS platform. In either case,
organizations must determine whether the SaaS capabilities allow for single-instance data exchange between the provider and the organization. The risks associated with integrating an
adopted SaaS application can affect the organization's data quality and functioning and increase costs. Appropriate consideration and negotiation with the provider, based on awareness of these risks, can help mitigate them. To establish the relevance of the integration risk dimension to SaaS adoption success, this research proposed the following null and alternate hypotheses:
• Null Integration Risk Hypothesis (Hoi1): SaaS adoption success (AS) does not
depend on the decision-maker’s level of certainty (C) of the SaaS integration (I)
risk dimension as defined by the S-CRA framework.
Hoi1: AS ≠ (I)c
Hai1: AS = (I)c
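As an illustrative sketch, and not the statistical procedure actually used in this research, an association between Likert-coded risk certainty and adoption satisfaction of the kind these hypotheses posit could be checked with a simple correlation; the response data below are invented:

```python
# Illustrative only: test whether adoption satisfaction (AS) co-varies with
# risk certainty (C) using Pearson correlation on Likert-coded responses.

certainty    = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]  # 1 = very uncertain ... 5 = very certain
satisfaction = [2, 1, 3, 3, 4, 3, 5, 4, 5, 5]  # 1 = very dissatisfied ... 5 = very satisfied

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(certainty, satisfaction)
# A clearly positive r (with an accompanying significance test) is evidence
# against the null hypothesis that satisfaction does not depend on certainty.
print(round(r, 2))
```

In practice the test would also require a significance level and an adequate sample, as reported in the hypothesis test analyses of Chapter 4.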
Chapter 3
Target Population
Research Method
Research Instrument
Given that no instrument currently exists to gauge the relevancy of certain risks
to SaaS adoption success, a custom questionnaire was developed to complement this
research. The questionnaire is a synthesis of questions directly suggested or implied in
the three frameworks discussed and other risk concerns explored in the literature
review. The unavailability of a proven SaaS risk questionnaire is not a surprise given the
infancy of this emerging technology and the lack of an industry- or private-sector-driven
initiative. Such an initiative would help achieve consensus on SaaS risk concerns and
contribute to the development of a standardized questionnaire for evaluating these
concerns. Further, as of the time of this research, credible academic research on cloud
and SaaS computing risk was sparse. No single research report or framework provided
questions specific to SaaS assessment. However, many studies presented opinions
about best practices and provided advice to potential SaaS and cloud adopters about
engaging in a formal investigative inquiry to affirm the capabilities of the SaaS provider
and help reduce business exposure to risk.
relevant to the data location sub-dimension of the security risk dimension, it was
included as a candidate question for that risk dimension and sub-dimension pairing. The
draft SaaS risk assessment questionnaire included 53 questions grouped into four
categories. General demographic questions were used to derive a demographic profile
of participants and included such topics as the type of organization the respondent was
affiliated with, his or her job function, whether the organization subscribed to SaaS,
whether the respondent was involved in the SaaS selection process, how long the
organization had used SaaS, and the respondent’s overall level of satisfaction with a
particular SaaS application used by his or her organization. Security risk questions
included those relevant to the security risk sub-dimensions and were designed to gauge
the respondent’s level of awareness of various SaaS security risks pertaining to the
provider during the selection process. Questions relating to business continuity and
integration issues associated with SaaS adoption made up the final two categories of
questions included in the SaaS risk assessment questionnaire. With the exception of
the final open-ended question, which asked for descriptive feedback on the
respondent's experience in using the SaaS application, all questions allowed only
closed, multiple-choice responses. The original 53 questions were narrowed
down to 44 multiple-choice questions and 1 open-ended question after a pilot study was
conducted. Thirty-eight of the remaining questions were related to satisfaction or risk
certainty. The final survey questionnaire used for this research is included in Appendix
II.
Questionnaire Scales
A Likert scale was used to gauge the constructs of satisfaction and risk certainty
as described in the conceptual model developed for this research. Several research
studies in marketing and other social science disciplines have proven that satisfaction is
a measurable construct using a Likert-like scale. In his quantitative survey research on
an empirical linkage among satisfaction, expectation, and consumer pre-purchase and
post-purchase experiences, Oliver (1980) devised a 6-point Likert scale to gauge
consumer satisfaction level. Similarly, Spreng and Mackoy (1996) measured satisfaction using a 7-point Likert scale.
For the purpose of this research, a 5-point Likert scale was used to obtain a
respondent’s measure of satisfaction with a specific SaaS adoption experience. Figure
3.1 shows the satisfaction Likert scale used in this research. A “very satisfied” rating
indicates that the SaaS application meets all expectations and was fully adopted with
few or no surprises. A “somewhat satisfied” rating indicates that a few problems arose
during adoption, but these imposed minimal risk to business processes. At the
dissatisfied end of the scale, a “very unsatisfied” rating implies that the adoption was a
complete failure, placing the business process and, possibly, the entire organization in
peril. A “somewhat unsatisfied” rating reveals that the SaaS solution adopted did not
meet most expectations but did allow the business process to function at a minimal
level. Whether the organization continues to use or has disavowed the application is
relevant only in the sense that the level of satisfaction supported the decision to keep or
cancel the subscription arrangement with the SaaS provider.
Figure 3.1. Likert-like SaaS adoption satisfaction rating scale and explanation.
A Likert scale was also developed to measure the uncertainty construct adopted
from normative decision-making theory for this research. The literature review on
decision-making reveals that uncertainty comprises the unknown factors that contribute
directly to risks in decision outcomes (Damodaran, 2008; Gopinath & Nyer, 2009; Koller,
2005). Being cognizant or incognizant of these unknown factors during the SaaS
decision process is the underlying independent construct that forms part of the
argument for the relevancy of some SaaS risk dimensions to successful adoption. To
measure this level of cognizance by the SaaS adopter for each risk sub-dimension’s
question, a 5-point certainty Likert scale was used in the questionnaire. Figure 3.2
shows the certainty scale and varying response options.
Respondents were asked to rate their certainty regarding the information they
had related to each risk question. A “very certain” response indicates that during the
SaaS adoption decision-making process, the respondent had sufficient information
about the risk question and was certain about its implications. In contrast, a “very
uncertain” response implies that the respondent did not have adequate information
about the risk question and its implications. For example, a response of “very certain” to
the business continuity risk dimension and pricing/fees sub-dimension question “How
certain are you that the vendor imposes a penalty for early termination of your
subscription?” would indicate that the respondent was well aware that a penalty existed
or did not exist for early termination of the service during the provider evaluation
process. Although other social science research adopted a broader certainty Likert
scale that included such granular certainty responses as “totally certain” and “extremely
certain” or “not at all confident/extremely confident,” the abbreviated certainty scale
used in this research was necessary to achieve response brevity and quality of analysis
(Campbell, 1990; Walter, Mitchell, & Southwell, 1995; Wilson & Rapee, 2006). The
brevity logic is derived from the rational decision process, which recommends
conciseness in gathering and analyzing decision-relevant information. Furthermore, a
more granular certainty response option is unnecessary given that the respondents, to
some degree, either had information relating to the risk question or had no information
pertaining to a particular risk question.
Figure 3.2. Likert-like SaaS risk certainty rating scale and response options.
To establish the validity and reliability of the questionnaire instrument used in this
research, standard tests for content validity, reliability, and construct validity were
conducted.
Content validity.
Pilot test participants, including the cloud technology experts, noted the length and relevancy of several questions.
Based on the feedback provided from the pilot test, the questionnaire was narrowed
from 53 questions to 44. One security dimension question and four business continuity
and integration risk dimension questions were dropped from the final questionnaire after
the pilot test. Several questions were also reworded for clarity, and an average
timeframe of 10 minutes was identified as necessary to complete the survey. During the
process of screening pilot test participants, the eight SaaS adopters and the cloud
experts were asked if they were clients of either of the two cloud providers that
distributed the final survey. All pilot test participants were excluded from the final survey
distribution and, hence, the main data collected and used to validate the research
construct and to test reliability and the stated hypotheses.
Reliability.
To further establish the quality of the data collection instrument used in this
research, a reliability analysis was necessary to determine the reliability of the
instrument in producing consistent results. Reliability analysis is an estimate of internal
consistency and is derived from the proportion of systematic changes in an instrument’s
scale. These systematic changes are found in the correlation between the results
provided during different administrations of the instrument. If correlation is high between
results in different administrations, then the instrument is deemed consistent and
reliable (Trochim, 2006). Reliability analysis also answers questions about the credibility
of the research instrument, including whether the instrument will derive similar results if
repeatedly administered, whether other researchers will make the same conclusions
when using the same instrument, and whether transparency exists in how the results
were interpreted (Saunders, Lewis, & Thornhill, 2007).
Although reliability can only be estimated, not calculated, the two standard
estimates of reliability and, hence, internal consistency are Cronbach alpha and Fornell
and Larcker’s composite reliability (Fornell & Larcker, 1981; Trochim, 2006). Table 3.1
shows the Cronbach alpha coefficient for each of the risk sub-dimension constructs.
Each alpha measure is substantially above the 0.7 benchmark, attesting to the internal consistency of the instrument.
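As a concrete illustration of the reliability estimate described above, Cronbach alpha can be computed directly from an item-response matrix. The sketch below is illustrative only; the response data are hypothetical, not the survey's.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach alpha for an (n_respondents, k_items) matrix of scale scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses from four respondents to three items
responses = np.array([[4, 4, 3],
                      [3, 3, 3],
                      [2, 2, 1],
                      [1, 1, 2]], dtype=float)
alpha = cronbach_alpha(responses)  # values above 0.7 suggest internal consistency
```

Under this formula, perfectly correlated items yield an alpha of exactly 1.0, and weakly related items drive alpha toward 0.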
Table 3.1
Reliability Measures and Constructs
Risk Dimension/Sub-dimension Constructs    Cronbach Alpha    Composite Factor Reliability    Average Variance Extracted (AVE)
SEC-Access 0.90 0.915 0.862
SEC-Integrity/Confidentiality 0.90 0.794 0.726
SEC-Transmission 0.90 0.803 0.735
SEC-Data Location 0.90 0.808 0.740
SEC-Ownership 0.90 0.858 0.792
SEC-Compliance 0.90 0.853 0.786
BC - Testing 0.90 0.837 0.769
BC - Recovery 0.90 0.905 0.849
BC - Availability 0.90 0.912 0.847
BC - Scalability 0.90 0.828 0.760
BC – Upgrade Notification 0.90 0.872 0.809
BC - Support 0.90 0.861 0.795
BC - Pricing 0.90 0.890 0.830
BC - Termination 0.90 0.808 0.740
BC - Customization 0.90 0.972 0.948
BC - Training 0.90 0.916 0.863
BC - Documentation 0.90 0.845 0.778
BC - Provider Management 0.90 0.775 0.709
I - Usability 0.90 0.854 0.787
I - Compatibility 0.90 0.951 0.913
I - Functionality 0.90 0.912 0.857
Key:
SEC = Security risk dimension
BC = Business continuity risk dimension
I = Integration risk dimension
Construct validity.
Most measurement items exceeded the standard 0.5 factor-loading and AVE thresholds. However, several items were
dropped from the initial analysis because their factor loadings were below the
acceptable threshold. Of the security risk construct items, one of the access sub-
dimension certainty measurement items and one of the segregation sub-dimension
certainty measurement items were dropped because of their respective low loading
factors in measuring security access and data segregation risks. A data recovery and
pricing risk sub-dimension certainty measurement item was removed from the business
continuity construct because of its low factor-loading value. Similarly, the reporting sub-
dimension of the integration risk construct was also deemed an inadequate measure
based on its low loading factor. A full list of component items removed based on
convergent validity analysis is included in Appendix IV. Nevertheless, Table 3.2 shows
that access risk is a valid high-value measure of security risk. Table 3.3 shows that
customization risk is a key construct item of the business continuity risk dimension
construct, and Table 3.4 highlights compatibility risk as a reliable measure of the
integration risk construct. The tables also show that the cumulative percent of variance
explicable by factors with eigenvalues greater than 1.0 is greater than 50% for these
constructs.
Table 3.2
Factor Analysis for SaaS Security Risk (n = 114)
Construct    Certainty Measurement Item (Risk Sub-dimension)    Factor Loading    Eigenvalue > 1.0    % of Explained Variance
Security Risk Dimension (SEC)    Background Checks Certainty (Access)    0.799    3.77    34.25
Restricted Access Certainty (Access) 0.764 1.43 13.02
Breach Certainty (Access) 0.670 1.23 11.14
Intrusion Controls Certainty (Integrity/Confidentiality) 0.527
Web Browser Encryption Certainty (Transmission) 0.540
Data in Same Country Certainty (Location) 0.548
Data Ownership Certainty (Ownership) 0.627
SOX/HIPAA Compliance Certainty (Compliance) 0.647
Vendor SAS70 Report Certainty (Compliance) 0.590
Cumulative % of Variance Explained 58.41
Table 3.3
Factor Analysis for SaaS Business Continuity Risk (n = 114)
Construct    Measurement Item (Risk Sub-dimension Construct)    Factor Loading    Eigenvalue > 1.0    % of Explained Variance
Business Continuity Risk Dimension (BC)    Test Before Adoption Certainty (Testing)    0.592    6.38    31.88
Recovery of Lost Archive Data Certainty (Recovery) 0.721 1.56 7.79
Uptime/Downtime Performance Certainty (Availability) 0.735 1.35 6.75
Scalability Certainty (Scalability) 0.577 1.10 5.48
Upgrade Notification Certainty (Upgrade) 0.654 1.00 5.02
Phone Support Certainty (Support) 0.573
Email Support Certainty (Support) 0.722
Web Ticket Support Certainty (Support) 0.608
Subscription Fees Certainty (Pricing) 0.712
Payment Terms Certainty (Pricing) 0.667
Early Termination Penalty Certainty (Termination) 0.642
Data Return on Cancellation Certainty (Termination) 0.548
Customization Allowed Certainty (Customization) 0.927
Customization Fee Certainty (Customization) 0.869
Software Training Certainty (Training) 0.711
Software Training Fee Certainty (Training) 0.778
Print/Electronic Documentation Certainty (Documentation) 0.605
Primary Contact Certainty (Provider Management) 0.502
Cumulative % of Variance Explained 56.91
Table 3.4
Factor Analysis for SaaS Integration Risk (n = 114)
Construct    Measurement Item (Risk Sub-dimension)    Factor Loading    Eigenvalue > 1.0    % of Explained Variance
Integration Risk (I)    Functional Requirements Outlined Before Selection Certainty (Functionality)    0.782    3.16    52.61
All Functional Requirements Met Certainty (Functionality) 0.689 1.03 17.02
To achieve adequate discriminant validity, the AVE for each construct must be
greater than the variance between each construct and other constructs in the theoretical
framework (Chin, 1998). This can be validated by checking whether the square root of
the construct’s AVE is higher than its correlation with other latent variable constructs.
Tables 3.5 through 3.7 show the correlation matrix for the security, business continuity,
and integration construct items with each AVE’s square root exceeding the off-diagonal
items. These calculations demonstrate adequate discriminant validity.
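The discriminant validity check described above can be expressed directly in code. The sketch below applies the Fornell-Larcker comparison to the integration construct's AVE values from Table 3.1 and latent-variable correlations from Table 3.7; the helper function itself is illustrative and not part of the study.

```python
import numpy as np

# AVE values (Table 3.1) and latent-variable correlations (Table 3.7) for the
# integration risk construct, ordered: Functionality, Compatibility, Usability
ave = np.array([0.857, 0.913, 0.787])
corr = np.array([[1.000, 0.596, 0.556],
                 [0.596, 1.000, 0.414],
                 [0.556, 0.414, 1.000]])

def fornell_larcker(ave: np.ndarray, corr: np.ndarray) -> bool:
    """Discriminant validity holds when each construct's sqrt(AVE) exceeds its
    correlation with every other construct (Fornell & Larcker, 1981)."""
    root_ave = np.sqrt(ave)
    off_diagonal = corr - np.diag(np.diag(corr))   # zero out the diagonal
    return bool(all(root_ave[i] > off_diagonal[i].max() for i in range(len(ave))))
```

Here sqrt(0.857) ≈ 0.93, sqrt(0.913) ≈ 0.96, and sqrt(0.787) ≈ 0.89, matching the bolded diagonal of Table 3.7 and exceeding every off-diagonal correlation.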
Table 3.5
Correlation Matrix of Latent Variables (Security Risk Construct)
    SEC-Access    SEC-Integrity/Confidentiality    SEC-Transmission    SEC-Data Location    SEC-Ownership    SEC-Compliance
SEC-Access 0.89*
SEC-Integrity/Confidentiality 0.382 0.85
SEC-Transmission 0.105 0.321 0.86
SEC-Data Location 0.157 0.276 0.232 0.86
SEC-Ownership 0.071 0.212 0.306 0.486 0.89
SEC-Compliance 0.444 0.469 0.161 0.514 0.522 0.89
*Bolded diagonal values are the square roots of the AVE. Off-diagonal values in each column are lower than that column's square root of AVE, indicating discriminant validity.
Table 3.6
Correlation Matrix of Latent Variables (Business Continuity Risk Construct)
    BC - Testing    BC - Recovery    BC - Availability    BC - Scalability    BC - Upgrade    BC - Support    BC - Pricing    BC - Termination    BC - Customization    BC - Training    BC - Documentation    BC - Provider Management
BC - Testing 0.88
BC - Recovery 0.178 0.92
BC - Availability 0.290 0.581 0.92
BC - Scalability 0.364 0.328 0.478 0.87
BC - Upgrade 0.454 0.257 0.429 0.543 0.90
BC - Support 0.859 0.427 0.523 0.857 0.662 0.89
Table 3.7
Correlation Matrix of Latent Variables (Integration Risk Construct)
    I - Functionality    I - Compatibility    I - Usability
I - Functionality 0.93
I - Compatibility 0.596 0.96
I - Usability 0.556 0.414 0.89
Chapter 4
Introduction
The data resulting from the survey questionnaire were statistically analyzed in
three distinct steps. Descriptive analysis was used to derive a quantitative profile of the
respondents based on the responses provided using the nominal and ordinal
measurement scales. Additional factor analysis was employed to further reduce the
various construct items, the security, business continuity, and integration risk
dimensions, to use in testing the three hypotheses. The final analysis involved testing
each of the hypotheses proposed by leveraging inferential statistical tests of two-tailed
significance and Spearman’s rho correlation coefficient.
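The Spearman's rho statistic used throughout the hypothesis tests ranks each variable and then correlates the ranks. A minimal sketch follows, assuming no tied ranks; production analyses should average tied ranks, as statistical packages do.

```python
import numpy as np

def spearman_rho(x, y) -> float:
    """Spearman's rank correlation: Pearson correlation of the rank vectors.
    Simplified sketch that does not average tied ranks."""
    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)   # rank 1 = smallest value
        return r
    rx, ry = ranks(np.asarray(x, dtype=float)), ranks(np.asarray(y, dtype=float))
    return float(np.corrcoef(rx, ry)[0, 1])

# A monotone but nonlinear relationship still yields rho = 1.0,
# which is why rank correlation suits ordinal Likert-coded data
rho = spearman_rho([1, 2, 3, 4, 5], [1, 4, 9, 16, 25])
```

Because rho depends only on rank order, it is appropriate for the ordinal satisfaction and certainty scales described below, where interval distances between response options are not assumed equal.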
The data were collected primarily using nominal and ordinal scales. The yes/no
responses were pre-coded with 1 and 2 values. Other nominal scales were used to
extract additional demographic data from respondents. For example, job function was
determined using a nominal scale with seven response options ranging from IT executive
to consultant. The ordinal satisfied/dissatisfied scale was pre-coded from 0 through 4,
with 1 representing very satisfied, 4 representing not very satisfied, and 0 representing
not applicable. The ordinal certain/uncertain scale was pre-coded from 0 through 4 in a
manner similar to the satisfied/dissatisfied scale. This pre-coding scheme allowed for
descriptive analysis to depict the responding population and inferential statistical
analysis to support or refute the hypotheses.
Descriptive Analysis
The survey’s first six questions were designed to attain a demographic and
satisfaction profile of respondents using univariate analysis of the nominal measures. Of
the 114 responses to the question of organization type, a majority of 66.7% indicated
affiliation with a private corporation. Government entities accounted for 17.6% of
respondents’ organization type, while nonprofits accounted for 14%. Only two
respondents indicated academic institution as their organization type. In classifying their
job functions from the response options provided, the respondents showed more
dispersion. Nevertheless, the predominant selection was “other management” as job
function. Aggregating the job function responses into higher-level function categories of
top IT management and staff-level IT management showed that 17.8% of the
respondents were from top-level IT management, whereas 36.7% of respondents
functioned in other IT roles, such as network and system management. Non-IT
corporate management functions accounted for 5.4% of respondents.
In responding to the question of length of SaaS adoption, all respondents
indicated using SaaS for 6 years or less. A total of 48.7% had used SaaS for less than a
year, followed by 42.5% indicating a 1- to 3-year adoption timeframe. Continued use of
SaaS in their respective organizations was confirmed by 77.2% of the 114 respondents,
while 22.8% of respondents had relinquished use of the SaaS solution they were asked
to recall for the survey. Of the responses to the dependent variable construct question
of overall SaaS adoption satisfaction, 77.9% of respondents indicated a satisfaction
level of “somewhat satisfied” or “very satisfied” with their adoption experiences.
Conversely, 20.8% expressed some form of dissatisfaction with their adopted SaaS
solutions.
A descriptive analysis would not be complete without further inquiry into the
demographic profile of respondents as it relates to their level of satisfaction. A cross-
tabulation of the categorical variables of level of satisfaction to organization type shows
nonprofit organizations as representing the largest number of those who were “very
satisfied” with their SaaS adoption experiences, at 37.5%. On the opposite end of the
satisfaction scale, government agency respondents were the most dissatisfied group,
with 36.8% stating that they were not very satisfied in their SaaS adoption experiences.
The relatively high SaaS satisfaction indication among nonprofits may not be a chance
result; it may be due to such factors as low-cost incentives in SaaS offerings and
comparably lower security requirements from organizations in this category. Further
research is necessary to substantiate this assumption based on analysis of the data
collected. The typical stringent security standards for IT systems required by
government entities may also explain the high indications of dissatisfaction from
respondents representing this group.
Table 4.1
Cross-Tabulation of Level of Satisfaction and Organization Type
Organization Type\Satisfaction Scale    Very Satisfied    Somewhat Satisfied    Not Very Satisfied    Not at All Satisfied    Count
Private Corporation 35.5% 46.1% 13.2% 5.2% 77
Government (Federal, State, or Local) 26.3% 31.6% 36.8% 5.3% 19
Nonprofit 37.5% 43.8% 18.7% 0.0% 16
Academic Institution 50.0% 50.0% 0.0% 0.0% 2
Level of satisfaction was highest among consultant respondents of the seven job
function response options, although this group accounted for only 3 of the 114
respondents. Coming in second in satisfaction level and accounting for 35.9% of
respondents were those in the “other management”—non-IT and not corporate
management—category. With 36.4% indicating very satisfied, respondents in the IT
director/manager job function were seemingly conciliatory in their sentiments toward
their respective experiences with SaaS. The data in Table 4.2 show IT executives
expressing the highest level of dissatisfaction in their experience with SaaS. Beyond the
risk factors, the only inference that can be drawn from the data showing some pattern of
dissatisfaction among top IT executives is that this group may have overlooked the risks
in SaaS that could emerge in a post-adoption scenario. Other explanatory factors could
Table 4.2
Cross-Tabulation of Level of Satisfaction and Job Function
Job Function\Satisfaction Scale    Very Satisfied    Somewhat Satisfied    Not Very Satisfied    Not at All Satisfied    Count
IT Executive (CIO, CTO, CSO, VP) 11.1% 44.4% 33.3% 11.2% 11
IT Manager (Director/Manager) 36.4% 27.3% 27.3% 9.0% 12
Network/System Management 33.3% 47.6% 9.5% 9.6% 21
Corporate Management (CEO, COO, PRES, VP, GM)    33.3%    66.7%    0.0%    0.0%    6
Other Management 39.0% 43.9% 17.1% 0.0% 41
IT Staff 30.0% 45.0% 20.0% 5.0% 20
Consultant 66.7% 33.3% 0.0% 0.0% 3
include uncertainties about the risk constructs outlined in this research, which later manifest themselves as
security, business continuity, and integration issues. If these issues are not properly
addressed initially, dissatisfaction is amplified in the 2nd-year stage. Overall
dissatisfaction is shown to decrease thereafter, when the SaaS provider presumably
improves service quality and support and addresses security concerns in an effort to
retain customers.
Table 4.3
Cross-Tabulation of Level of Satisfaction and Length of Adoption
Length of Adoption\Satisfaction Scale    Very Satisfied    Somewhat Satisfied    Not Very Satisfied    Not at All Satisfied    Count
Less than 1 Year 38.2% 43.6% 12.7% 5.5% 55
1 to 3 Years 29.2% 41.7% 25.0% 4.2% 48
4 to 6 Years 40.0% 50.0% 10.0% 0.0% 11
More than 7 Years 0.0% 0.0% 0.0% 0.0% 0
The results seen in Table 4.4, comparing continued use of SaaS and level of
satisfaction, were as expected. More than 90% of respondents still using SaaS were
satisfied overall with their adoption experiences. In contrast, 72% of respondents no
longer using SaaS indicated dissatisfaction with the cloud-based SaaS solutions they
had adopted. Whether their satisfaction is the primary contributing factor to their
continued use of or disassociation with the SaaS solution is up for speculation, but the
data reveal that satisfaction is an influential component in the decision to retain a SaaS
solution or terminate the service.
Table 4.4
Cross-Tabulation of Level of Satisfaction and Continued Usage
Continued Usage\Satisfaction Scale    Very Satisfied    Somewhat Satisfied    Not Very Satisfied    Not at All Satisfied    Count
Yes (Still Using) 42.0% 50.0% 6.8% 1.1% 88
No (Terminated Service) 8.0% 20.0% 56.0% 16.0% 26
Factor analysis was applied to further reduce the construct items of each risk
dimension. In chapter 3, confirmatory factor analysis was used to determine the validity
of the overall instrument in supporting the hypothetical statements. Here, further
exploratory factor and correlation analysis were used to ascertain whether each
construct item was adequately associated with the main construct factor it represented
and to determine the strength of interdependency between construct items representing
each of the risk dimensions of the S-CRA framework. These additional analyses were
also necessary because of the unproven constructs used in this research.
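The component-extraction step that follows works by taking the eigenvalues of the items' correlation matrix and retaining only components whose eigenvalue exceeds 1 (the Kaiser criterion), as reported in Tables 4.6, 4.8, and 4.10. A minimal sketch with an illustrative correlation matrix, not the study's data:

```python
import numpy as np

# Illustrative 4-item correlation matrix (values invented for demonstration)
corr = np.array([[1.00, 0.44, 0.40, 0.77],
                 [0.44, 1.00, 0.51, 0.42],
                 [0.40, 0.51, 1.00, 0.45],
                 [0.77, 0.42, 0.45, 1.00]])

eigenvalues = np.linalg.eigvalsh(corr)[::-1]       # eigvalsh is ascending; reverse to descending
retained = eigenvalues[eigenvalues > 1.0]          # Kaiser criterion: keep eigenvalues > 1
pct_variance = 100 * eigenvalues / corr.shape[0]   # each component's % of total variance
```

Because the trace of a correlation matrix equals the number of items, each eigenvalue divided by the item count gives that component's share of total variance, which is how the "% of Explained Variance" columns in the tables are derived.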
The security risk dimension measured the level of certainty of applicable security
risks in a SaaS adoption initiative. The analysis in chapter 3 narrowed the construct
items of security risk down to access, integrity/confidentiality, transmission, data
location, ownership, and compliance. Component factor analysis with varimax rotation
indicated the intercorrelation among the six different items of the security dimension
construct. Intercorrelations having an insignificant relationship (p > .05) were discarded
as unreliable construct items for the security risk dimension. The two remaining security
risk construct items and components were access (background check, restricted
access, and breach sub-dimension items) and integrity/confidentiality (intrusion controls
sub-dimension item). As shown in Table 4.5, the intercorrelation among the security risk
construct items ranged from .401 to .772, and all showed a significant relationship (p <
.05). This is a strong indication that the remaining construct item components of the
security risk dimension are not independent of one another.
Table 4.5
Correlation of Security Risk Dimension Construct Items
    Background Check Certainty (Access)    Restricted Access Certainty (Access)    Breach Certainty (Access)    Intrusion Controls Certainty (Integrity/Confidentiality)
Background Check Certainty (Access)
    Spearman's rho Correlation Coefficient    1    .440    .401    .772
    Sig. (two-tailed)    .    .000    .000    .000
    n    114    114    114    114
Restricted Access Certainty (Access)
    Spearman's rho Correlation Coefficient    .772    1    .510    .419
    Sig. (two-tailed)    .000    .    .000    .000
Table 4.6
Security Risk Dimension Reduction Factor Analysis
Component (Construct Item)    Initial Eigenvalues    % of Variance    Cumulative %    Extracted Sum of Squared Loadings    % of Variance    Cumulative %
Background Check Certainty (Access) (1)    2.572    64.303    64.303    2.572    64.303    64.303
Restricted Access Certainty (Access) (2)    .818    20.441    84.743
Breach Certainty (Access) (3)    .377    9.431    94.174
Intrusion Controls Certainty (Integrity/Confidentiality) (4)    .233    5.826    100.000
Factor and correlation analysis of the business continuity risk construct items
also revealed a need for construct item reduction. Intercorrelation analysis necessitated
reducing the number of business continuity construct item components from 18 to the
remaining 5 shown in Table 4.7. The remaining business continuity construct items,
which include scalability, testing, upgrade, support, and customization, all show a
correlation coefficient above .300 and all have a significant correlation (p < .05), thus
affirming interdependency of the construct items. The KMO score for the correlation
matrix of the five business continuity construct items was .802, and Bartlett's test
affirmed the matrix's high significance (approximate chi-square = 127.994, p = .000)
and non-identity status. Factor analysis data provided in Table 4.8 show that the
remaining business continuity components loaded onto the single scalability
component, which had the only eigenvalue above 1 (2.642) and accounted for 52.831%
of the variance in the factor.
Table 4.7
Correlation of Business Continuity Risk Dimension Construct Items
Table 4.8
Business Continuity Risk Dimension Reduction Factor Analysis
Component    Initial Eigenvalues    % of Variance    Cumulative %    Extracted Sum of Squared Loadings    % of Variance    Cumulative %
Scalability Certainty (Scalability) (1)    2.642    52.831    52.831    2.642    52.831    52.831
Test Before Adoption Certainty (Testing) (2)    .743    14.857    67.688
Upgrade Notification (Upgrade) (3)    .638    12.761    80.449
Phone Support Certainty (Support) (4)    .600    12.008    92.457
Customization Fee Certainty (Customization) (5)    .377    7.543    100.000
Integration risk construct items were intended to measure the level of certainty of
risks relating to integrating SaaS into the organization. The proposed integration risk
construct items of functionality, compatibility, and usability were proven valid in chapter
3. Output from intercorrelation analysis, presented in Table 4.9, has correlation
coefficient values among the five factors ranging from .303 to .754, attesting to the
reasonable interdependency strength among the construct items. Additional factor
analysis of adequacy and non-identity resulted in a KMO score of .703, Bartlett’s test
significance (p = .000), and an approximate chi-square of 196.279. As shown in Table
4.10, all factors loaded into the construct item labeled “functional requirements outlined
before selection certainty,” which contributed more than 56% of factor variance.
Table 4.9
Correlation of Integration Risk Dimension Construct Items
    Functional Requirements Outlined Before Selection Certainty (Functionality)    All Functional Requirements Met Certainty (Functionality)    Exchange Data with Internal Software Certainty (Compatibility)    Vendor Assistance in Data Transfer Certainty (Compatibility)    Adequate Reporting Certainty (Reporting)    Easy Navigation Certainty (Usability)
Functional Requirements Outlined Before Selection Certainty (Functionality)
    Spearman's rho Correlation Coefficient    1    .587    .294    .299    .340    .546
    Sig. (two-tailed)    .    .000    .002    .002    .000    .000
All Functional Requirements Met Certainty (Functionality)
    Spearman's rho Correlation Coefficient    .587    1    .387    .452    .423    .428
    Sig. (two-tailed)    .000    .    .000    .004    .000    .000
    n    114    114    114    114    114    114
Exchange Data with Internal Software Certainty (Compatibility)
    Spearman's rho Correlation Coefficient    .294    .387    1    .754    .402    .303
    Sig. (two-tailed)    .002    .000    .    .000    .000    .002
    n    114    114    114    114    114    114
Vendor Assistance in Data Transfer Certainty (Compatibility)
    Spearman's rho Correlation Coefficient    .299    .452    .754    1    .486    .394
    Sig. (two-tailed)    .002    .000    .000    .    .000    .000
    n    114    114    114    114    114    114
Easy Navigation Certainty (Usability)
    Spearman's rho Correlation Coefficient    .546    .428    .303    .394    .327    1
    Sig. (two-tailed)    .000    .000    .002    .000    .001    .
    n    114    114    114    114    114    114
Table 4.10
Integration Risk Dimension Reduction Factor Analysis
Component    Initial Eigenvalues    % of Variance    Cumulative %    Extracted Sum of Squared Loadings    % of Variance    Cumulative %
Functional Requirements Outlined Before Selection Certainty (Functionality) (1)    2.833    56.668    56.668    2.833    56.668    56.668
All Functional Requirements Met Certainty (Functionality) (2)    .994    19.884    76.552
Exchange Data with Internal Software Certainty (Compatibility) (3)    .566    11.316    87.868
Vendor Assistance in Data Transfer Certainty (Compatibility) (4)    .372    7.440    95.308
Easy Navigation Certainty (Usability) (6)    .235    4.692    100.000
Hypothesis Testing
• Null Security Risk Hypothesis (Hos1): SaaS adoption success (AS) does not
depend on the decision-maker’s level of certainty (C) of the SaaS security (S)
risk dimension as defined by the S-CRA framework. (Hos1: AS = (S)c)
The four security construct items from the earlier factor analysis were used to
test for a significant relationship between level of satisfaction and the security risk
dimension. As can be seen in the test results in Table 4.11, the background check and
restricted-access components of the access construct item both show no significant
correlation and entail a low correlation coefficient, thus negating their relevance as
construct items to the level-of-satisfaction construct. Pearson chi-square testing showed
no significance for either component with the satisfaction construct (p > .05).
Spearman’s rho correlation coefficient values were also negligible for both construct
items. Nevertheless, the fact that the remaining access and integrity/confidentiality risk
construct items of breach and intrusion controls each has a significant correlation (p <
.05) with level of satisfaction and a reasonable correlation coefficient—.287 and .297 at
9 degrees of freedom—is quantitative evidence to reject the null hypothesis, Hos1, and
accept the alternate hypothesis, Has1: that SaaS adoption success does depend on the
decision-maker’s level of certainty of prevalent security risks, particularly in terms of his
or her awareness of the provider’s security breach policy and intrusion control elements.
This finding also validates the security dimension construct as relevant for inclusion in
the S-CRA framework and SaaS adoption decision process.
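The Pearson chi-square statistic behind these tests compares observed cross-tabulation counts against the counts expected if the two variables were independent. A minimal sketch over a hypothetical certainty-by-satisfaction table (the counts are invented for illustration, not drawn from the survey):

```python
import numpy as np

# Hypothetical cross-tabulation: rows = certainty levels, columns = satisfaction levels
observed = np.array([[30, 35, 10, 4],
                     [ 5,  6,  7, 1],
                     [ 4,  5,  4, 3]])

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals @ col_totals / observed.sum()   # counts expected under independence

chi_square = float(((observed - expected) ** 2 / expected).sum())
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)  # (rows-1) x (cols-1) = 6 here
```

The statistic is then compared against the chi-square distribution at the computed degrees of freedom to obtain the two-sided significance values reported in Tables 4.11 through 4.13.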
Table 4.11
Level of Satisfaction and Security Risk Certainty Hypothesis Test Analysis

Construct Item                                             Pearson Chi-Square     Spearman’s rho    df
                                                           (Sig., two-sided)      Correlation
Background Check Certainty (Access)                        .043                   -0.027            9
Restricted Access Certainty (Access)                       .217                    0.068            9
Breach Certainty (Access)                                  .011                    0.287            9
Intrusion Control Certainty (Integrity/Confidentiality)    .000                    0.297            9
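The Spearman’s rho values reported alongside the chi-square significance levels in Table 4.11 can be reproduced with a short script. The sketch below implements Spearman’s rho in pure Python (rho is simply the Pearson correlation computed on average ranks); the paired certainty and satisfaction ratings are hypothetical placeholders, not the survey data:

```python
def average_ranks(values):
    """Assign 1-based ranks, giving tied values the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied values starting at position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions, converted to 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical paired responses: certainty rating (1-4) vs. satisfaction (1-5)
certainty = [4, 3, 4, 2, 1, 3, 4, 2]
satisfaction = [5, 4, 4, 2, 1, 3, 5, 2]
rho = spearman_rho(certainty, satisfaction)
```

A rho near the .287-.297 range reported above would indicate a modest monotone association; the strongly aligned placeholder data here yield a rho close to 1.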
• Null Business Continuity Risk Hypothesis (Hobc1): SaaS adoption success (AS) does
not depend on the decision-maker’s level of certainty (C) of the SaaS business
continuity (BC) risk dimension as defined by the S-CRA framework. (Hobc1: AS = (BC)c)
The business continuity construct items retained from the factor analysis in chapter 3
were used to test this second null hypothesis: that adoption success does not depend
on the decision-maker’s level of certainty of business continuity components. Statistical
test results,
shown in Table 4.12, indicate that a highly significant correlation (p = .000) exists
between SaaS adoption satisfaction and levels of the decision-maker’s certainty about
the scalability of the SaaS solution, provisions allowing for testing before adoption,
upgrade notification, reliable phone support, and hidden customization fees. Non-
parametric testing also revealed that each of the independent business continuity
construct items displayed a relatively high correlation to the level-of-satisfaction
construct, ranging from .360 to .533. These two complementary tests indicate that SaaS
business continuity certainty is a predictor of adoption success. The test results were
sufficient to support rejecting the null hypothesis, Hobc1—that business continuity risk
certainty is not a factor in SaaS adoption success—and accepting the alternate
hypothesis, Habc1—that it is.
Table 4.12
Level of Satisfaction and Business Continuity Risk Certainty Hypothesis Test Analysis

Construct Item    Pearson Chi-Square (Sig., two-sided)    Spearman’s rho Correlation    df
• Null Integration Risk Hypothesis (Hoi1): SaaS adoption success (AS) does not
depend on the decision-maker’s level of certainty (C) of the SaaS integration (I)
risk dimension as defined by the S-CRA framework. (Hoi1: AS = (I)c)
Table 4.13
Level of Satisfaction and Integration Risk Certainty Hypothesis Test Analysis

Construct Item                                             Pearson Chi-Square     Spearman’s rho    df
                                                           (Sig., two-sided)      Correlation
Functional Requirements Outlined Before
Selection Certainty (Functionality)                        .000                    0.561            9
Chapter 5
Conclusion
Introduction
Major Findings
This research introduced the S-CRA framework specifically for evaluating and
selecting SaaS provisions from among competing options. To substantiate the model
and determine the relevancy of security, business continuity, and integration risks to
SaaS adoption success, the research relied on a quantitative approach to collecting and
analyzing data. The survey instrument included questions compiled from a literature
review and existing cloud assessment frameworks, such as CSA’s (2010) Consensus
Assessments Initiative Questionnaire, ENISA’s (2009) risk factors framework, the CIO
Council’s (2010) cloud security assessment framework, and Heiser’s (2010a) cloud risk
dimensions. The online survey netted 114 participants from a variety of organization
types, including government agencies, private corporations, nonprofits, and academic
institutions. Factor analysis revealed that the instrument met all the criteria for validity
and reliability and that the satisfaction and certainty scales used were adequate
measures of their respective constructs. Descriptive analysis disclosed demographic
details about the respondents. Correlation and significance analysis were used to test
the three hypotheses, which implied a relationship between adoption success and the
constructs of security, business continuity, and integration risk certainty.
Although the descriptive analysis shed light on the respondents from a variety of
angles, an interesting fact emerged: The majority of respondents (66.7%) were from
private corporations, whereas personnel from government entities accounted for only
17.1% of participants. Di Maio and Claps (2010) discussed this discrepancy in cloud
adoption between public and private organizations as rooted in such factors as the
struggle among government entities to describe the risk and value of cloud computing,
to succinctly define cloud computing, and to resolve internal concerns about ownership
and control of an outsourced infrastructure. Di Maio and Claps (2010) also indicated
that government entities need a government-tailored framework for assessing cloud
services that incorporates their heightened security concerns. The CIO Council’s (2010)
requirement-based cloud assessment framework can accommodate the cloud
evaluation needs of government agencies that demand stricter security from cloud
providers, but its extensive requirements may limit the number of cloud service
providers that can meet the expectations of government agencies. Private organizations
may face cloud security issues similar to those of government agencies, but the lighter
IT compliance requirements private organizations face and their more receptive attitude
toward outside control of IT resources may help explain why cloud-based services, such
as SaaS, have a stronger foothold in the private sector than among government agencies.
Despite the security risks and other adoption concerns, the high satisfaction level
across organization types, job function, and length of usage, as shown in the
demographic analysis of the results from this research, is a testament to the value of
SaaS as a reliable computing platform. The data reveal a satisfaction rate of more than
55% (combining “somewhat satisfied” and “very satisfied” responses) for each of the
key demographic measures. Realizing that natural concerns about security, continuity,
and integration exist among their prospective clients, cloud providers, including SaaS
providers, may be making a concerted effort to proactively address these concerns.
Nevertheless, the data show that more work is still needed to improve satisfaction level
for SaaS. This is particularly true among government organizations, which show an
overall satisfaction rating of 57.9%; among IT executives, of which only 55.5% indicated
that they were somewhat or very satisfied; and among organizations that have
surpassed the SaaS adoption honeymoon phase and reached the 3-year threshold, at
which point the overall satisfaction rate drops by nearly 11 percentage points, from
81.8% to 70.9%.
The predictive effect of security risk certainty to SaaS adoption success was
confirmed by this research. The research findings indicate a significant relationship (p <
.05) between the security risk certainty construct and level of satisfaction construct that
was used to represent adoption success. This supports the alternate hypothesis, Has1,
and confirms that security risk assessment belongs in a rational SaaS risk assessment
framework.
Several research studies on cloud computing indicated security risk as perhaps the
most significant risk component of cloud computing in general (Heiser, 2010a; Paquette,
Jaeger, & Wilson, 2010; Subashini & Kavitha, 2010; Zissis & Lekkas, in press). Both
ENISA’s (2009) cloud risk assessment framework and CSA’s (2010) Consensus
Assessments Initiative Questionnaire are specifically targeted at evaluating cloud
security risks, with the assumption that the confidentiality of information and liability
resulting from provider infrastructure failure are the top concerns relating to adopting
cloud-based services. Even Heiser’s (2010a) cloud evaluation framework suggests that
security risk assessment should be the single deciding factor for migrating sensitive
data to a cloud-based service.
Nevertheless, the research data also revealed that several security risk construct
items deemed significant to SaaS success were not as significant as presumed.
Surprisingly, several security concerns were determined to be irrelevant to successful
SaaS adoption; these include: reassurance that the SaaS provider is able to provide a
valid audit report of its operations (SAS70 report), contradicting a recommendation by
Heiser (2010b); certainty about whether data are stored in country; data ownership
issues; concerns about encryption of data transmitted via the open Internet medium;
and concerns that each client’s data are kept separately in the multi-tenant environment
that typifies the underlying SaaS infrastructure. These results suggest that these
specific security concerns are not directly related to the primary cloud security issues of
confidentiality and infrastructure reliability suggested by the standard cloud risk
assessment frameworks. Nevertheless, the research confirms confidentiality and
infrastructure reliability as top security issues in showing that certainty about data
access and controls and policies put in place by the SaaS provider to enhance data
integrity and confidentiality are the key security risk predictors of successful SaaS
adoption.
The proposition that the flexibility or scalability of the SaaS solution is a valid risk factor
was also affirmed by the research data, which showed scalability as one of the
legitimate business continuity
risk factors contributing to SaaS adoption success. The fact that scalability showed the
highest correlation to level of satisfaction among the business continuity risk items is an
indication that organizations value the dynamic flexibility of SaaS in seamlessly
accommodating potential growth in data processing and data storage.
A two-tailed significance test and a correlation test confirm that integration risk
certainty is not only significant to level of satisfaction, but it is also both positively and
negatively correlated to level of satisfaction as it pertains to SaaS adoption. This
rejection of the null hypothesis, Hoi1, in favor of the alternate hypothesis, Hai1, suggests
that integration risk concerns, such as the ability of the adopted SaaS solution to
continually meet functional requirements, its compatibility with internal systems, and its
ease of use, are relevant to determining eventual adopters’ satisfaction with the
software. Unlike COTS software, which requires a one-time integration into the
organization’s computing environment, SaaS naturally involves an ongoing exchange of
data between the provider’s systems and the adopter’s. The adopter of a SaaS email
solution enjoys the convenience of access to email from any computer with Internet
connectivity from anywhere in the world, as well as the flexibility of paying for only the
amount of storage and the number of email accounts used. However, integration
becomes a major bottleneck if the SaaS email solution is incompatible with mobile
devices used by the organization’s staff or certain features in the solution limit the
efficient exchange of data among systems within the organization. Heiser (2010a)
regards integration risk as growing in significance in cloud risk evaluation on par with
security and business continuity risks, especially as the use of cloud-based services
increases and the need to integrate on-premise and cloud-based business applications
to exchange data seamlessly gains importance. The findings of this research confirm
integration risk certainty as a relevant factor for adopters to consider when conducting a
risk assessment of SaaS provisions prior to selection.
Taken together, these findings indicate that being aware of adoption risks beforehand
helps the SaaS decision-maker select
solutions that entail minimal risk, develop a strategy for mitigating observed risks, or
both.
The final research question sought a solution that links mitigation with the SaaS
decision process. The S-CRA framework was introduced as the answer to this puzzle.
The framework attempts to instill formality, objectivity, and rationality into the SaaS
decision process by requiring the adopter to pose targeted questions in each risk
dimension to each SaaS provider, rate the certainty level for each response, apply a
weight to each dimension as necessary, and finally, derive a numerical score for each
competing SaaS solution. The S-CRA framework not only complements rational
decision models that exist for selecting other types of software, but it is also a normative
decision model that enables the SaaS decision-maker to maximize the value of the
decision to the organization. The final S-CRA framework questions and sample rating
scheme are included in Appendix V.
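The scoring procedure described above (rate the certainty of each answer, weight each risk dimension, derive a comparable numerical score per provider) can be illustrated with a minimal sketch. The numeric certainty mapping, dimension names, and weights below are hypothetical illustrations; the actual questions and rating scheme are those in Appendix V:

```python
def s_cra_score(responses, weights):
    """Weighted S-CRA score for one SaaS provider.

    For each risk dimension, average the certainty ratings of its applicable
    questions, then combine the dimension averages using the given weights.
    Hypothetical numeric scale: 4 = very certain ... 1 = not at all certain,
    0 = "does not apply" (excluded from the average).
    """
    total_weight = sum(weights.values())
    score = 0.0
    for dimension, ratings in responses.items():
        applicable = [r for r in ratings if r > 0]  # drop "does not apply"
        if applicable:
            score += weights[dimension] * sum(applicable) / len(applicable)
    return score / total_weight

# Scoring one hypothetical provider across the three risk dimensions
responses = {
    "security":            [4, 4, 3, 3],  # e.g. breach policy, intrusion controls
    "business_continuity": [2, 3, 0, 3],  # 0 = question did not apply
    "integration":         [4, 3],
}
weights = {"security": 2.0, "business_continuity": 1.0, "integration": 1.0}
score = s_cra_score(responses, weights)
```

Competing SaaS solutions can then be ranked by their scores, with the weights letting an adopter emphasize, say, security over integration as its circumstances require.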
The methodology used in this research and the S-CRA framework have a few
inherent limitations. The first is the use of a questionnaire as the primary instrument for
data collection. Rea and Parker (2005) note several disadvantages to using web-based,
email-distributed surveys, including the fact that the practice may entail a self-selection
bias in excluding those who are uncomfortable with email and web browsing from
participating and the fact that participants may not follow instructions correctly because
of limited personal contact with the researchers. Furthermore, although the security,
business continuity, and integration risk dimensions used as primary constructs were
synthesized from among many risks identified in the literature sources, other risk
concerns that may or may not prove relevant to SaaS selection were deliberately
excluded to improve the manageability of the research. Thus, the questionnaire used
may not anticipate all the wide-ranging and potentially relevant SaaS risk concerns. The
implication here is that another researcher could possibly identify other SaaS risk
concerns and derive an entirely different set of research questions, in addition to using a
different methodology for collecting the data. It is possible to repeat this study using
more risk dimensions and a larger representative sample, but the selection of additional
SaaS dimensions would need to be scrutinized and the research design would need to
accommodate participation by both phone and email.
The S-CRA framework’s limitations are largely derived from the limitations
apparent in the research model that feeds the framework. The relevant risk dimensions
determined from the research analysis and the corresponding certainty questions form
the basis of the framework. Although several risk evaluation questions were discovered
to be irrelevant to their respective constructs during the analysis, the onus is on the
adopting organization to determine the necessity of including these irrelevant questions
in their SaaS risk assessment exercises or creating additional questions that address
their unique requirements. Somewhat ironically, a major limitation of the S-CRA
framework is that the scope of the risk assessment questions may not be
comprehensive enough to accommodate all organizations and may require that
organizations modify the S-CRA framework’s evaluation questions as they see fit.
Nevertheless, the framework is flexible enough to support extension or reduction in the
number and scope of the original evaluation questions.
Managerial Implications
With SaaS and its underlying cloud-computing technology still in their infancy,
this research fills some of the prevailing gaps between theory and practice with regard
to evaluating the risk of SaaS technology. The issue of the efficacy of managerial
decision-making for IT investments is at the core of this research. From a theoretical
perspective, the research findings suggest that if managerial decision-makers are aware
of the risks beforehand, their adoption experience will likely be uneventful and the
organization can successfully leverage all the cost and efficiency benefits inherent in
SaaS technology. The findings also suggest that the key to meeting the requirement for
risk awareness is information. A risk assessment exercise is only as good as the
information attained. As discussed in chapter 2, the rational and unbiased decision
process entails establishing goals, gathering relevant information, processing that
information, and making an informed decision. The SaaS selection process must
embrace the rational decision process to be effective. As is the case in any technology
acquisition, uncertainty exists in SaaS selection, but this research provides a theoretical
foundation that sheds light on and describes these ambiguities from a risk concern
perspective comprehensible to nontechnical and technical managers.
References
Corran, E. R., & Witt, H. H. (1982). Reliability analysis techniques for the design
engineer. Reliability Engineering, 3(1), 47-57.
Creating the cumulus. (2008, October). Economist. Retrieved from
http://www.economist.com/node/12411908
Cummings, L. (1998). The scientific reductionism of relevance theory: The lesson from
logical positivism. Journal of Pragmatics, 29(1), 1-12.
D’Amico, V. (2005). 10 easy steps to the right business software. Consulting to
Management, 16(2), 47-53.
Damodaran, A. (2008). Strategic risk taking: A framework for risk management.
Pennsylvania: Wharton School Publishing.
D'Andrea, G. (2006). Tools for effective decision-making. The Case Manager, 17(1), 43-
59.
Davies, J. (2008, July). SaaS impact on enterprise feedback management. Gartner
Research.
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information
systems success: A ten-year update. Journal of Management Information
Systems, 19(4), 9-30.
Department of Labor, Bureau of Labor Statistics (2008). U.S. telecommunications
report, Q1 2008. Retrieved from http://www.bls.gov/
Desisto, R. P., Paquet, R., & Pring, B. (2007, June). Hybrid SaaS: Questions and
answers. Gartner Research.
Desisto, R. P., & Pring, B. (2010, May). Essential SaaS overview and 2010 guide to
SaaS research. Gartner Research.
Di Maio, A. D., & Claps, M. (2010, May). Government in the cloud. Gartner Research.
Donston, D. (2008). Shaklee cleans up with SaaS. eWeek. Retrieved from
http://www.eweek.com/c/a/Virtualization/Shaklee-Cleans-Up-with-SAAS/1/
Ellis, D. B., & Banerjea, D. K. (2007). Successful software selection. Quality, 46(8), 44-
46, 48.
Ersdal, G., & Aven, T. (2008) Risk informed decision-making and its ethical basis.
Reliability Engineering and System Safety, 93(2), 197-205.
European Network and Information Security Agency (ENISA). (2009). Cloud computing
risk assessment. Retrieved from
http://www.enisa.europa.eu/act/rm/files/deliverables/cloud-computing-risk-
assessment/?searchterm=cloud%20computing
eWeek (2008, April). Software as a service survey. Retrieved from www.eweek.com
Fabbi, M. (2010, November). Case study: Reducing cloud service support cost, speed is
important. Gartner Research.
Federal Risk and Authorization Management Program (FedRAMP) (2010). Proposed
Security assessment and authorization for U.S. government cloud computing.
Retrieved from https://info.apps.gov/sites/default/files/Proposed-Security-
Assessment-and-Authorization-for-Cloud-Computing.pdf
Fornell, C., & Larcker, D. (1981). Evaluating structural equations models with
observable variables and measurement error. Journal of Marketing Research,
18, 39-50.
Gilboa, I. (2009). Theory of decision under uncertainty. New York: Cambridge University
Press.
Gopinath, M., & Nyer, P.U. (2009). The effect of public commitment on resistance to
persuasion: The influence of attitude certainty, issue importance, susceptibility to
normative influence, preference for consistency and source proximity.
International Journal of Research in Marketing, 26(1), 60-68.
Gutnik, L.A., Hakimzada, A. F., Yoskowitz, N. A., & Patel, V. L. (2006). The role of
emotion in decision-making: A cognitive neuroeconomic approach towards
understanding sexual risk behavior. Journal of Biomedical Informatics, 39(6),
720-736.
Hall, D. J., & Davis, R. A. (2007). Engaging multiple perspectives: A value-based
decision-making model. Decision Support Systems, 43(4), 1588-1604.
Havenstein, H. (2008). Google adds a weapon in its battle to kill Windows.
Computerworld. Retrieved from
http://www.computerworld.com/action/article.do?command=viewArticleBasic&arti
cleId=9114004
Hayes, B. (2008, July). Cloud computing. Communications of the ACM, 51(7), 9-11.
Heiser, J. (2010a, March). Analyzing the risk dimensions of cloud and SaaS computing.
Gartner Research.
Heiser, J. (2010b, April). Survey results: Assessment practices for cloud, SaaS and
partner risks. Gartner Research.
Hofstede, G. (1991). Cultures and organizations: Software of the mind. London:
McGraw-Hill.
Holford, W. (2009). Risk, knowledge and contradiction: An alternative and
transdisciplinary view as to how risk is induced. Futures, 41(7), 455.
Hollander, N. (2000). A guide to software package evaluation and selection: The R2ISC
method. New York: AMACOM.
Hurst, P. M., & Siegel, S. (1956). Prediction of decisions from a higher ordered metric
scale of utility. Journal of Experimental Psychology, 52(2), 138-144.
Info-Tech Research Group. (2006, September). SaaS: What it is and why you should
care. Retrieved from http://www.infotech.com/
Olson, M. H., & Ives, B. (1981). User involvement in system design: an empirical test of
alternative approaches. Information & Management, 4(4), 183.
Janis, I. L. (1989). Crucial decisions: Leadership in policymaking and crisis
management. New York: Free Press.
Jeffrey, R. C. (1983). The logic of decision (2nd ed.). Chicago: University of Chicago
Press.
Jeffrey, R. (2004). Subjective probability: The real thing. Cambridge: Cambridge
University Press.
Kaiser, J. F. (1974, April). Nonrecursive digital filter design using the Lo-sinh window
function. IEEE Symposium on Circuits and Systems. Retrieved from
http://www.mathworks.com/help/toolbox/signal/kaiserord.html.
Khattab, A., Aldehayyat, J., & Stein, W. (2010). Informing country risk assessment in
international business. International Journal of Business and Management, 5(7),
54-62.
Kim, D., Ferrin, D. L., & Rao, H. R. (2007). A trust-based consumer decision-making
model in electronic commerce: The role of trust, perceived risk, and their
antecedents. Decision Support Systems, 44, 544-564.
Kitchenham, B. A., Pickard, L., Linkman, S., & Jones, P. (2005). A framework for
evaluating a software bidding model. Information and Software Technology,
47(11), 747-760.
Koller, G. (2005). Risk assessment and decision-making in business and industry: A
practical guide. Cambridge: Cambridge University Press; Boca Raton, FL:
Chapman & Hall/CRC.
Krill, P. (2008). Cloud computing, Web 2.0 trends emphasized. Infoworld. Retrieved
from http://www.infoworld.com/d/developer-world/cloud-computing-web-20-
trends-emphasized-075
Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago and London:
University of Chicago Press.
Lamarre, E., & Pergler, M. (2010). Risk: Seeing around the corners. McKinsey
Quarterly, (1), 102-106. Retrieved from Business Source Complete database.
Lee, J. W., & Kim, S. H. (2000). Using analytic network process and goal programming
for interdependent information system project selection. Computers and
Operations Research, 27(4), 367-382.
Liang-Chuan, W., & Chorng-Shyong, O. (2008). Management of information technology
investment: A framework based on a real options and mean–variance theory
perspective. Technovation, 28, 122-134.
Longwood, J. (2009, June). Evaluating, selecting, and managing cloud service
providers. Gartner Research.
Maiden, N. A., & Ncube, C. (1998, March/April). Acquiring COTS software selection
requirements. IEEE Software, 15(2), 46-56.
Maoz, M., Collins, K., Alvarez, G., Thompson, E., Fletcher C., Jacobs, J., Woods, J., &
Davies, J. (2010, August). Q&A: Customer service for 2011 and the Gartner
CRM research team. Gartner Research.
Marston, S., Li, Z., Bandyopadhyay, S., Zhang, J., & Ghalsasi, A. (2011). Cloud
computing: The business perspective. Decision Support Systems, 51(1), 176-
189.
Mauer, W., & Bona, A. (2007, August). Best practices for negotiating software as a
service contract. Gartner Research.
McKinney, V., Yoon, K., & Zahedi, F. (2002). The measurement of Web-customer
satisfaction: An expectation and disconfirmation approach. Information Systems
Research, 13(3), 296-315.
McNee, W. S. (2007). SaaS 2.0. Journal of Digital Asset Management, 3(4), 209-214.
Meade, L., & Sarkis, J. (1998). Strategic analysis of logistics and supply chain
management systems using the analytical network process. Transportation
Research Part E: Logistics and Transportation Review, 34(3), 201-215.
Mertz, S. A., Eid, T., Eschinger, C., Huang H. H., Pang, C., & Pring, B. (2008,
September). Market trends: Software as a service, worldwide, 2007-2012.
Gartner Research.
Miller, M. (2008). Cloud computing: Web-based applications that change the way you
work and collaborate online. Indianapolis, IN: Que.
Moltzen, E. F. (2008). Customer relationship management: Intuit goes SaaS with
QuickBooks. CRN: CRNTech, 21, 10.
National Institute of Science and Technology (NIST) (2010a, February). Guide for
applying the risk management framework to federal information systems.
(Special Publication 800-37). Retrieved from
http://csrc.nist.gov/publications/nistpubs/800-37-rev1/sp800-37-rev1-final.pdf
Saaty, T.L. (1980). The analytic hierarchy process. New York: McGraw-Hill.
Sang-Yong, T. L., Hee-Woong, K., & Gupta, S. (2009) Measuring open source software
success. Omega, 37(2), 426-438.
Saunders, M., Lewis, P., & Thornhill, A. (2007). Research methods for business
students (4th ed.). London: Prentice Hall Financial Times.
Subashini, S., & Kavitha, V. (2010). A survey on security issues in service delivery
models of cloud computing. Journal of Network and Computer Applications,
34(1), 1-11.
Svantesson, D., & Clarke, R. (2010). Privacy and consumer risks in cloud computing.
Computer Law and Security Review, 26(4), 391-397.
Tasa, K., & Whyte, G. (2005). Collective efficacy and vigilant problem solving in group
decision making: A non-linear model. Organizational Behavior and Human
Decision Processes, 96(2), 119-129.
Torkzadeh, G., & Doll, W.J. (1999). The development of a tool for measuring the
perceived impact of information technology on work. Omega International Journal
of Management, 27(1), 327-339.
Trochim, W. M. K. (2006). Research methods knowledge base. Retrieved from
http://www.socialresearchmethods.net/kb/index.php
U.S. Census Bureau (2007). 2007 economic census. Retrieved from
http://www.census.gov/econ/census07/
Van Ginkel, W. P., & Van Knippenberg, D. (2008). Group information elaboration and
group decision-making: The role of shared task representations. Organizational
Behavior and Human Decision Processes, 105(1), 82-97.
Walter, S. D., Mitchell, A., & Southwell, D. (1995). Use of certainty of opinion data to
enhance clinical decision-making. Journal of Clinical Epidemiology, 48(7), 897-
902.
Wasserman, A., Murugan, P., & Chan C. (2006). Business readiness rating for open
source: A proposed open standard to facilitate assessment and adoption of open
source software. Retrieved from http://www.openbrr.org
Weick, K., & Quinn, R. (1999). Organizational change and development. Annual Review
of Psychology, 50, 361-386.
Weil, N. (2008a, November). How fast is the road to SaaS: Vendor would make it easier
to migrate apps to hosted model. CIO, 22(4), 8.
Weil, N. (2008b, October). SaaS and the IT staff: As software-as-a-service offerings
expand, IT jobs will change. Here's what the shift may mean to IT
departments. CIO, 22(4), 12.
Weiss, A. (2007). Computing in the clouds. NetWorker, 11(4), 16-25.
Where the cloud meets the ground. (2008, October). Economist. Retrieved from
http://www.economist.com/research/articlesbysubject/displaystory.cfm?subjectid
=348981&story_id=E1_TNQTTJND
Wilson, J. K., & Rapee, R. M. (2006). Self-concept certainty in social phobia. Behaviour
Research and Therapy, 44(1), 113-136.
Wixom, B. H., & Watson, H. J. (2001). An empirical investigation of the factors affecting
data warehousing success. MIS Quarterly 25(1), 17-41.
Worthen, B. (2008, September 23). Overuse of the term “cloud computing” clouds
meaning of the tech buzz phrase. Wall Street Journal. Retrieved from
http://online.wsj.com/article/SB122214259441966713.html
Yamin, M., & Sinkovics, R. (2010). ICT deployment and resource-based power in
multinational enterprise futures. Futures, 42(9), 952.
Yeo, R. K., & Ajam, M. Y. (2010). Technological development and challenges in
strategizing organizational change. International Journal of Organizational
Analysis, 18(3), 295-320.
Zellman, E., Kaye-Blake, W., & Abell, W. (2010). Identifying consumer decision-making
strategies using alternative methods. Qualitative Market Research, 13(3), 271-
286.
Zeng, J., An, M., & Smith, N. J. (2007). Application of fuzzy logic decision making
methodology to construction project risk management. International Journal of
Project Management, 25(1), 589-600.
Zissis, D., & Lekkas, D. (in press). Addressing cloud computing security issues. Future
Generation Computer Systems.
Study Title: Survey of Online Software Selection and Risk Assessment (Software-As-a-Service)
Thank you for taking the time to complete this survey on online software selection and risk assessment.
The aim of this survey is to find out whether awareness of the risks associated with adopting
subscription-based online software in your organization can lead to a better experience in using the
software and a better relationship with the software vendor.
Completing this survey will take no more than 15 minutes of your time. The survey is completed
anonymously and all data collected in this study will be kept confidential. Your responses will not be
passed on to any third parties and will be used only for academic research.
There are no anticipated risks to participating in this study. There may be no direct benefit to you other
than the sense of contributing to knowledge in this particular area.
If you would like further information about the study please contact me (Lionel Bernard) at
lbernard@arc.gov or call me at 202.731.8402.
This study has been reviewed and approved by the Human Subjects Review Board, University of
Maryland University College. If you have concerns about ethical aspects of the study, please contact
Dr. John Aje, Graduate School Representative, University of Maryland University College.
I have read and understand the above information. I agree to participate in this study.
o Yes, I agree.
o Non-Profit (3)
o IT Staff (6)
o Consultant (7)
QG3: Does your organization subscribe to any online based (pay-as-you-go) software that you
primarily use via a web browser and that is owned by an outside vendor?
o Yes (1)
o No (2)
QG4: If yes (to question #3), can you recall how long it has been since your organization began
subscribing to online based (pay-as-you-go) software?
o 1 to 3 years (2)
o 4 to 6 years (3)
QG5: Were you involved in the evaluation and selection process for any of the online software
that your organization uses? (If yes, go to Q6, if no, end survey.)
o Yes (1)
o No (2)
Please respond to the following question by thinking about one (1) of the online software
applications used in your organization that you were involved in selecting, that you are
familiar with, and about which you can give your honest opinion.
QG6: In your opinion, how satisfied are you or your organization overall with this online based
software that your organization is using or has used in the past?
o Yes (1)
o No (2)
Please respond to the following questions by thinking about the online software and the vendor
who owns the software that your organization subscribes to. Think about how well you know the
online software and the vendor and respond based on information you have or can remember.
Please respond to each question honestly and as best as you can. Respond based on how certain
or uncertain you were about a specific item related to the online software or the vendor.
QR8: How certain are you that the vendor conducts security checks on their staff?
QR9: How certain are you that the vendor has controls in place to restrict access to your data by
their staff?
QR10: How certain are you that the vendor has policies in place about informing you regarding a
security breach that allowed someone to get access to your data?
o Very Certain (1)   o Somewhat Certain (2)   o Not Very Certain (3)   o Not at All Certain (4)   o This Does Not Apply (0)
QR11: How certain are you that the vendor has internal controls in place to prevent intrusion,
phishing, and malware attacks against your data?
QR12: How certain are you that the online software requires a username and password (or some
other form of authentication) to gain access to the online software?
QR13: How certain are you that the vendor will allow you to investigate your usage logs and
access records?
QR14: How certain are you that the vendor encrypts the communication whenever someone in
your organization uses a web browser to access the online software?
QR15: How certain are you that the online software stores your data in the same country where
your organization is a legal entity?
QR16: How certain are you about who owns your data that is stored in the online software?
QR17: How certain are you that the vendor enables you to retain your Sarbanes-Oxley (SOX)
and/or HIPAA compliance (if applicable)?
QR18: How certain are you that the vendor furnished or can furnish a recent Statement on
Auditing Standards (SAS70) report?
QB19: How certain are you that your organization tested the online software before adopting it?
QB20: How certain are you that the vendor has a disaster contingency plan?
QB21: How certain are you that the vendor is able to recover your lost or archived data at your
request?
QB22: How certain are you about the uptime and downtime performance requirements for this
online software?
QB23: How certain are you that the online software can scale to accommodate an increase in
usage or storage volume by your organization?
QB24: How certain are you that the vendor has a policy for notifying you before an upgrade to the
online software?
QB25: How certain are you that the vendor provides support by phone?
QB26: How certain are you that the vendor provides support by email?
QB27: How certain are you that the vendor provides support by a web-based trouble ticket
system?
QB28: How certain are you about the subscription rates/fees charged by the online software
vendor?
QB29: How certain are you about the contract payment terms with the vendor (i.e., monthly,
quarterly, or annually)?
QB30: How certain are you about whether or not the vendor can increase subscription fees at any
time or only when the contract is renewed?
QB31: How certain are you that the vendor imposes a penalty for early termination of your
subscription?
QB32: How certain are you that the vendor will return your data if your organization cancels its
subscription to the online software?
QB33: How certain are you about whether or not the vendor allows for customizing the online
software?
QB34: How certain are you about whether or not the vendor charges customization fees?
QB35: How certain are you about whether or not the vendor provides training on using the online
software?
QB36: How certain are you about whether or not the vendor charges training fees?
QB37: How certain are you that the vendor has provided documentation for the online software in
electronic and/or print format?
QB38: How certain are you about whether or not someone in your organization is the primary
person responsible for communicating with the vendor?
QI39: How certain are you that a list of functional requirements was outlined by your organization
before selecting the online software?
QI40: How certain are you that the online software meets all or most of your functional
requirements?
QI41: How certain are you that the online software can work/communicate with other software
used by your organization if required (i.e., exchange data)?
QI42: How certain are you that the online software vendor will assist in transferring your data from
your in-house system to the online software, if necessary?
QI43: How certain are you that the online software has adequate reporting functionality to meet
your organization’s needs?
QI44: How certain are you that the online software is easy to navigate and use based on your
experience with the online software?
QOE45: Please provide any feedback and comments about your experience in completing this
survey and/or your experience in using online (pay-as-you-go) software in your organization that
you think is helpful to this research.
Mr. Bernard –
You are quite welcome. Please consider this formal approval to go ahead with this survey to our clients.
Regards,
Eliot
Mr. Ware,
Thanks for agreeing to forward my SAAS survey to AllCovered clients. Please respond to this email by stating your approval to
go ahead with this survey of your clients. I simply need this formal approval as documentation for the University of Maryland’s
research board.
Thanks again,
Lionel Bernard
Hi Craig,
Thanks for agreeing to forward my doctoral survey to DS3 clients via email. Below is a link to the survey. My research is on SaaS
risk assessment. Once you start the survey it will give you a full explanation of the research. Please review the survey and give
any feedback you can. I’m looking to have it sent out during the week of November 16. When you are ready to send it out I’ll
send you an introductory email to use when sending.
https://www.surveymonkey.com/s.aspx?sm=372M8xMSPZVDDMbeIC29Vw_3d_3d
Thanks,
Lionel Bernard
Investigate Log/Access Certainty (Segregation): How certain are you that the vendor will allow you to investigate your usage logs and access records?
Data in Same Country Certainty (Location): How certain are you that the online software stores your data in the same country where your organization is a legal entity?
Data Ownership Certainty (Ownership): How certain are you about who owns your data that is stored in the online software?
SOX/HIPAA Compliance Certainty (Compliance): How certain are you that the vendor enables you to retain your Sarbanes-Oxley (SOX) and/or HIPAA compliance (if applicable)?
Vendor SAS70 Report Certainty (Compliance): How certain are you that the vendor furnished or can furnish a recent Statement on Auditing Standards (SAS70) report?

Business Continuity Dimension (BC)
Vendor Disaster Plan Certainty (Recovery): How certain are you that the vendor has a disaster contingency plan?
Recover Lost Archive Data Certainty (Recovery): How certain are you that the vendor is able to recover your lost or archived data at your request?
Uptime/Downtime Performance Certainty (Availability): How certain are you about the uptime and downtime performance requirements for this online software?
Payment Terms Certainty (Pricing): How certain are you about the contract payment terms with the online software vendor (i.e., monthly, quarterly, or annually)?
Subscription Fee Increase Certainty (Pricing): How certain are you about whether or not the vendor can increase subscription fees at any time or only when the contract is renewed?
Early Termination Penalty Certainty (Termination): How certain are you that the vendor imposes a penalty for early termination of your subscription?
Data Return on Cancellation Certainty (Termination): How certain are you that the vendor will return your data if your organization cancels its subscription to the online software?
Customization Allowed Certainty (Customization): How certain are you about whether or not the vendor allows for customizing the online software?

Integration Risk Dimension (I)
Adequate Report Certainty (Reporting): How certain are you that the online software has adequate reporting functionality to meet your organization's needs?
[Table: Business Continuity Risk Dimension Questions and Sample Certainty Rating Weights. Each question's certainty rating is assigned a weight (0 = Not Applicable); the sample SaaS provider's business continuity risk score is 10/20 (50%).]
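The scoring table above survives only in fragments, but the general mechanics it illustrates (rating each question's certainty, converting the rating to a weight, and expressing the dimension total as a percentage of the maximum) can be sketched as follows. This is an illustrative sketch only: the specific rating-to-weight mapping and the exclusion of Not Applicable items are assumptions for demonstration, not the instrument's documented scheme.

```python
# Illustrative sketch of a weighted dimension score for the risk framework.
# ASSUMPTION: ratings 1-4 (1 = Very Certain ... 4 = Not at All Certain) map to
# descending weights, consistent with the recoverable fragment in which a
# rating of 1 earns the full weight of 10. A rating of 0 (Not Applicable)
# is excluded from both the earned and maximum totals. This mapping is
# hypothetical, not taken from the instrument.
WEIGHTS = {1: 10, 2: 7, 3: 3, 4: 0}

def dimension_score(ratings):
    """Return (earned, maximum, percent) for one risk dimension."""
    applicable = [r for r in ratings if r != 0]   # drop Not Applicable items
    earned = sum(WEIGHTS[r] for r in applicable)  # weight earned per question
    maximum = 10 * len(applicable)                # best possible weight per item
    percent = 100 * earned / maximum if maximum else 0.0
    return earned, maximum, percent

# Example: five applicable items and one Not Applicable item.
earned, maximum, pct = dimension_score([1, 2, 4, 1, 3, 0])
print(earned, maximum, round(pct))  # 30 50 60
```

Under this sketch, a higher percentage indicates greater certainty about the provider (and hence lower assessed risk) for that dimension; the per-item weights could be tuned to reflect an organization's own priorities.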
Access Risk: Concerns regarding who has access to data, the type of access, and provisions to prevent unauthorized access.
Integrity/Confidentiality Risk: Concerns regarding the privacy and protection of data while it is stored, retrieved, and transferred.
Transmission Risk: Concerns regarding encrypting and safeguarding data while it is being transmitted electronically over the Internet.
Data Location Risk: Concerns regarding where data is physically stored (in-country or out-of-country) and whether data is protected under local laws.
Data Segregation Risk: Concerns regarding multi-tenancy provisions to ensure that each SaaS client's data and usage are kept separate from other clients'.
Compliance Risk: Concerns regarding the client's ability to meet certain legal reporting and operational requirements and the provider's ability to meet accreditation requirements.
Recovery Risk: Concerns regarding the provider's recoverability in the event of a disaster and its contingency plans.
Scalability Risk: Concerns regarding the flexibility of the SaaS solution to accommodate increases in usage and storage.
Training Risk: Concerns regarding the availability of training for clients on using the SaaS solution.
Testing Risk: Concerns regarding provisions allowing for client testing of the SaaS solution.
Upgrade Risk: Concerns regarding the client's receiving timely notification of system upgrades and upgrades being conducted during non-peak usage hours.
Support Risk: Concerns regarding the availability of phone, email, and/or web-based trouble ticket support.
Pricing Risk: Concerns regarding penalties, subscription price increases, and pricing related to training, customization, and other tie-in services.
Provider Management Risk: Concerns regarding client personnel responsible for managing the service relationship with the provider.
Customization Risk: Concerns regarding the level and type of customization allowed by the provider and the availability of tools for making the customization.
Compatibility Risk: Concerns regarding the ability of the SaaS solution to integrate with internal systems if necessary and the provider's ability to support integration.
Functionality Risk: Concerns regarding the ability of the SaaS solution to meet established functional requirements set by the client.
Reporting Risk: Concerns regarding the availability of adequate reporting functions in the SaaS solution.