4/22/2019 Skillsoft Course Transcript

Course Transcript

Cloud System Architecture – Concepts and


Design
Cloud Computer Concepts
1. Course Introduction

2. Cloud Computing Definitions

3. Cloud Computing Participants

4. Cloud Computing Characteristics

5. Cloud Computing Infrastructure

Cloud System Architecture


1. Cloud Computing Activities

2. Cloud Computing Service Capabilities

3. Cloud Service Types (the Cloud Stack)

4. Cloud Deployment Models

5. Cloud Cross-cutting Aspects

Cloud System Security


1. Cryptography - Asset Security

2. Asset Access Control

3. Asset Removal and Storage Media Sanitization

4. Cloud Network Security

5. Security in the Virtualized Environment

6. Infrastructure and Data Threats

7. Platform-specific Security

Cloud Security Design


1. Cloud - Data Life Cycle

2. Cloud Service Continuity

3. Cloud Service Investment

4. Cloud Functional Security

Trusting Cloud Services


1. Cloud Service Certification Assessment

https://cdnlibrary.skillport.com/courseware/Content/cca/cl_csip_a01_it_enus/output/html/course_transcript.html 1/41

2. Product Certification

Practice: Cloud System Architecture


1. Exercise: Specifying Architectural Security


Course Introduction
Learning Objective
After completing this topic, you should be able to
start the course

1.
Cloud services vary in size and complexity, and the deployed architecture directly impacts service
and data asset security. Hi, I'm Dan Lachance and in this course I'll explain aspects of cloud
computing architectural design and I will define the associated cloud systems and cloud
components. In addition to the Cloud Reference Architecture, I'll also talk about cloud security,
networking, and data encryption. The discussion will then get into aspects of cloud interoperability,
trusted cloud services, cloud system management, and operational considerations.


Cloud Computing Definitions


Learning Objective
After completing this topic, you should be able to
define and describe cloud components

1.
In this video, I'll go over cloud computing definitions. ISO/IEC publication 17788 deals with cloud
computing in terms of an overview and vocabulary. ISO is the International Organization for
Standardization. And IEC is the International Electrotechnical Commission. These bodies publish a
number of documents that have standards and best practices including those related to security and
cloud computing. Document 17788 provides definitions of common cloud computing terms and a
cloud computing overview that relates to cloud characteristics, capability types, cloud service
categories, deployment models, and so on. Publication 17788 has supporting standards, such as
ISO/IEC 20000-1, which deals with information technology service management. It is also
supported by definitions developed by the National Institute of Standards and Technology, or NIST,
which also has a number of publications such as SP 800-145.

Special Publication 800-145 deals with The NIST Definition of Cloud Computing. And it's available for
everyone on the Internet to look at. If we scroll down through this online documentation, we'll
eventually come across cloud computing characteristics such as on-demand self-service, broad
network access, resource pooling, rapid elasticity, and so on. So, in summary, publication 17788
deals with cloud definition and abbreviations. It defines six cloud characteristics, four deployment
models, and three cloud capability types. Some definitions from that publication include cloud
application portability. This is defined as the ability for a cloud customer to migrate a service or an
application from one cloud service provider to another. This way, we're not tied in or locked in to a
specific vendor.

Cloud capabilities type is a classification of the functionality that gets provided by a specific cloud
service to the customer such as data archiving in the cloud versus provisioning new virtual machines
in the cloud. A measured service is a metered delivery of a cloud service to the customer where the
usage gets monitored and the customer pays for what they use – much like a utility such as water or
electricity. Multitenancy is defined as having multiple customers that are allocated compute resources
by the cloud service provider, but their computations and data are kept isolated from one another. In
this video, we discussed cloud computing definitions.
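The measured-service model described above works like a utility meter. Here is a minimal sketch of pay-per-use billing; the resource names and per-unit rates are hypothetical, not any provider's actual pricing:

```python
# Hypothetical per-unit rates; real providers publish their own price lists.
RATES = {
    "vm_hours": 0.05,      # dollars per virtual-machine hour
    "storage_gb": 0.02,    # dollars per GB-month of storage
}

def metered_bill(usage):
    """Charge only for what was actually consumed (pay for what you use)."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# A customer who ran a VM for 100 hours and stored 50 GB this month:
print(metered_bill({"vm_hours": 100, "storage_gb": 50}))  # 6.0
```

The key property is that an idle customer (empty usage) is billed nothing, unlike a fixed capital purchase.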


Cloud Computing Participants


Learning Objective
After completing this topic, you should be able to
define cloud system participants: consumers, providers, partners, auditors,
regulators

1.
In this video, I'll discuss cloud computing participants. There are many parties involved in order to
make a cloud service available to customers. The first is the Cloud Service Provider. According to
ISO/IEC publication 17788, the provider is the party that makes cloud services available to the cloud
service customer. Now some cloud providers have a small number of services they offer where
others have a wide array. The cloud service customer is the party that is in the business relationship
with the purpose of using or consuming those cloud services from the provider. The cloud service
user is a person, or an entity acting on their behalf, associated with the cloud service customer,
that consumes cloud services – for example, an employee of the customer organization. The cloud
service customer could be an individual user over the Internet or it could be an
organization or it could actually be a single department within an organization.

A cloud service partner is a party that is engaged in the support of activities related to the Cloud
Service Provider or the cloud service customer or in some cases, both. An example of a cloud
service partner might be one that offers network links directly from an on-premises customer network
to a cloud provider network for the purposes of increased network throughput and security through
isolation. Using this kind of a link through a cloud service partner to link those two networks would
bypass Internet connectivity. Cloud auditors conduct audits against cloud services. Cloud Service
Providers must undergo third-party audits to remain compliant with specific standards such as PCI
DSS for the retail industry – locations that accept customer credit cards and debit cards. A cloud
broker is a service partner that negotiates relationships between cloud service customers and other
Cloud Service Providers – much like a mortgage broker works on your behalf if you're looking for a
mortgage for a home and talks to various lenders.

There is a distinction between the roles that apply at the cloud service partner level, the cloud service
customer level, and the Cloud Service Provider level. For example, at the cloud service partner level,
we have developers that might be engaged, we have auditors and service brokers. And that might be
done as in our example of having a network provider that allows direct links between an on-premises
customer network and a cloud provider network. The Cloud Service Provider in turn has a number of
different roles that must be fulfilled such as a service operations manager, customer support and care
representatives, cloud service deployment managers, and so on. At the cloud service customer level,
of course, we've got the users that consume cloud services. In some cases, it might not be users, but
could be applications or web services that consume cloud services. In this video, we discussed cloud
computing participants.


Cloud Computing Characteristics


Learning Objective
After completing this topic, you should be able to
outline the operational characteristics of cloud computing

1.
In this video, I'll talk about cloud computing characteristics. There are a number of characteristics that
must be in place in order to have a cloud computing environment. According to ISO/IEC publication
17788, the characteristics – of which there are six – include broad network access, on-demand self-
service, multitenant capability, resource pooling, rapid elasticity and scalability, and finally measured
service. Let's dive into each of these in more detail. Broad network access means that cloud services
are available over a network such as the Internet or – in the case of a private cloud – it could be a
local area network. And they could be accessed through standard mechanisms. Whether we're using
a mobile phone, a laptop, a desktop, running the Linux or Windows operating systems, it doesn't
matter. Another aspect of broad network access is that the IT service should be accessible at low
cost, ideally over a high-bandwidth communication link.

The next characteristic is on-demand self-service. This means that cloud service customers can
provision additional computing capabilities as needed. For example, in the case of using e-mail in the
cloud, an organization could very quickly provision new e-mail accounts for new employees that get
hired without worrying about ordering a server or acquiring additional licenses and so on. Another
key aspect to on-demand self-service is that consumers can provision the resources that they need
whether it would be e-mail, storage space, virtual machine instances, database instances, and so on
without first going through the Cloud Service Provider. Often the Cloud Service Provider will have a
web-based portal through which customers can provision new resources as they need them. The
other thing about on-demand and self-service is that billing is usually on a monthly basis via a
monthly subscription fee and also based on "pay for what you use" – much like electricity or water.
Cloud Service Providers that offer this type of on-demand self-service include Amazon Web
Services, Microsoft, Google, IBM, and Salesforce.
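The self-service idea can be sketched as a tiny portal object that the customer drives directly, with no provider interaction; the class and resource names here are hypothetical illustrations:

```python
# Minimal sketch of a self-service portal: the customer provisions and
# deprovisions resources directly, without contacting the provider first.
class SelfServicePortal:
    def __init__(self):
        self.resources = {}          # resource name -> quantity provisioned

    def provision(self, resource, quantity):
        self.resources[resource] = self.resources.get(resource, 0) + quantity

    def deprovision(self, resource, quantity):
        remaining = self.resources.get(resource, 0) - quantity
        self.resources[resource] = max(remaining, 0)

portal = SelfServicePortal()
portal.provision("email_accounts", 25)   # e.g. 25 new hires, no server ordered
portal.provision("storage_gb", 500)
portal.deprovision("storage_gb", 200)    # scale back down when not needed
print(portal.resources)  # {'email_accounts': 25, 'storage_gb': 300}
```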

Here, on the Amazon Web Services page – once a customer has signed into their account – all of the
various cloud services offered by the provider are available and the cloud customer can provision
them as needed without talking first to the Cloud Service Provider. Now of course, depending upon
the service that the customer provisions, they may incur additional charges. For example, I could
click here – on EC2 – to launch new virtual server instances in the cloud or I could make a new S3
storage bucket for storing data in the cloud and so on. So it's done on demand and it's self-service
because the customer decides when they want to scale up with it or scale down if they want to
deprovision those resources. The next cloud computing characteristic is multitenancy. This allows
multiple customers to share the same applications or the same physical infrastructure. However,
there is isolation kept between those customers. For example, multiple customers might use the
same e-mail service in the cloud, but each tenant or cloud customer might have a different instance
of that application where they have their own customized settings, their own mailboxes, and so on.

[The presenter is logged in to the AWS console web page. Running along the top of the web page is
a menu bar. The menu bar includes the Directory Service menu option and three drop-down menus,
namely AWS, Services, and Edit. The menu bar also includes an icon for AWS on the left-hand side
of the AWS drop-down menu. The page shows the title Amazon Web Services on the left and
consists of three columns. The first column includes sections such as Compute, Storage & Content
Delivery, and Database. The second column includes sections such as Administration & Security and
Deployment & Management. The third column includes sections such as Application Services and
Mobile Services. The section named Compute includes links named EC2, Lambda, and EC2
Container Service. The section named Storage & Content Delivery includes links named S3, Elastic
File System, Storage Gateway, Glacier, and CloudFront.]

Client data isolation is a major security concern for multitenant application services. So there are
ways that this can be done even at the customer level where they could provision their own virtual
networks in the cloud, they could enable encryption at the network or storage level, and so on.
Another characteristic that identifies cloud computing is resource pooling. This means that multiple
customers are serviced from the same physical resources. So the cloud provider would have a great
deal of equipment in the way of network infrastructure, server computing power, physical disk storage, and so
on. However, at the software level, this gets provisioned accordingly as customers provision these
types of resources. The resource pool should be very large and flexible enough to service multiple
client requirements and to provide for economy of scale. And that gets passed on to the cloud
customer often where they only pay for what they use. And it's a lower operational cost compared to
a fixed capital cost if they were to buy their own equipment and run it on-premises. When it comes to
resource pooling, resource allocation must not impact application performance. Also, resources can
be located at multiple geographical locations. In some cases, customers can be permitted to select
the resource location themselves. Many cloud providers will even replicate information between
regions for high availability.
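Resource pooling can be sketched as one shared capacity pool serving several isolated tenants; the tenant names and capacity units below are hypothetical:

```python
# Sketch of a shared resource pool servicing multiple isolated tenants.
class ResourcePool:
    def __init__(self, total_capacity):
        self.total_capacity = total_capacity
        self.allocations = {}        # tenant -> capacity units allocated

    def allocate(self, tenant, units):
        if self.available() < units:
            raise RuntimeError("pool exhausted")
        self.allocations[tenant] = self.allocations.get(tenant, 0) + units

    def release(self, tenant):
        self.allocations.pop(tenant, None)   # tenant stops paying on release

    def available(self):
        return self.total_capacity - sum(self.allocations.values())

pool = ResourcePool(total_capacity=1000)
pool.allocate("tenant-a", 300)
pool.allocate("tenant-b", 450)       # same physical pool, separate allocations
print(pool.available())  # 250
```

Isolation here is only bookkeeping; a real provider also enforces it at the hypervisor, network, and storage layers as the transcript describes.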

Here, in Amazon Web Services, I can launch a new Virtual Server instance in the cloud. I'll just
choose one of the templates and I'll accept the first set of defaults. On step three of launching a new
instance, this is where I have a number of interesting options such as determining exactly where I
want this to be available. For example, geographically I can have this virtual server instance
provisioned in the us-west-2a availability zone, 2b, 2c, and so on. At the same time, further down below for
Tenancy, I can choose from Shared tenancy (multi-tenant hardware) or Dedicated tenancy (single-
tenant hardware) – hardware that is actually dedicated to a single cloud customer or tenant, which
would incur additional charges. Rapid elasticity and scalability is a key feature of cloud computing.
This is the cloud service's ability to expand or contract as the user decides that they need additional
compute resources or it could be automated. Now, if you compare this to what we used to do
previously – which would be to order hardware for computing, physically wait for it to be shipped –
once it arrives, unbox it, set it up, install an operating system, patch it, configure it, and so on. That
takes a lot longer than simply provisioning or deprovisioning resources as you need them in the
cloud.
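Trigger-based elasticity can be sketched as a simple threshold rule; the thresholds and instance limits below are hypothetical user-configured parameters:

```python
# Sketch of rapid elasticity: capacity expands or contracts when a monitored
# metric crosses user-configured thresholds.
def autoscale(instances, cpu_percent, scale_out_at=80, scale_in_at=20,
              min_instances=1, max_instances=10):
    if cpu_percent > scale_out_at and instances < max_instances:
        return instances + 1      # provision another instance
    if cpu_percent < scale_in_at and instances > min_instances:
        return instances - 1      # deprovision to stop paying for idle capacity
    return instances

print(autoscale(instances=3, cpu_percent=90))  # 4  (load spike: scale out)
print(autoscale(instances=3, cpu_percent=10))  # 2  (idle: scale in)
print(autoscale(instances=3, cpu_percent=50))  # 3  (within thresholds)
```

Compare this one-line decision to the weeks of ordering, unboxing, installing, and patching that physical hardware used to require.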

Resource provisioning in the cloud can be automated based on triggers or operational parameters
that users can configure. At any point in time, applications and features will only consume required
capacity on a needs basis. The last characteristic is measured service. This means that cloud
resource usage whether it's virtual server instances that are running or storage in the cloud – all of
this usage gets monitored by the Cloud Service Provider. Then it gets reported. The cost of
consumption is based on the utilization of those individual resources. So the cost model is based on
"pay for what you use". Many cloud providers will provide a web interface whereby – at any point in
time – you can see how much your monthly bill is currently based on the cloud resources that you've
been using. Often they'll also have a projected forecast value based on your current usage and also
a breakdown on the specific cloud services that you were using that happened to have incurred these
charges. In this video, we discussed cloud computing characteristics.
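The projected forecast mentioned above is typically a simple extrapolation of month-to-date charges; this sketch assumes straight-line daily usage, which real dashboards refine:

```python
# Sketch of a billing dashboard's "projected forecast": extrapolate
# month-to-date spend over the full billing period.
def projected_monthly_cost(cost_so_far, day_of_month, days_in_month=30):
    daily_rate = cost_so_far / day_of_month
    return round(daily_rate * days_in_month, 2)

# $42 of usage by day 10 projects to $126 for a 30-day month:
print(projected_monthly_cost(42.0, day_of_month=10))  # 126.0
```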


Cloud Computing Infrastructure


Learning Objective
After completing this topic, you should be able to
outline the supporting architectural components and infrastructure of cloud
computing

1.
In this video, I'll talk about cloud computing infrastructure. There are a number of building blocks that
must be in place before we've got a cloud computing environment. At the compute level, this would
include physical hosts on which we can run multiple virtual machines. Now, on a single physical host,
we might have virtual machines owned by separate cloud tenants or customers. At the network level,
the cloud provider must have the physical network cabling and equipment – such as switches,
routers, and so on – in place, on top of which a virtualized network infrastructure can be built whether
it's virtual LANs or, what some cloud providers would call, Virtual Private Clouds. Virtualizing
networks allows one cloud customer to keep their network traffic isolated from another cloud
customer. This is good from a performance as well as a security standpoint. The storage building
blocks consist of physical disk volumes that are pooled together into what are called shared storage
pools. This can then be used to provision storage space as cloud customers require it.

Services include items like back-end databases – whether it's a relational database like Oracle – or
directory services, like Active Directory, to store things like user accounts. This also includes
things like clustering and redundancy to make sure that these back-end services are highly available.
With clustering, if one node fails where service was running, the service is simply failed over to
another remaining node in the cluster. Often a public IP address that was associated with that service
moves along with the service when it fails over to another node. Redundancy can be achieved in a
number of ways, one of which is for the Cloud Service Provider to replicate data center information
between different geographical regions. There are a number of different types of IT workloads that
we might run in a cloud computing environment. But, regardless of whether it's a server-based
workload related to files or a database or users and e-mail – regardless of the workload type – we've
got to make sure that dynamic provisioning is available. This allows us, for example, to quickly add
additional storage or users to an e-mail system or to remove those items as required. This is often
done through a self-service portal available to the cloud customer.
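The clustering behavior described above can be sketched in a few lines: when the active node fails, the service and its public IP move to a surviving node. The node names and address are hypothetical:

```python
# Sketch of cluster failover: the service, and the public IP associated with
# it, move to a remaining node when the active node fails.
class Cluster:
    def __init__(self, nodes, public_ip):
        self.nodes = list(nodes)       # healthy nodes; first one is active
        self.public_ip = public_ip

    @property
    def active_node(self):
        return self.nodes[0]

    def fail_node(self, node):
        self.nodes.remove(node)
        if not self.nodes:
            raise RuntimeError("no surviving nodes: service outage")
        # The same public IP now answers on the new active node.

cluster = Cluster(["node1", "node2", "node3"], public_ip="203.0.113.10")
cluster.fail_node("node1")
print(cluster.active_node, cluster.public_ip)  # node2 203.0.113.10
```

Clients keep using the same address throughout, which is what makes the failover transparent.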

At the management level, we've got to think about virtual servers that we might launch in a cloud
environment. For example, if a development team has a project in which they must develop a new
application, they might want a testing environment. Well, instead of using physical equipment – which
could be costly and might not be needed at the termination of the project – instead, the development
team might launch virtual server instances in the cloud to test their application as they develop it.
Then they would deprovision those virtual servers when they are finished. So they are only paying for
what they use. The same is true for virtual storage in the cloud as well as building virtual networks.
It's possible for a single cloud customer to build more than one virtual network. For example, a single
customer might have a production network in the cloud where they might host a publicly visible
e-commerce website, but they might have a second virtual network in the cloud used for testing of new
applications built by developers. All of this is available to multiple cloud customers or tenants through
virtualization, which sits on top of the physical compute resources that make it all happen like
servers, disk storage, and networking components. In this video, we discussed cloud computing
infrastructure.
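The two-network scenario above can be sketched as an isolation rule: hosts communicate only within the same virtual network, never across networks. The network and host names are hypothetical:

```python
# Sketch of one customer's two isolated virtual networks.
class VirtualNetwork:
    def __init__(self, name):
        self.name = name
        self.hosts = set()

    def attach(self, host):
        self.hosts.add(host)

def can_communicate(net_a, host1, net_b, host2):
    # Isolation rule: traffic flows only between hosts on the same network.
    return net_a is net_b and host1 in net_a.hosts and host2 in net_b.hosts

production = VirtualNetwork("production")   # public e-commerce site
testing = VirtualNetwork("testing")         # developers' test environment
production.attach("web01")
production.attach("db01")
testing.attach("test01")

print(can_communicate(production, "web01", production, "db01"))  # True
print(can_communicate(production, "web01", testing, "test01"))   # False
```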


Cloud Computing Activities


Learning Objective
After completing this topic, you should be able to
detail Cloud Computing Activities with reference to ISO/IEC 17789, Clause 9

1.
In this video, I'll discuss cloud computing activities. ISO/IEC standard 17789 publishes information
about the Cloud Computing Reference Architecture, which is often shortened to CCRA. Having
virtualization that we can use to provision virtual machines as needed doesn't by itself constitute a
cloud computing environment. There are other factors. Part of that is encompassed by the CCRA, which
deals with cloud computing roles, cloud computing activities, and cloud computing functional
components. Standards publication 17789 states that the CCRA serves the following goals – to
describe the community of stakeholders related to cloud computing, to describe the fundamental
characteristics of cloud computing, and to specify cloud computing activities and the functional
components of which they are composed. These principles guide the design and evolution of the
Cloud Computing Reference Architecture.

The document goes on to further describe cloud computing systems using various viewpoints. There
are four of them. The user viewpoint deals with the parties, the roles, and the subroles as well as the
cloud computing activities related to a cloud environment. The functional view also falls within the
scope of the standard. The functional components together comprise cloud services that are offered
by a provider to a consumer. The implementation and deployment views at the cloud level do not fall
within the scope of the ISO/IEC 17789 standard. The user view of cloud computing deals with items such
as the roles, such as the Cloud Service Provider, the cloud service consumer, and the partner. These
are the various parties that have a stake in cloud computing. The cloud services are organized into
capability types and categories. For example, platform as a service would relate to developers that
can provision virtual machine and database instances and develop and test applications in the cloud.
The various cloud deployment models deal with things such as whether we're dealing with a public
cloud, a private cloud, a community cloud, or a hybrid cloud.

Cross-cutting aspects relate to behaviors in one cloud component that might
affect other components. Cross-cutting is related to loose coupling, whereby we have various
components in an overall IT solution that don't have a strict dependency upon one another. However,
sometimes it's unavoidable. Cloud computing activities can be defined as a specified pursuit or set of
tasks that have a purpose and deliver one or more outcomes and are conducted using functional
components. In Clause 8 of document 17789, there are cloud computing activities defined as
activities that use services. This would be the cloud services consumer or customer. Activities that
provide services – this would be from the Cloud Service Provider. And the activities that support the
services – this would be a cloud services partner. The stakeholders in the cloud service relationship
have a number of subroles. For example, at the Cloud Service Provider or CSP, we've got the
customer support and care representative subrole as well as the network provider subrole. The cloud
service partner will have roles related to auditing as well as brokering – what the client needs against
what the provider has as offerings.
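The loose coupling mentioned earlier can be illustrated with a tiny message-queue sketch: components interact through a queue rather than calling each other directly, so neither depends on the other's availability or implementation. The event names are hypothetical:

```python
# Sketch of loose coupling via a message queue: producer and consumer know
# only the queue, not each other.
from collections import deque

queue = deque()

def producer(event):
    queue.append(event)                         # publish and move on

def consumer():
    return queue.popleft() if queue else None   # process when ready

producer({"event": "vm_provisioned", "tenant": "tenant-a"})
print(consumer())  # {'event': 'vm_provisioned', 'tenant': 'tenant-a'}
```

Either side can be replaced or taken offline briefly without breaking the other, which is the property loose coupling is after.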

The subroles, for example, could deal with the customer service role or the cloud service
administrator on the customer side. This is kind of an evolution from having a traditional local server
administrator now being in control of cloud services for an organization. The cloud service
administrator's job would be to do things such as perform a service trial from one of more Cloud
Service Providers to find the best fit that suits the business need. The cloud service administrator's
job is also ongoing in the sense that they need to monitor the various cloud services being utilized for
performance, security, and also to ensure that they're cost effective. They'll also handle problem reports
and deal with the Cloud Service Provider to resolve the issues and also administer tenancies in the
cloud. In this video, we discussed cloud computing activities.


Cloud Computing Service Capabilities


Learning Objective
After completing this topic, you should be able to
define how cloud services are categorized based on supported services and
capabilities

1.
In this video, I'll talk about cloud computing service modes. Originally, cloud services were
categorized as what were known as types. The three that were defined by the NIST were IaaS –
infrastructure as a service, PaaS – platform as a service, and SaaS – software as a service. Now
what they're defined as are capability types and service categories. The three capability types are
infrastructure capability, platform capability, and application capability. According to the ISO 17788
standard's publication, infrastructure capability is a cloud capabilities type where the cloud service
customer can provision and use processing, storage, or networking resources. Now, not only can this
be provisioned when needed, but deprovisioned when no longer needed. Remember, in the cloud,
we often are using a payment model of "pay for what you use".

Platform capability is defined as a cloud capability type where the cloud service customer can deploy,
manage, and run customer-created or customer-acquired applications using one or more
programming languages and one or more execution environments supported by the Cloud Service
Provider. So platform capability then would be of great interest to developers. Infrastructure capability
would be of interest to cloud customers that require storage or virtual machines running in the cloud
or virtual private networks and so on. Application capability is defined as a cloud capability type in
which the cloud service customer can use the Cloud Service Provider's applications. So that wouldn't
be of interest to developers like platform capability would. With application capability, we've got end-
user apps that already exist and are provided by the Cloud Service Provider. An example of this
would be an e-mail application or a word processor that runs in the cloud or something that's
available to the public like Facebook or Twitter.

The thing to watch out for is that a specific Cloud Service Provider may use
different terminology. So infrastructure capability, for example, would turn out to be
things like launching Virtual Servers in the Cloud or using Scalable Storage in the Cloud that you can
provision and add more storage to as needed. In terms of platform capability for developers, we've
also got items that are designed to deploy code in the cloud for developers. For application capability,
we've got applications that users would interact with such as some kind of mail system in the cloud or
some kind of document management system and so on. In this video, we discussed cloud computing
service modes.
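The terminology mapping described above can be sketched as a small lookup from provider-specific service names to the three capability types. The service names below are the AWS console examples mentioned in this course, and the grouping is one plausible classification, not an official one:

```python
# Sketch: map provider-specific service names to the three ISO/IEC 17788
# cloud capability types. The grouping is illustrative, not authoritative.
CAPABILITY_TYPES = {
    "infrastructure": ["EC2", "S3", "Elastic File System"],
    "platform": ["CodeDeploy", "Lambda"],
    "application": ["WorkMail", "WorkDocs", "WorkSpaces"],
}

def capability_type(service):
    for ctype, services in CAPABILITY_TYPES.items():
        if service in services:
            return ctype
    return "unknown"

print(capability_type("EC2"))       # infrastructure
print(capability_type("WorkMail"))  # application
```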

[The presenter is logged in to the AWS console web page. Running along the top of the web page is
a menu bar. The menu bar includes the Directory Service menu option and three drop-down menus,
namely AWS, Services, and Edit. The menu bar also includes an icon for AWS on the left-hand side
of the AWS drop-down menu. The page shows the title Amazon Web Services on the left and
consists of three columns. The first column includes sections such as Compute, Storage & Content
Delivery, and Database. The second column includes sections such as Administration & Security,
Deployment & Management, and Analytics. The third column includes sections such as Application
Services, Mobile Services, and Enterprise Applications. The section titled Compute includes links
named EC2, Lambda, and EC2 Container Service. The section titled Storage & Content Delivery
includes links named S3, Elastic File System, Storage Gateway, Glacier, and CloudFront. The link
EC2 displays the description "Virtual Servers in the Cloud." The link S3 includes the description
"Scalable Storage in the Cloud." The presenter scrolls down the page and refers to the link
CodeDeploy under the section named Deployment & Management. The CodeDeploy link displays
the description "Automated Deployments." The section named Enterprise Application includes three
links named WorkSpaces, WorkDocs, and WorkMail. The link WorkSpaces displays the description
"Desktops in the Cloud," the link WorkDocs displays the description "Secure Enterprise Storage and
Sharing Service, " and the link WorkMail displays the description "Secure Email and Calendaring
Service."]


Cloud Service Types (the Cloud Stack)


Learning Objective
After completing this topic, you should be able to
describe the industry-defined standard categories of cloud computing

1.
In this video, I'll discuss cloud service models, now called cloud service categories. Originally, cloud
service models were defined by NIST, and they included infrastructure as a service, platform as a
service, and software as a service. Now both NIST and ISO publication 17788 define cloud service
categories as being a group of cloud services that possess some kind of common set of qualities.
Plus, it goes on to state that a cloud service category can include capabilities from one or more cloud
capability type. Let's dive into more detail that's related to cloud service categories beginning with
infrastructure as a service – IaaS. This provides access to cloud service consumers to compute
resources in a virtualized environment. This allows the customer to build platforms, data centers, web
site hosting, virtual network configurations, and so on. The resources include virtual server instances,
storage space, virtualized network configurations – including private and public IP addresses that can
be used – load balancers, and so on. Some characteristics of infrastructure as a service include
elastic provisioning. We can rapidly provision or deprovision compute resources that we require such
as storage space as needed.

Often providers will give customers a self-service interface where this can be done. And, in some
cases, it might even be automated. So an autoscaling option would allow these resources to be
provisioned or deprovisioned when certain thresholds are met. Often we have up to 100% uptime
guaranteed by the Cloud Service Provider. On the security side, confidentiality in the way of
server-side encryption of storage might be available from some Cloud Service Providers.
Authentication could take the form of user accounts that we build in the cloud that can then
authenticate and are authorized to use cloud services. And data integrity can also be used to verify
the data has not been tampered with. IaaS allows for rapid scalability. There is no upfront investment
in hardware. Instead, it's utility style costing where you pay for what you use. We have location
independence in the sense that as long as we've got an Internet connection, we could be anywhere
really using any type of computing device and gain access to our cloud computing services. Then we
have to consider the physical security of data center locations on the Cloud Service Provider side.
They must undergo third-party audits to maintain their various accreditations for security from
different bodies around the world. So the provider then manages the physical security of the data
centers as well as the security of the physical IT infrastructure.
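The autoscaling behavior described above can be sketched as a simple threshold check. The metric, thresholds, and instance limits below are hypothetical illustrations, not any provider's actual autoscaling API:

```python
def autoscale(current_instances, cpu_percent, scale_out_at=80, scale_in_at=20,
              min_instances=1, max_instances=10):
    """Return the desired instance count given a CPU utilization metric.

    A toy version of provider-side autoscaling logic: add capacity when
    the upper threshold is met, remove it when load drops off.
    """
    if cpu_percent >= scale_out_at and current_instances < max_instances:
        return current_instances + 1          # provision one more instance
    if cpu_percent <= scale_in_at and current_instances > min_instances:
        return current_instances - 1          # deprovision to save cost
    return current_instances                  # within thresholds: no change

print(autoscale(2, 85))  # high load: scale out to 3
print(autoscale(3, 10))  # low load: scale in to 2
```

Real autoscaling policies also include cooldown periods and multiple metrics, but the threshold-driven provision/deprovision decision is the core idea.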

One of the beauties as well is that there's no single point of failure. Cloud Service Providers make
sure that there's duplication in place even at the level of data centers where they might replicate data
between data centers. The next cloud service category is platform as a service – PaaS. This is of
interest to developers. It gives them a development environment that allows them to build web,
desktop, and web service applications. They can also test and deploy their applications using PaaS.
Characteristics include development tools that are constantly upgraded. This is on the provider side.
Yet, the customer can select the required tools. This type of service – PaaS – is provided on a
subscription basis with clients paying for what they use. PaaS offerings include operating system
instances that we could launch in the cloud, server-side scripting environments, database instances
that we might launch of various types. We might launch a Microsoft SQL server database instance or
a MySQL database or an Oracle database and so on. There's no upfront investment once again in
development hardware or infrastructure. It's utility-style costing: we pay for what we use. Some
Cloud Service Providers will allow you to use your own license depending on the type of software
that you're running in the cloud – for example, with the database instance. But, in other cases, you
might not be required to specify a license. It's part of what you get charged for when you launch, for
example, a database instance.
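Launching a managed database instance usually amounts to submitting a request like the following sketch. The function, field names, and engine list are hypothetical placeholders, not a real provider SDK:

```python
SUPPORTED_ENGINES = {"mysql", "postgresql", "oracle", "sqlserver"}

def build_db_launch_request(name, engine, storage_gb, bring_own_license=False):
    """Assemble a (hypothetical) launch request for a managed database instance.

    The provider runs the engine on its own equipment; licensing is either
    bundled into the usage charge or supplied by the customer (BYOL).
    """
    engine = engine.lower()
    if engine not in SUPPORTED_ENGINES:
        raise ValueError(f"unsupported engine: {engine}")
    return {
        "instance_name": name,
        "engine": engine,
        "storage_gb": storage_gb,
        "licensing": "bring-your-own" if bring_own_license else "included",
    }

req = build_db_launch_request("orders-db", "PostgreSQL", 100)
print(req["licensing"])  # included
```

A real PaaS request would also carry instance sizing, networking, and credential parameters, and the licensing options vary by provider and engine.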

Platform as a service will often give developers a team development environment. This is a
centralized repository of code that also tracks changes. And, of course, we have the security
paradigm built in with platform as a service. For example, various user accounts that are created in
the cloud would have different access to a code repository or to database instances that we've
launched. The next cloud service category is software as a service – SaaS. This is where consumers
access software applications over the Internet. Applications such as Google, Twitter, Facebook,
Imgur, and Dropbox to name just a few are considered software as a service. So we don't have
developers building this on our behalf. It's already built and provided by the Cloud Service Provider.
This is often rented on a per user or usage basis. So for example, depending on how many different
accounts we need for Dropbox storage, we might pay a fee. And certainly, depending upon the space
consumed, we pay another fee. Software as a service is supported by infrastructure as a service. For
example, with Dropbox – for storage – there needs to be some underlying storage mechanism.
Software as a service can also be created using platform as a service components.

SaaS offers minimized hardware costs because we don't have to physically purchase hardware,
which is sometimes a capital investment. There are minimal setup costs. We only pay for what we
use. The usage is scalable. For example, if we need more storage space through Dropbox or more
cloud e-mail accounts because we've hired new employees, we can provision them very quickly. And
then, when we don't need it, we can deprovision it because we don't want to pay for it if we're not
using it. Another benefit is the automated updates of applications. We don't have to worry about
applying the latest updates to productivity software that users use and making sure it functions
correctly. That is the responsibility of the Cloud Service Provider. We have mobility in the sense that
we can use a mobile smartphone, a tablet, desktop, laptop, and so on to connect to our SaaS
offerings. Some Cloud Service Providers will allow customers to brand applications to customize
them in some way for their business needs. There are also cloud service sub categories such as
communications as a service. This is real-time interaction and collaboration. There's compute as a
service, which provides processing resources that are required to deploy and run application
software. There's data storage as a service, security as a service, network as a service, and so on.
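The per-user and per-usage SaaS costing described above can be illustrated with a toy billing calculation. The rates are invented for the example, not any vendor's price list:

```python
def monthly_saas_charge(active_users, storage_gb,
                        per_user_rate=6.00, per_gb_rate=0.10):
    """Utility-style costing: pay only for seats and storage actually used.

    Rates here are made-up illustrative numbers.
    """
    return round(active_users * per_user_rate + storage_gb * per_gb_rate, 2)

print(monthly_saas_charge(25, 500))  # 25 seats + 500 GB of storage
```

The point is that deprovisioning a seat or releasing storage immediately stops the associated charge, which is what distinguishes this model from an upfront capital purchase.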

Identity as a service is fundamental because this is how our users would identify themselves to the
Cloud Service Provider, after which they would be authorized to use various cloud service offerings
such as cloud-based e-mail or a word processor. We've also got database as a service where the
installation and maintenance of the databases are performed by the provider. Most Cloud Service
Providers will have some kind of Identity & Access Management interface whereby we can provision
user accounts that need to get authorized to the provider before they're allowed to use resources.
And, after I've created a user, if I select the user, often I can put them in Groups. And I could even
attach policies either to the user or the group that gives them access to some aspect of the cloud
service offerings.

For platform as a service, most Cloud Service Providers will allow us to launch a database instance
of various types such as MySQL, PostgreSQL, Oracle, Microsoft SQL Server. And I can choose from
a subset of offerings that makes sense for my need. The beauty is that we don't have to worry about
acquiring the software. In some cases, we don't have to worry about the licensing. It runs on cloud
provider equipment. Now remember that some cloud service categories might include capabilities of
varying types. For example, network as a service includes infrastructure, platform, and software as
the service components. Whereas, if we look at something like platform as a service as a cloud
service category, it only relates to the cloud capability type platform. In this video, we discussed cloud
service models.


Cloud Deployment Models


Learning Objective
After completing this topic, you should be able to
describe the defined deployment models of the cloud services

1.
In this video, I'll discuss cloud deployment models. According to the ISO and NIST definitions, there
are four cloud deployment models. The first is a private cloud. This is where cloud services are
consumed exclusively by one cloud service customer or federated partners and resources are
controlled by that same cloud service customer. So, in other words, a private organization could have
their own private cloud running on their own equipment that they control. A public cloud is where
cloud services are available to and consumed by all cloud service customers such as customers over
the Internet whether they be individuals or organizations. However, the resources remain under
control of the Cloud Service Provider – resources such as physical servers, physical disk pools,
physical network components, and so on. A community cloud – the third cloud deployment model
type – is where cloud services are used exclusively by a specific collection of cloud service
customers that have the same needs. For example, some larger cloud providers offer government
cloud services in certain regions around the globe. So we could consider a government cloud to be a
community cloud where they have the same sets of security standards that they might need to abide
by.

Now bear in mind that a government cloud doesn't have to be hosted by a public cloud provider.
This could be under the complete control of the government or government agency. Therefore, it
would also be considered a private cloud. A Hybrid Cloud model, like the name implies, is a cloud
deployment model that uses at least two different cloud deployment models that we've already
mentioned. For example, an organization might have both a public and private cloud. So they might
have a private cloud running on equipment that they own and control whereby there's self-
provisioning and elastic provisioning of resources by users as required. However, at the same time,
that private cloud might also use public cloud services for firewalling or antivirus scanning, for
example. In this video, we discussed cloud deployment models.


Cloud Cross-cutting Aspects


Learning Objective
After completing this topic, you should be able to
describe the additional operational aspects of the cloud service environment

1.
In this video, I'll talk about cloud service-related facets. Cross-cutting refers to aspects of an
application or a system that can impact other parts of that same application or system. So then there
is an interdependency, and we have to think about these as related to cloud items such as protection
of PII – Personally Identifiable Information. Examples of PII would include things like e-mail
addresses, credit card numbers, medical information, social security numbers, and so on. There
could be regulations that dictate how application components work together. So for example, for
privacy concerns, the use of PII to authenticate users might be forbidden, or regulations might
stipulate that PII must be encrypted. One way to ensure the resiliency of data is to keep multiple
backups. A cross-cutting concern here is that data encrypted while it's being used may be backed up
in a form that isn't encrypted.

On the security side, the decision to use a public cloud service provider comes with the issue of
transmitting potentially sensitive information over the Internet, where previously it would have been
transmitted only internally. So we might take a look at things like transport-level encryption such as
using SSL to ensure the confidentiality of transmitted data. The Service Level Agreement, or SLA,
defines expected response times and uptimes for cloud-related services. So one aspect of this is that
if we have a component that's behaving negatively, it could negatively impact other cloud services as
well. For example, a problem with an authentication system running in the cloud could also translate
to problems when our web users coming in through a web site, for example, take a long time to
authenticate. Other areas of interest related to cross-cutting include auditability. This means that we
should be keeping audit logs related to the usage of sensitive information, and this might even be
required by laws or regulations. Now this might affect other components because, when we log
information, depending upon the degree to which we are doing that, it could impact the performance
of something or take more storage space.

Cloud governance is all about creating and applying policies to get the best performance, reliability,
and efficiency out of cloud services. In the case of software as a service for example, the
responsibility of software maintenance and versioning would fall on the cloud service provider and
not the cloud service consumer. So maintenance and versioning then, in some cases, would be the
responsibility of the consumer, and in other cases the responsibility of the provider. When it comes to
portability and reversibility, we want to make sure that we don't have vendor lock-in. Vendor lock-in
means that we are locked into a specific cloud vendor's solutions, and they might store that data in a
proprietary format so we don't have an easy way to extract it in the case that we wanted to switch to
another service provider. In this video, we discussed cloud service-related facets.
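One practical hedge against the vendor lock-in just mentioned is keeping exports of your data in open, self-describing formats. A minimal sketch using only Python's standard library (the record fields are hypothetical):

```python
import csv
import io
import json

records = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": "b@example.com"},
]

# Open formats like JSON and CSV can be imported by any other provider,
# unlike a proprietary storage format controlled by one vendor.
as_json = json.dumps(records, indent=2)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "email"])
writer.writeheader()
writer.writerows(records)
as_csv = buf.getvalue()

print(as_csv.splitlines()[0])  # id,email
```

Scheduling exports like this, and verifying they can be re-imported elsewhere, is part of a reversibility plan.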


Cryptography - Asset Security


Learning Objective
After completing this topic, you should be able to
describe the encryption of cloud-hosted assets

1.
In this video, I'll discuss cryptography. Cryptography is the science of preventing unauthorized access
to private information. The main objectives include confidentiality, whereby information cannot be
understood by anyone other than those for whom it was intended. Authorized parties, for example,
would have a decryption key so they could decrypt the data. With integrity, stored or transmitted data
that gets altered can be detected. Now when we detect a change, it either might have been
intentional or unintentional. Either way, it's one way that we know that data has changed from one
point in time to another. Nonrepudiation deals with the fact that a creator or sender of information
can't deny in the future their explicit intention in creating or transmitting the information. For example,
if a user digitally signs an e-mail message and sends it to somebody, they can't refute the fact that
they sent it because the digital signature is built with a private key that only that user has possession
of. Authentication means proving of one's identity whether one is a person, a web service for
example, or a device. So with authentication, the sender and the receiver can confirm each other's
identity along with the origin and destination of transmitted information. These all fall under the
umbrella of cryptography.
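The integrity objective can be demonstrated with a cryptographic hash: any alteration to stored or transmitted data, intentional or not, produces a different digest. A minimal sketch:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest; any change to the data produces a different value."""
    return hashlib.sha256(data).hexdigest()

original = b"wire transfer: $100 to account 1234"
stored_digest = digest(original)          # recorded at one point in time

tampered = b"wire transfer: $900 to account 1234"
print(digest(original) == stored_digest)  # True - data is unaltered
print(digest(tampered) == stored_digest)  # False - change detected
```

Note that a bare hash provides detection only: confidentiality still requires encryption, and nonrepudiation requires a digital signature made with a sender's private key.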

Cloud encryption prevents unauthorized access to stored and transmitted private information.
The specific cloud service you're using from a cloud service provider determines whether or not
encryption is supported – both at the storage and transmission levels. Encryption uses
a mathematical algorithm to transform plain text into a nonreadable form known as ciphertext. Now
the reverse process of decrypting that ciphertext decodes the information back to plain text. All these
algorithms will require some kind of a secret value known as a key which is used to encrypt or
decrypt the data. Now that's not to say that we always use the exact same key to both encrypt and
decrypt. In some cases, such as with asymmetric encryption algorithms – asymmetry being different
– we have two different keys that are used. They're related, but they are not the same. So for
example, a public key will be used to encrypt data and the mathematically related private key would
be used to decrypt it. Now in the case of cloud service providers, often public keys can be stored with
the cloud provider if you're supplying your own. But the private key is stored by you.

Cryptography looks at data states such as data in use which is data in process. Data at rest, which is
passive data that's being stored. And data in motion, which is data being transmitted. So data in use
then would deal with something like a database where records are constantly being updated or
spreadsheets that people are updating on a constant basis. Data at rest is data that's not active. So it
might be stored in a data warehouse or some kind of an archive. Data in motion is data that is
traversing a network or temporarily residing in computer memory. Now the issue here is do we
encrypt, for example, data in use, at rest, and in motion for confidentiality? And the answer will really
depend upon your needs and in some cases regulatory compliance. Certainly, we can encrypt data
that's being used, data at rest and data in motion.

We can protect data in motion by using encryption tunnels or virtual private networks. We can also
use network session encryption or higher level specific encryption mechanisms like Secure Shell or
SSH. This is used to securely make a connection to a Linux host or a piece of network equipment in
a secured manner. The connection is encrypted. We could also use Secure Sockets Layer, for
example, with a web site to ensure that data is encrypted and kept private. With data at rest, we can
use various encryption algorithms such as AES and RSA (SHA-256, by contrast, is a hashing algorithm used for integrity checks). We might even use
database cryptographic function specific to a database provider like Oracle for example. We could
also use storage cryptography, which might be provided by the cloud service provider or it could even
be built into an operating system that you're running in a virtual machine instance. For example, you
might decide to use BitLocker Drive Encryption in a Windows virtual machine instance that's running
in the cloud.

We also have the option of scheduling the update of encryption keys. Encryption keys will often be
changed for security purposes. There should also be auditing in place. Auditing makes sure that the
appropriate parties have the keys that they need and that they're using those keys appropriately. We
also have the option of considering minimizing our storage of sensitive and classified data where
possible. Again, in some cases due to regulatory compliance, we might have to force encryption for
data at rest. Key management deals with key ownership. Now the management side of the keys
would include the issuing of keys, what to do once they expire, the revoking of keys if there's been
some kind of a security compromise, and so on. In the cloud, key management can be a shared
model – a consumer provider model or a scenario where keys are maintained exclusively by the
cloud service provider.
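A scheduled key-update policy like the one described can be reduced to an age check. The 90-day window below is a placeholder; actual rotation intervals should come from your compliance requirements and guidance such as NIST SP 800-57:

```python
from datetime import datetime, timedelta, timezone

def key_needs_rotation(created_at, max_age_days=90, now=None):
    """Flag a key whose age exceeds the rotation policy.

    The max_age_days default is an illustrative placeholder, not a
    recommendation for any specific environment.
    """
    now = now or datetime.now(timezone.utc)
    return now - created_at > timedelta(days=max_age_days)

now = datetime(2019, 4, 22, tzinfo=timezone.utc)
fresh = datetime(2019, 4, 1, tzinfo=timezone.utc)
stale = datetime(2019, 1, 1, tzinfo=timezone.utc)
print(key_needs_rotation(fresh, now=now))  # False - only 21 days old
print(key_needs_rotation(stale, now=now))  # True - past the 90-day window
```

An audit job could run a check like this against the key inventory and report any keys overdue for rotation.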

For example, if we want to use encryption to connect to a virtual machine instance in the cloud, the
cloud service provider might generate key pairs for us. They would store the public key, we would
store the related private key, both would be required to make a connection for example to a virtual
machine instance. The cloud category can also have an impact on key ownership and management.
So, if we're dealing with infrastructure where we're dealing with things like networking, the keys could
be used differently than they would be for example if we're using software as a service. Multitenant
cloud environments definitely add complexity to key management because the number of keys could run into the tens of
thousands. Keys can be assigned per customer or shared group keys can even be deployed. This is
covered in the NIST Special Publication 800-57 Part 1 and Part 2. In this video, we discussed
cryptography.


Asset Access Control


Learning Objective
After completing this topic, you should be able to
define access and access control to cloud-hosted assets (data, files, and resources)

1.
In this video, I'll discuss asset access control. The goal of access control is to identify users, services,
or devices and limit their access to resources. So we can use Identity and Access Management or
IAM where we might build users and groups and determine what resources they have access to. This
will allow us to secure access to cloud-hosted applications, resources, and data assets. We can build
Unified Access Control policies for both cloud as well as local network applications that we still might
be running on-premises. This ensures an efficient management of identity creation and destruction
such as the creation of user accounts and the removal of them when no longer required. Another
goal of access control is to avoid the use of multiple authentication mechanisms, specifically multiple
identity stores. We don't want to have a directory service that contains user credentials – on-
premises as well as with each specific cloud application. Instead, we want a single unified identity
provider, so that's easier for management and it scales well. And after people are authenticated to
that centralized identity provider, then they would be authorized to use a wide array of services.

Access to resources can be managed via federation. Federation allows us to use a central identity
store that is trusted by multiple applications. This way we avoid having multiple identity stores. It also
lends itself to Single Sign-On or SSO. What this means is that the user isn't prompted to
authenticate every time they access a cloud service: as long as they've authenticated once and their
session hasn't timed out, they'll be seamlessly taken into what they're trying to access. This can also
be done via the cloud service provider management solution. Federation uses what are called claims.
A claim is basically a statement about something or someone. For example, for a user that
authenticates, a claim might be such that this user has this specific e-mail address, they are a
member of groups 1 and 2, and here is their date of birth. Those would be claims. Now claims-based
identity can be integrated with existing internal security frameworks. For example, if we're using
Active Directory on our network, then we could integrate it with the identity management solution
provided by a cloud service provider so that claims can be constructed. And this is relevant because
those claims are consumed in some cases by cloud applications. So claims or applications can
authenticate users inside or even outside of a corporate firewall. It's not limited to a single
organization.

Federation utilizes a Security Token Service or STS to create a SAML ticket to present to a token-
aware cloud-based application. Think of it, kind of, like going to an amusement park and being issued
a ticket for a ride and then presenting that ticket when you want to get on the ride. That's the same
type of thing here. Once you get your SAML ticket issued, you then present it to various applications.
And through Single Sign-On, you will not be prompted to authenticate again. Instead, they authorize
you to use those applications. Cloud providers maintain the security token service to allow claim-
based access to their applications after a trust relationship is established with your in-house identity
system – for example, Microsoft Active Directory. So, there's some procedure and it may vary from
cloud provider to cloud provider, whereby the cloud provider needs to trust your list of user accounts
if you've got it on-premises. This could be done by supplying a public key, if you're using PKI – Public
Key Infrastructure – or you might install a specified agent on your Active Directory server, for
example, so that it is trusted by the cloud provider.

Single Sign-On and Single Sign-Off allow the user to log in once to gain access to one or more
software systems without being prompted to log in again. This is often a result of some kind of token
or a ticket exchange using SAML – the Security Assertion Markup Language. Most identity federation
solutions will support the SAML standard. Single Sign-On and Sign-Off also support the
enforcement of enterprise authentication and authorization policies. Access control could be the
responsibility of the cloud service consumer or the cloud service provider or both. In some cases, a
cloud service consumer or customer might not have user accounts and groups created on-premises,
such as in Active Directory. They might completely rely on the cloud service provider solution so they
would register user accounts entirely in the cloud. Now we do have the option of integrating this type
of IAM solution. IAM stands for Identity and Access Management with something we have on-
premises like Active Directory.
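The STS token exchange described here can be sketched with a signed claims token. Real federation uses SAML XML assertions signed with asymmetric keys; the HMAC shared secret below is a deliberate simplification to show the issue-and-verify pattern:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-sts-secret"  # stand-in for the STS signing key

def issue_token(claims: dict) -> str:
    """Identity provider STS: serialize the claims and append a signature."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str) -> dict:
    """Relying party: check the signature before trusting the claims."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("token signature invalid")
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_token({"email": "alice@example.com", "groups": ["group1", "group2"]})
claims = verify_token(token)
print(claims["groups"])  # ['group1', 'group2']
```

The relying party never sees the user's password; it trusts the claims because it can verify the token came from the identity provider it trusts.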

This way, we could define policies related to password change, password complexity architecture,
and so on. We should also consider secure data deletion when we change different cloud service
providers. We might decide, for example, to end our business relationship with one cloud provider
and move to another. We want to make sure that if we had our user accounts stored with the other
cloud provider or replicated from our on-premises systems, we want to make sure that those are
securely removed. We should also consider general security attack and security breaches that might
take place. For example, we might decide that we're going to enable multifactor authentication for our
users so that it's harder to crack from a hacker perspective. Many cloud service providers will allow
you to integrate a Directory Service in the cloud with your on-premises directory service or to
completely host a directory service entirely in the cloud. For example, here we could connect to our
Active Directory on-premises environment or we could create an Active Directory configuration
entirely in the cloud. Now at the same time, if we just want a simple list of user accounts, we might go
instead into an Identity and Access Management solution offered by the provider where we could
build users, we could build groups and add users to the groups, and build policies to control what is
allowed to be done by the users.
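The users, groups, and policies model just described can be sketched as a small evaluation function. The policy format and deny-wins rule here are illustrative, not any provider's exact semantics:

```python
def is_allowed(user, action, user_groups, group_policies, user_policies):
    """Evaluate whether a user may perform an action.

    A toy IAM model: policies attach to users or to their groups, and an
    explicit deny anywhere wins over any allow.
    """
    policies = list(user_policies.get(user, []))
    for group in user_groups.get(user, []):
        policies.extend(group_policies.get(group, []))
    effects = {effect for act, effect in policies if act == action}
    if "deny" in effects:          # explicit deny always wins
        return False
    return "allow" in effects      # otherwise at least one allow is needed

user_groups = {"alice": ["devs"]}
group_policies = {"devs": [("s3:read", "allow")]}
user_policies = {"alice": [("s3:delete", "deny")]}

print(is_allowed("alice", "s3:read", user_groups, group_policies, user_policies))    # True
print(is_allowed("alice", "s3:delete", user_groups, group_policies, user_policies))  # False
```

Note the default is implicit deny: an action with no matching policy at all is refused, which mirrors how most cloud IAM systems behave.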

[The presenter is logged in to the AWS console web page. Running along the top of the web page is
a menu bar. The menu bar includes the Directory Service menu option and three drop-down menus,
namely AWS, Services, and Edit. The menu bar also includes an icon for AWS on the left-hand side
of the AWS drop-down menu. The page shows the title Amazon Web Services on the left and
consists of three columns. The first column includes sections such as Compute, Storage & Content
Delivery, and Database. The second column includes sections such as Administration & Security,
Deployment & Management, and Analytics. The third column includes sections such as Application
Services, Mobile Services, and Enterprise Applications. The Administration & Security section
includes many links, some of which are Directory Service and Identity & Access Management. The
presenter clicks the link Directory Service under the section Administration & Security. The AWS
Directory Service page opens. The page includes a button named Get Started Now. Below the button
are three icons named Create Directories, Connect to the Cloud, and Manage Access. The presenter
clicks the button Get Started Now and the page displays information on Directory Setup. On the left-
hand side of the page, three tabs are displayed, namely Step 1: Directory Type, Step 2: Directory
Details, and Step 3: Review. The tab Step 1: Directory Type is chosen by default and the area on the
right-hand side displays information related to this tab. The area on the right-hand side includes three
sections named Choose Directory Type, Create a Simple AD, and Connect using AD Connector. The
section named Create a Simple AD includes a button named Create Simple AD and the section
named Connect using AD Connector includes a button named Create AD Connector. The presenter
clicks the icon for AWS in the menu bar and returns to the Amazon Web Services home page. The
presenter clicks the link Identity & Access Management under the section Administration & Security.
The Identity and Access Management page opens. On the left-hand side, the Dashboard is displayed
and is selected by default. The area on the right-hand side displays information related to the
Dashboard. Below the Dashboard, there is a section named Details. This section includes many
subsections, some of which are Groups, Users, Roles, and Policies. The presenter clicks the
subsection Users and the area on the right-hand side now displays information related to the Users.
He then clicks the subsection Groups and the area to the right-hand side displays information related
to the Groups. The presenter clicks the subsection Policies and the area to the right-hand side
displays all the Policy Names.]

Let's take a look at the traffic flow that's related to identity federation. In step one, we see here on the
left, the user would log on to Active Directory and get their Kerberos ticket. In step two, the user
would then attempt to establish a session with a web app. In step number three, the app would need
a session or a token that is trusted. And, if it doesn't have one from the user, then it will redirect the
user station to the security token service of the relying party. Now the relying party is the party that
actually hosts apps and the relying party needs to trust an identity provider elsewhere. In step four,
the relying party's security token service would send a token request to the identity provider security
token service. Now this could be within a single organization or it could be between two different
organizations. For example, your on-premises Active Directory environment could be the identity
provider, whereas the relying party might be the cloud provider. Either way, in step four, the request is
sent to the identity provider, if the user wasn't already authenticated.

In step five, the security token service authenticates user information, in this case from Active
Directory, and then creates a SAML token. Now the SAML token is where the claims come in,
perhaps the application needs to see a date of birth or group membership or an email address or
anything like that. That means that we would have to configure our identity provider security token
service to put that claim information into the SAML token. In step six, the identity provider security
token service redirects the user back to the relying party security token service with the SAML token.
Now the relying party, remember, is where the application is hosted that the user is trying to get into
in the first place. In step seven, the security token service will redirect the user back to the app with
either a created session or a token. Finally, in the eighth step, the user is authenticated in the
application and can work with the content. Now the user will not see this happening and that happens
very quickly. In this video, we discussed asset access control.


Asset Removal and Storage Media Sanitization


Learning Objective
After completing this topic, you should be able to
outline asset and media management with respect to deletion/removal/overwrite on
a cloud platform

1.
In this video, I'll discuss asset removal and storage media sanitization. Sanitization in this context
deals with ensuring that there are no data remnants left behind when we remove data. So when a file
is deleted, we need to make sure that it stays deleted and really is, in fact, deleted and not
recoverable. When a data record in a database is deleted, same way, we need to make sure that it
stays deleted and cannot be recovered. But at the same time, what about when we transmit data?
We might want to make sure that if we're transmitting sensitive information like credit card numbers,
that they are fully deleted and not resident in memory when no longer needed. The cloud service
consumer must be aware of the cloud service provider's deletion policy and mechanisms. As a cloud
service consumer, you may not be responsible for the erasure of data. So nonsensitive data versus
sensitive data would have different needs in terms of sanitization. So one question we should ask
ourselves is how does the cloud service provider manage deletion. If this isn't posted in their
documentation, then we would talk to one of their customer service representatives to find out. Is the
deletion audited and are deletion audit reports available to us, the customer?

Media sanitization is the process where the data is irreversibly removed from the media or the media
itself is permanently destroyed. In some cases, it could be to the point where physical disks get
drilled or cut up. There are some issues that we should consider related to storage media
sanitization. As a cloud service consumer, you won't be responsible for the erasure or destruction of
the devices that previously stored your data because that physical equipment is not under your
control, it's under the control of the cloud service provider. But then think about what would happen if
we change our cloud service provider. How can we guarantee that our data doesn't still reside on the
old cloud service provider's hardware? In some cases, you might not be able to get that guarantee. It
depends on the provider. When a cloud service provider destroys, wipes, or discards a storage
device, how can you be sure that your private data does not still reside on the device?

There are utilities we can use to wipe disks. The standard old-school format tool only creates a
new file allocation table and root file system table; all previous data is still on the disk and
is recoverable. Some wiping utilities, however, are considered acceptable by various government
bodies and institutions. If your cloud service provider doesn't support secure sanitization, then
you might not be able to use that provider for some aspects of your needs.
There are some additional storage media sanitization issues: as a consumer you should recognize
that data categorization might also play a part in the cloud service provider's data and media
storage policies. They might have different levels of data storage. In other words, data gets
flagged with metadata, and those tags determine how it gets deleted or how long it
gets retained. We should also ask ourselves, does the cloud service provider utilize self-encrypting
drives or cryptographic erase?
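A minimal sketch of the overwrite idea behind such wiping utilities is shown below. It assumes a conventional magnetic disk; SSDs with wear leveling and copy-on-write file systems need dedicated sanitization tooling, so treat this as an illustration of the concept only:

```python
import os
import secrets
import tempfile

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before deleting it. A simplified
    sketch: real sanitization tools account for file system journals,
    SSD wear leveling, and other places data remnants can hide."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())                # force the write to disk
    os.remove(path)

# Demo on a throwaway file
fd, path = tempfile.mkstemp()
os.write(fd, b"credit card numbers")
os.close(fd)
overwrite_and_delete(path)
print(os.path.exists(path))  # False
```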

Cryptographic erase means that devices would use a randomly generated encryption key that is
stored separately from the encrypted data. Cryptographic erase itself is the process of knowingly
erasing the media that stored the encryption and decryption keys. Now this means that we don't have
a key to decrypt data anymore. So the ciphertext would remain on the media device. But it's
effectively sanitized because there are no keys that can decrypt it, not even available to us. So this
would have no value to a hacker and it is a form of sanitization. Even where a cloud service provider
might deploy cryptographic erase, legislation that affects our business might still require that a media
device is physically destroyed in order to be deemed acceptably sanitized. NIST has special
publication 800-88 revision 1, Guidelines for Media Sanitization, which deals with how data
gets removed so that there are no artifacts or data remnants that can be retrieved by other parties. In
this video, we discussed asset removal and storage media sanitization.
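The cryptographic erase idea can be sketched as follows. A toy XOR keystream stands in for a real cipher such as AES; do not use this construction in practice:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from the key using SHA-256 in counter mode.
    Toy construction for illustration only."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypts and decrypts (XOR is its own inverse)."""
    return bytes(a ^ b for a, b in zip(keystream(key, len(data)), data))

key = secrets.token_bytes(32)              # stored separately from the data
ciphertext = xor_cipher(key, b"customer records")

# Cryptographic erase: destroy the key. The ciphertext can remain on the
# storage media, but without the key nobody, including us, can decrypt it,
# so the media is effectively sanitized.
key = None
```

Self-encrypting drives apply the same principle in hardware: the drive erases its internal media encryption key, instantly rendering the entire disk's contents unreadable.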

Cloud Network Security


Learning Objective
After completing this topic, you should be able to
define issues and solutions relating to cloud network structures

1.
In this video, I'll talk about cloud network security. A lot of the network security mechanisms that we
might have employed with an on-premises network will also apply to cloud computing networks.
Network security includes local area networks, virtual local area networks or VLANs, wireless
networks, and of course internet security. In the end with network security, we have a couple of goals
– one of which is to prevent unauthorized access to the network and its systems as well as
preventing unauthorized access to data traversing over the network. Access to the network can be
controlled via single or multi-factor authentication. Single factor authentication would be, for example,
username and password. Even though it's two items, they both fall under the category of something
you know – you know your username and you know your password – hence single factor
authentication. So those could be guessed. However, multi-factor authentication is considered much
more secure. For example, besides a username and a password, we might require the use of a
physical smart card and we have to know the PIN to use the card. So something we know would be
the PIN. Something we physically must have would be the card itself, hence multi-factor
authentication.
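The "something you have" factor in multi-factor authentication is often a one-time code. A minimal time-based one-time password (TOTP) generator following RFC 6238 can be sketched as follows; this is an illustration, not a hardened implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password, the scheme used by
    authenticator apps to prove possession of a shared secret."""
    t = time.time() if at_time is None else at_time
    return hotp(key, int(t // step), digits)

# RFC 6238 test vector: shared secret "12345678901234567890" at time 59
print(totp(b"12345678901234567890", at_time=59, digits=8))  # 94287082
```

The server holds the same shared secret and accepts the code only within a short time window, so a stolen password alone is not enough to authenticate.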

Now we could use this to control access to the network in the first place. So before someone gains
access to the network itself, they would have to be successfully authenticated or we might use multi-
factor authentication only for a specific cloud service. 802.1X is a security standard that
supports the use of an authentication server such as RADIUS. This is a centralized server that various network
connectivity devices will forward authentication requests to. Now these types of network devices,
which are called RADIUS clients, would include things like VPN concentrators, network switches,
wireless access points, and so on. So, if we think for example about a VPN concentrator, it is
exposed to the public internet. People connect to it over the internet to establish the encrypted
tunnel. Now because it's exposed to the internet, we don't want the VPN concentrator itself doing
authentication. Instead, when a user tries to authenticate to the VPN over the internet, the VPN
concentrator would forward that authentication request to an internal-protected RADIUS
authentication server. So 802.1X is a security standard. It doesn't apply just to wireless, it also
applies to wired networks.

Some network vulnerabilities are related to packet sniffing. Packet sniffing is the act of
capturing network communication traffic, whether on a wireless or a wired network. From this, malicious users
could steal passwords or retrieve sensitive data if it was not encrypted. There are plenty of packet
sniffing tools available on the Internet for free, even for smartphones. Here I've
used the Wireshark application to capture network traffic. Each line in this output represents a single
packet where I can see the source and destination addresses. And when I select the packet, I can
see the packet headers in the middle and ultimately down at the bottom, I can see the data. One way
to mitigate packet sniffing on your network is to control very strictly who can connect to a network in
the first place even in the cloud. Another way to mitigate this issue is to use intrusion detection
sensing technology that can detect when capturing is occurring on a network. Man-in-the-middle
attacks alter the contents of data packets that traverse the network. This can also lead to session
hijacking, whereby once we modify the contents of captured data packets and send them on their way,
we can take over a session that was already in progress.

DNS attacks pollute the DNS database with rogue records that divert messages to rogue sites. So for
example, if a user connects to www.mybank.com, that's normally where they do their online banking,
a malicious user might have poisoned DNS so that mybank.com redirects the user to a web server
under their control. So it might be a website that looks like the mybank.com website, but really the hacker
set it up just to capture personal user information. We should also think about Wi-Fi configurations.
Wireless networks are inherently insecure because we are transmitting data through the air.
However, there are various security mechanisms that we can employ, such as controlling which
devices can connect to a wireless network and also encrypting traffic being transmitted through the
air. In the cloud there are also virtual network vulnerabilities such as rogue virtual machines. Now a
rogue virtual machine is a virtual machine that's not being controlled or managed and this often takes
the form of a user running a virtual machine, perhaps on their desktop either knowingly or
unknowingly. The problem with this is if that virtual machine is bridged to the physical network, it's
actually transmitting on the real network with everybody else. And so, if that virtual machine for
instance had a DHCP server running, it could hand out incorrect IP configuration information to
clients on the network.

Other vulnerabilities include IP or MAC address spoofing. Spoofing means forging. It's quite easy to
forge an IP or a MAC address, so the traffic looks like it came from particular location. In the cloud,
when we work with our virtual networks, we can build firewall rules that take a look at where a
request is coming from. And we just need to understand that that could have been forged. VLAN
hopping allows traffic to hop from one VLAN to another. A VLAN is like a local area network segment
but instead of it being physically separated, it can be done for example through switch ports or
through the IP address. And finally, traffic snooping or packet capturing can also compromise security
on a network. In this video, we discussed cloud network security.

Security in the Virtualized Environment


Learning Objective
After completing this topic, you should be able to
define issues and solutions relating to cloud virtualization infrastructures

1.
In this video, I'll talk about securing the virtualized environment. Cloud computing implies the use of
virtualization. But there are some potential security concerns. The Cloud Security Alliance or CSA
publishes a Security Guidance for Critical Areas of Focus in Cloud Computing. And Domain 13 of
that publication deals with virtualization security such as virtual machine guest operating systems
which should be hardened just as an operating system running on physical hardware should be.
Hardening means patching the operating system and applications, disabling unnecessary user
accounts or services, ensuring that there is a valid firewall and antivirus solution running in the guest
OS, and so on. Virtual machines are essentially collections of files for the most part. In some cases,
virtual machines might not use a virtual hard disk file, but instead might go directly to a
storage area network for their storage. However, virtual machines will always have at least a
configuration file. We need to make sure that these files aren't accessible by malicious users.

Virtual machine sprawl results in a loss of management. Sprawl results from the fact
that it's very easy to quickly provision new virtual machines in a cloud environment. As such, we might
end up over time with virtual machines still running that nobody needs anymore. So not only
are we incurring charges for these running in the cloud, but it also increases our attack surface. We
should only have virtual machine instances running that are required. We should also harden the
hypervisor operating system as well as guest operating systems. Shielded virtual machines allow
data center administrators to perform basic tasks against virtual machines such as starting, stopping,
perhaps creating a snapshot, or backing up virtual machines. But a shielded VM would not allow a
data center administrator to get into the contents of the virtual machines, such as virtual hard disk file
contents or processes running within the VM. Other areas of concern include VM security gaps. In
some cases, we have to be careful because virtual machines can be switched on and off simply
through software. We want to make sure that we've hardened the guest operating system properly to
make sure that this isn't simple. At the same time, when a virtual machine is turned off, a new threat
could be injected into one of the virtual machine files whether it's a startup file or whether it's the
configuration file or a hard disk file.

Virtual machines are essentially collections of files. And as such we should consider encrypting
virtual machine related files for additional security. The same thing could also apply to virtual machine
snapshots which could contain sensitive information. Virtual machines sometimes need to be isolated
from other virtual machines. Certainly this is true between different cloud tenants. We can segregate
client stored and transmitted data by launching virtual machine instances into a private virtual
network defined in the cloud. A single customer could also create multiple virtual private networks in
the cloud in which different virtual machines are launched. Virtual machines eventually will be
decommissioned or destroyed and we need to make sure that when that happens there's no residual
data that's left behind. So we would have to determine what practices the cloud service provider has
adopted when it comes to sanitization. Virtual machine images or configuration files can be tampered
with. We want to make sure that the file systems where these files reside are secured.

Virtual machines can also be migrated over the network between hosts. We want to make sure that
when that happens, that happens on a secured network. Often virtual machine migration can be
conducted on a network that is isolated from regular user traffic. The NIST Virtualization Security
Guide, DRAFT SP 800-125A, lists 22 security vulnerabilities related to the virtualization environment. It
also lists specific internal threats related to the hypervisor, to the virtual machines, to virtual network
interface cards, virtual networks, and so on. NIST publications are made available to the general
public over the Internet. Here we can see the document that's labeled Security Recommendations for
Hypervisor Deployment. In this video, we talked about securing the virtualized environment.

Infrastructure and Data Threats


Learning Objective
After completing this topic, you should be able to
list and describe known and common threats to cloud infrastructure and data assets

1.
In this video, I'll discuss infrastructure and data threats. Infrastructural threats are those associated
with physical data centers, including the items within them such as disk drives, servers, and
network equipment. There's also the concern of physical access to the data center itself. There should be
security mechanisms in place such as guards, logbooks, a minimized number of entry and exit points
physically, and so on. There are also concerns related to administrative access to systems and data
within a data center. Often rack mounted equipment within the data center is stored within a cage that
can be locked, especially where a data center would house equipment for multiple different
customers. It would be a security breach to store equipment in the same locked rack from two
different customers or more. We need to take proactive steps to protect data assets such as data
files, databases, configuration files, applications, virtual machines, services, and so on. Interceptions
occur when access to data is obtained illegally, such as hacking into a system to get to the file
system to retrieve unencrypted data or capturing traffic on a network.

Now capturing network traffic might not itself be illegal but what we do with the information that we
learn could be. Fabrication occurs when a counterfeit copy of a data asset gets created. There are data leakage prevention
or DLP tools available to minimize this possibility. For example, we might prevent people from
copying data or forwarding it through e-mail or printing it and so on. Interruptions occur when a data
asset is unavailable for some reason. It could be due to corruption and a nonworking backup or it
could be something like ransomware whereby malware encrypts our data files and unless we pay a
ransom we would not have a decryption key to decrypt that data. Modifications relate to data that is
altered. We might use various mechanisms such as file hashing to detect this. File hashing takes
input data, feeds it through a one-way algorithm that results in a unique hash. Now the next time we
run that hashing algorithm against that data, if the data was changed, it would result in a different
hash, so we know that something was changed. That change might have been made intentionally by a
malicious user or unintentionally. There are plenty of software threats including
misconfigurations. For example, enabling public access to a port on a server when it was only
supposed to be given to nodes on a private network.
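The file-hashing check described a moment ago can be sketched with Python's hashlib; the file name and contents below are just for illustration:

```python
import hashlib
import tempfile

def file_hash(path: str) -> str:
    """Compute a SHA-256 digest of a file, reading it in chunks so large
    files don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Record a baseline digest while the file is known-good. Recomputing it
# later and getting a different digest means the data was changed.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"quarterly-report")
    path = f.name

baseline = file_hash(path)
```

The baseline digests themselves must be stored where an attacker cannot modify them, otherwise a tampered file could be paired with a recomputed hash.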

Some default configurations also pose problems with some operating systems and also some
firmware such as with wireless routers. In some cases, a poorly written driver not only affects the
stability of an operating system but can also present a security hole. Patches need
to be applied to operating systems, drivers, application software, and so on. We need valid malware
scanners so that we can catch things like trapdoors. Trapdoors – often called backdoors – are secret
entry points into an application of some kind. Sometimes these are created by developers as they
test their code, and sometimes they are not removed before the code is released, as they should
be. A Trojan horse is a type of malware that looks benign, but is not. This might take the form
of downloading what appears to be a useful utility when in fact it's malware. Viruses attach
themselves to executable files. Worms are self-propagating malware that spreads throughout a
network on its own.

The Cloud Security Alliance, CSA, publishes The Notorious Nine. This highlights the most common
threats. Cloud Security Alliance publications are made freely available to anyone that is interested
over the Internet. The Notorious Nine publication is listed here and it's available for download. The
Notorious Nine focuses on data breaches, data loss, account hijacking, insecure application
programming interfaces, and so on. The European Union Agency for Network and Information
Security or ENISA also publishes a Threat Landscape document that highlights the top 15 threats. It
defines threat agents, attack vectors, and how the emerging threat landscape is evolving. In this
video, we discussed infrastructure and data threats.

Platform-specific Security
Learning Objective
After completing this topic, you should be able to
define security considerations and responsibilities on a per Cloud Model (Category)
basis – IaaS, PaaS, and SaaS, plus their various derivatives

1.
In this video, I'll discuss platform-specific security. Many of the security strategies that we might adopt
in a cloud computing environment would also apply to an on-premises environment. However, with
the cloud there are some notable differences. One of which is that an organization or an individual
that subscribes to a cloud provider for their services loses some control because it's running on cloud
provider infrastructure. Secondly, we have a dependency on an Internet connection in the case of a
public cloud provider before we can access those IT services. So we should adopt a methodical
approach for security that includes conducting a risk analysis where we identify assets, which could
include data, and the threats related to them. After which we can then determine how to prevent
attacks; how to detect them as they occur – perhaps through some kind of intrusion detection
systems; how to deter attacks – perhaps by encrypting data or using firewall rules; and how to deflect
attacks.

Deflection might come in the form of a honeypot, which is a system that is made to look vulnerable.
And it was intentionally made that way by its owners. This way we can deflect attention from real
assets to honeypots. Then there needs to be attack response and recovery in the event that they
actually do occur against assets. So to deploy defense mechanisms, we first have to have
organizational security policies in place that address these types of issues. After which, we would
then determine which hardware and software controls must be put in place to mitigate or reduce the
possibility of these threats. In some cases, that can also include physical measures such as locking
racks in a server room and locking a server room door. Auditing will track all the usage related to all
of these cloud IT services. So we can determine whether or not anybody is abusing their power. For
platform-specific security, we have to ask ourselves who is responsible for security when it involves a
cloud service provider? Certainly some of the burden would fall on us – the consumer. But at the
same time some of the burden of security must also be the responsibility of the cloud service
provider.

The answer to who is responsible for security is that it depends on the platform you're using in the
cloud and the specific cloud service category. The Cloud Security Alliance or CSA defines the issue
at hand by discussing the type of cloud service that users are consuming. If you are a consumer and
you've established that you are not responsible for certain security aspects, then you must still discover the
security measures that are being implemented by the cloud service provider. The CSA Cloud Risk
Accumulation Model gives us an understanding of the dependencies between the different service
models – infrastructure as a service, platform as a service, and software as a service. If
we take a look at how this works, most of the security burden falls on us – the cloud service customer
– when it comes to IaaS, infrastructure as a service. This is because we – the cloud consumers – are
deploying infrastructure items like virtual networks in the cloud, storage, and so on. So we have the
responsibility of dealing with the security. Now, if we look at the opposite end of the scale and we look
at software as a service, this is software that is made available by the provider for users to use. In
that case, much of the security falls on the cloud service provider because we have very little control
of that offering.

So with infrastructure as a service, we're dealing with networking in the cloud, storage, server
virtualization. This is stuff that would be managed, in some cases, by the cloud service provider. Now
with platform as a service, we would be responsible for deploying, perhaps, virtual server
instances, applications, and data. But with software as a service, again most of the responsibility for
security would fall upon the cloud service provider. Some exceptions to that might be how we encrypt
data that results from using software as a service. That might be our responsibility. There are many
ISO information security standards. Some relate to IT service management, risk management, and
information security, including ISO/IEC 20000-9, which deals with the application of
IT service management to cloud services. In this video, we discussed platform-specific security.

Cloud - Data Life Cycle


Learning Objective
After completing this topic, you should be able to
detail the security-based data life cycle of cloud-hosted assets (data, files, features)

1.
In this video, I'll discuss the cloud data life cycle. The cloud data life cycle refers to cloud services,
which are often accessible over HTTP and HTTPS from the creation of that service to the storage of
data that results from it, the use of it, sharing of data, archiving of data, and the eventual destruction
of data. This is the cloud data life cycle, and it's crucial that security be thought of through each of
these phases whether we're using an existing solution or we're developing our own cloud service.
Data archiving and destruction, when we work within an on-premises IT environment, are completely
under our control. We own and manage the infrastructure. We control which tools get used and so
on. This is not the case in the public cloud. To a degree, we are at the mercy of the provider's
methods for archiving and data destruction. Data transmission to and from the cloud, in the case of
public cloud computing, normally takes place over the Internet. In some rare cases, you might have a
dedicated point-to-point or private network link between your on-premises network and the provider's
network. That's the exception. Normally transmission is over the Internet. So we need to make sure
that data is encrypted in transit. Then we have to think about data that's stored in the cloud,
especially if it's sensitive such as PII – Personally Identifiable Information. So the cryptographic
security of data, both at rest and in transit, is of paramount importance.

The nature of cloud computing dictates that security needs to be built in at each phase of this cloud
data life cycle instead of tacked on at the end, once we've already built a solution. It
needs to be thought of within each phase. Proper data governance will have policies that dictate that
this would be done and determine under whose responsibility data ownership falls. In most cases,
that would fall under the responsibility of the cloud service consumer. Other data life cycle
considerations are how data gets used or processed in various life cycle phases. For example, how
is data moved when we are using it versus when we are archiving it? Perhaps it's encrypted when
we're using it, but it doesn't get encrypted when we transmit it to an archive location and when it gets
stored. So we have to consider those facts. We also have to think about who accesses data in
various life cycle phases of that data. We might have a different group of users that have read-write
access when data gets created during its useful life. However, once it's archived, there might be a
different set of users that have access such as auditors, for example.
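One way to picture per-phase access control is a simple mapping from life cycle phase to the roles allowed in that phase. The phase and role names below are made up for illustration:

```python
# Hypothetical mapping from data life cycle phase to the roles permitted
# to access data while it is in that phase.
PHASE_ACCESS = {
    "create":  {"owner", "editor"},
    "use":     {"owner", "editor", "reader"},
    "share":   {"owner", "reader"},
    "archive": {"owner", "auditor"},   # auditors gain access only here
    "destroy": {"owner"},
}

def can_access(role: str, phase: str) -> bool:
    """True if the role may access data while it is in the given phase."""
    return role in PHASE_ACCESS.get(phase, set())

print(can_access("auditor", "archive"))  # True: auditors can read archives
print(can_access("auditor", "use"))      # False: no access during active use
```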

So data access can be controlled through each phase. We need to make sure that we know the
security mechanisms that are being used by the cloud service provider offerings. And in some cases,
we would map security controls as appropriate. For example, we might be able to configure a private
network in the cloud where we would host database server instances that are protected. In other
words, maybe they're not reachable through a public IP address. Cloud security controls can relate to
either structured or unstructured data. Structured data, for example, would be records stored within a
database application. And we could secure that data, for example, at the record or at the field level
with encryption. Unstructured data, for example, would be data stored on removable media. We
might have policies in place that require removable media devices like USB thumb drives to be
encrypted before data can be stored on them. In terms of Digital Rights Management, or DRM, we
could also consider tagging or labeling data in specific ways that control who has access to that data.

[Heading: Cloud Security Design – Cloud Data Life Cycle. A table with three columns – CONTROL,
STRUCTURED/APPLICATION, and UNSTRUCTURED – and four rows is displayed:

Access Control. Structured/Application: DBMS Access Controls; Administrator Separation of Duties.
Unstructured: File System Access Controls; Application/Document Management System Access Controls.

Encryption. Structured/Application: Field Level Encryption; Application Level Encryption;
Transparent Database Encryption. Unstructured: Media Encryption; Virtual Private Storage;
File/Folder Encryption; Distributed Encryption.

Rights Management. Structured/Application: Application Logic; Tagging/Labelling.
Unstructured: Tagging/Labelling; Enterprise DRM.

Content Discovery. Structured/Application: Cloud-Provided Database Discovery Tool; Database
Discovery/DAM; DLP/CMP Discovery. Unstructured: Cloud-Provided Content Discovery; DLP/CMP
Content Discovery.]

In the cloud, we should also enable the use of data auditing and management tools, so that we can
monitor file and data access. This can be done through file access management systems and data
access management systems. Many organizations that require a higher level of security will use data
classification within the data life cycle framework. This allows us to define and assign relative values
to data assets. These values are then used to categorize stored data by sensitivity and business
importance levels. We get to make up the labels. This is a continuous process, not just initially when
data is created. Over a period of time, after data perhaps has been archived for a number of years,
we might loosen the security labels associated with that data. Sensitive and confidential data assets
should be treated with more care than assets that are considered public or less important, and this is
done through the labeling. All stakeholders that are affected by this should be involved in the data
classification processes. So for example, in a military installation, we might have different
classifications related to security clearance levels like top secret versus secret.
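The idea of relative sensitivity values can be sketched as a simple comparison. The labels and their numeric values below are made up; real schemes are defined by the organization's stakeholders:

```python
# Toy classification scheme: higher values mean more sensitive data.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "top_secret": 3}

def clearance_allows(user_level: str, data_label: str) -> bool:
    """A user may read data labelled at or below their clearance level."""
    return LEVELS[user_level] >= LEVELS[data_label]

print(clearance_allows("confidential", "internal"))   # True
print(clearance_allows("internal", "top_secret"))     # False
```

In practice the label travels with the data as metadata, and labels are reviewed over time, since archived data may eventually warrant a less restrictive classification.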

The Cloud Security Alliance's Cloud Control Matrix deals with data governance, and it deals with how
data will be assigned a classification label, which could be based on many criteria such as the type of
data, for example, if it's PII – Personally Identifiable Information – that's sensitive data, and it might
be classified accordingly, whereas documentation related to a vendor service on their web site might
be classified as public, meaning that the data would be publicly available. There are
many relevant regulations and standards that apply to data classification. The NIST as well as
Europe's ENISA framework have many documents related to this. Here in NIST publication SP 800-
53, we can see how the various security aspects are related to the cloud data life cycle, where we
categorize information systems, select security controls, implement them, assess their usefulness,
authorize information systems, and monitor those security controls on an ongoing basis. In this video,
we discussed the cloud data life cycle.

Cloud Service Continuity


Learning Objective
After completing this topic, you should be able to
describe business continuity and disaster recovery as it applies to a cloud service

1.
In this video, I'll talk about cloud service continuity. Business Continuity, or BC, is a crucial aspect of
planning for when bad things happen. We need to ensure everybody knows their role and what steps
are to be taken to ensure the continuity of business operations. In the cloud, we lose some control
because we don't entirely own the infrastructure and the configuration of resources running on that
infrastructure. The cloud provider has a lot of that control. So this means that we need to discover the
cloud service provider's business continuity mechanisms and failover procedures. They must meet
our specific business needs. We can also build continuity items into service level agreements. The
Service Level Agreement, or the SLA, is the contractual document between the cloud provider and
the cloud consumer or customer. In it, we might define items related to incident reporting to clients
such as security breaches at a data center, for instance. We also have to think about multi-location
backup provided by the cloud service provider. In some cases, that might be automated or we might
pay an additional fee.

The SLA will also determine the expected uptime, often expressed as a percentage. So we might have
99.99% uptime guaranteed for an e-commerce web site hosted in the cloud. We should also think
about service load balancing to distribute incoming traffic for a busy service like a web site and also
the service response times. Generally, if a web page takes more than two seconds to load, that's bad,
because most people will simply browse to another page instead. We should also determine the
availability of the cloud service provider's business
impact analysis to its cloud consumers. When building in resiliency, we need to determine to what
extent the cloud service provider implements an architecture that uses components that can
withstand failures such as clustering, whereby if one of the cloud services that we depend on is
running on a node and it fails, how quickly will it fail over to another cluster node that remains
running?
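To make an uptime percentage like 99.99% concrete, it helps to convert it into a downtime budget. A small sketch in Python (assuming a 730-hour month purely for illustration):

```python
def downtime_budget_minutes(sla_percent: float, period_hours: float = 730.0) -> float:
    """Maximum allowed downtime, in minutes, for a given SLA uptime
    percentage over a billing period (default is roughly one month)."""
    return period_hours * 60 * (1 - sla_percent / 100)

# Four nines allows only a few minutes of downtime per month;
# three nines allows the better part of an hour.
print(round(downtime_budget_minutes(99.99), 1))  # 4.4
print(round(downtime_budget_minutes(99.9), 1))   # 43.8
```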

In the event of a failure, what are the cloud service provider's capacity management methodologies
and defined reaction times? This relates to their RTO – Recovery Time Objective. This is the amount
of time that a service can be down, after which, if it's not returned to normal operations, we
can no longer run the business. So this is part of what we should be looking at as well. In some
cases, when it comes to capacity management, some cloud service providers have agreements with
other providers, whereby during busy times, for example, if there's an overload of requests for
storage that can't be met by one provider, that one provider will reach out to another provider and use
some of their infrastructure, for example, storage capacity. We also need to consider the physical
resilience of data centers owned by the cloud service provider. So how do things like fire, theft,
damage, floods, and so on affect the components and the running services in that specific data
center? Often cloud providers replicate data between data centers for this reason. The cloud service
provider should take on a design-for-failure culture. Cloud applications should be capable of running
in all clouds, not just one cloud, which could fail. Now that's not to say that we have to have the exact
same cloud services developed the same way among multiple cloud providers, but the data that
results from those services should be stored in a form that can be exported and imported freely
between varying systems. Maybe, for example, we're using the XML file format.
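As a sketch of that portability idea, here's a round trip through XML using Python's standard library – the order records are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical records produced by a cloud-hosted application.
orders = [{"id": "1001", "total": "59.99"}, {"id": "1002", "total": "12.50"}]

# Export to a standardized format so the data can move between
# providers or back to an on-premises system.
root = ET.Element("orders")
for order in orders:
    ET.SubElement(root, "order", attrib=order)
exported = ET.tostring(root, encoding="unicode")

# Import the same document on the receiving system.
restored = [dict(el.attrib) for el in ET.fromstring(exported).findall("order")]
assert restored == orders  # the round trip preserves the data
```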

Individual component failure shouldn't be allowed to impact other components, and this is where
component isolation or loose coupling comes into play. Databases in the cloud should be distributed
and resilient. So they should be distributed, meaning they should be available in more than a single
data center or region operated by the cloud service provider, and they should be resilient in that if the
host on which they are running fails, they should fail over to another cluster node. There are a few
barriers related to business continuity in the cloud. One is regulatory compliance, where, regardless
of what the cloud service provider offers, we must run some IT workloads or store some data locally.
There could be technological issues; for example, we might not have enough bandwidth in a specific
region to safely archive data to the cloud. And in some cases, the costs may not make sense in
terms of using specific cloud services for business continuity. In some cases, it might be more cost-
effective to use solutions that run on-premises. In this video, we discussed cloud service continuity.

Cloud Service Investment


Learning Objective
After completing this topic, you should be able to
define how a cloud deployment might be analyzed on a cost basis

1.
In this video, I'll talk about cloud service investments. There are some perceived advantages when
we decide to run IT services in a cloud environment. The first is a limited upfront investment in things
like hardware and infrastructure as well as things like software licenses, in some cases. When we
utilize cloud services, that becomes the responsibility of the cloud service provider. There's also the
advantage of rapid elasticity where we can very quickly provision new resources and de-provision
them when we don't need them. So if we need a new server, new e-mail accounts for employees
we've just hired, or new storage space, we can have that in a matter of moments. Whereas if we
weren't using a cloud system, we would have to order that hardware, wait for it to arrive, and configure it
could be used. Plus with rapid elasticity, when we de-provision resources that we don't require, we
aren't paying for them anymore. There's also the issue of mobile access to cloud services. One of the
characteristics of the cloud is that the services that are offered are available to anyone that's got an
Internet connection, at least in the case of the public cloud, using any type of device, be it a
smartphone, tablet, desktop PC, anything like that.

Another advantage is reduced manpower requirements on-premises because we might be going
completely with all IT services in the cloud. That would mean that we wouldn't need to have a local
data center or server room, which means we wouldn't need the personnel to man that equipment.
Now that's not to say we don't need any IT personnel, even if we're running all of our IT services in
the cloud. They are still required to administer the cloud services and to deal with incidents as they
come up. Service uptime is often guaranteed as a percentage in the service level agreement with a
cloud service provider. So instead of that burden being upon a local IT team on-premises, it's on the
cloud service provider. If that guarantee is violated, the cloud service provider might
offer free services for a period of time to make up for it. But this is something that you should check in
the service level agreement, which is negotiable. The lead time to production is reduced in the cloud
when it comes to developing products like software applications because it's very quick and easy to
provision resources such as databases and servers, even in an isolated virtual private network
designed in the cloud. There is also potential risk reduction: the availability and security of
data in the cloud are often assured as part of the service level agreement.

We also offload much of the security overhead. That is the responsibility of the cloud
service provider. And due to economies of scale and the fact that they have to undergo constant
third-party audits, the security done at a data center owned by a cloud provider is probably better
than what most small to medium-sized organizations could do themselves within their own data
centers or server rooms. To measure the value of our investment in cloud services, there are a
number of contributing factors, including the pricing and costing of cloud services. Depending on the
provider and the service in question, there might be a base rate or a subscription fee, and on top of
that there might be a usage fee that we pay monthly. In the cloud, everything is metered, so we pay
for only what we use. This leads to the return on investment, which should be calculated over time.
With cloud services, we really have what we would consider an operational expense monthly, instead
of a capital expenditure initially, for example, to purchase servers and infrastructure equipment. We
don't have to do that. The cloud service provider does that. We should also identify Key Performance
Indicators – KPIs – that will determine if our IT services running in the cloud are effective in meeting
our business needs and that those are being delivered efficiently and cost-effectively.
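A hypothetical metered pricing model – a flat subscription plus usage – can be compared against an up-front capital purchase. All the figures below are invented for illustration:

```python
def monthly_cost(base_fee: float, unit_price: float, units_used: float) -> float:
    """Metered pricing sketch: flat subscription fee plus a per-unit
    usage charge (e.g. per GB stored or per compute hour)."""
    return base_fee + unit_price * units_used

capex = 24000.0  # hypothetical up-front on-premises hardware cost
cloud_month = monthly_cost(base_fee=200.0, unit_price=0.10, units_used=5000)

# Break-even point: how many months of cloud spend equal the capital outlay.
print(round(cloud_month, 2))          # 700.0
print(round(capex / cloud_month, 1))  # 34.3
```

Naturally, a real comparison would also factor in staffing, power, cooling, and licensing costs, as discussed above.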

Risk management needs to be taken into account where we have to identify the assets that will be
running in the cloud and any threats against them, and we need to prioritize them to determine what
security controls need to be put in place. Now the security controls might be handled by the provider,
or by the cloud consumer, us, or a hybrid of both. If we're shopping around for cloud service
providers, it's crucial that we evaluate more than one and their service offerings that might meet our
business needs to determine the best fit. We might even take a look at consumer satisfaction due to
using a specific cloud service provider that might be done through contacting them or by looking at
reviews. Contributing factors to return on investment, or ROI, include the average cost of services. In
the cloud, remember, we might have a base monthly subscription fee and, on top of that, a usage
fee. So we have to consider that over time. That's not to say that using cloud IT services is always
cheaper because that's not necessarily the case. It just means that the cost is spread out over time.
Plus we have efficient resource provisioning. We only pay for what we're using. Consider the
example where a development team needs to order hardware on-premises to build and test and
deploy a new software solution. Well what if that project only lasts for four or five months? What about
all the hardware that was acquired for that project? Now there might be a way to efficiently reuse it,
but in the cloud, we simply de-provision the hardware and software that was being used, and we're
no longer paying for it in any way.

Also software licenses may not be required in some cases, depending on what we're deploying with
certain cloud providers. For example, if I'm deploying a database instance, I might be able to use my
own existing database license or I might be able to use one that's built in to a template when I deploy
that database in the cloud. There are also power consumption cost reductions because we don't
have to house as much local physical computing equipment. So there's less power consumption,
less cooling required, less square footage of office space needed, and so on. We also have
other benefits such as a centralized development environment using Platform as a Service. And
finally we have IT staff redeployment benefits where we might require fewer IT staff on-premises.
However, we still need some to manage our cloud services and to deal with incidents. And let's not
forget that to access cloud services, end users still require a computing device of some kind. Now it
might be a thin client. It might be a mobile phone. It might be a laptop. But it still needs to have a
connection over the Internet if we're using public cloud services. In this video, we discussed cloud
service investments.

Cloud Functional Security


Learning Objective
After completing this topic, you should be able to
define and describe focus areas relating to the functional security of the cloud
service including vendor lock-in, interoperability, portability, migration, etc.

1.
In this video, I'll talk about cloud functional security. One important aspect of cloud functionality is
interoperability, which relates to the encryption of data being transmitted or stored in the cloud
as well as the authentication mechanisms in use, which should be standardized mechanisms like
identity federation and single sign-on. Portability is the ability to port applications, systems, and the
resultant data between cloud service providers. This could be achieved either in a cloud service
provider web interface, or it might be done through a command-line or API interface. It might allow us
to export data to a standardized format like CSV, text, or XML so that we could reuse that data, either
in an on-premises system or with another cloud service provider. Cloud service provider lock-in
occurs when we have a cloud service provider with services that we depend on that are proprietary.
Cloud service provider transfer allows us to transfer our existing configurations and data to another
cloud service provider or to an on-premises network. Now that might happen in an automated fashion
through scripting or manually. We might do it through some kind of a web-graphical interface or at the
command line. And it could include data that results from applications we used in the cloud. It could
also include entire virtual machines with their virtual hard disks and their configuration files.
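One common way to soften vendor lock-in at the application layer is to code against a provider-neutral interface and keep provider-specific calls in small adapters. A minimal Python sketch – the interface and in-memory backend here are hypothetical stand-ins, not any provider's real SDK:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-neutral storage interface the application depends on."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; a real deployment would add one adapter
    per cloud service provider behind the same interface."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Application code is unaware of which provider is underneath;
    # switching providers means swapping the adapter, not this code.
    store.put(f"reports/{name}", body)

store = InMemoryStore()
archive_report(store, "q1.csv", b"id,total\n1001,59.99\n")
```

The design choice here is loose coupling: the application never names a specific provider, so a transfer to another CSP or back on-premises only touches the adapter layer.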

The lack of interoperability is a major showstopper to the adoption of cloud computing services. The
cloud service provider's products and services should work together, and they should require minimal
effort to integrate them into consumer systems. Now those consumer systems might be running on-
premises, or we can actually migrate services we're running on-premises to the cloud, for example,
within a virtual machine instance. So we want to make sure that they can talk together. The Open
Group states that it should be possible to write any user-specific software that might be required, so
that it's based on commonly available components that can easily be sourced from multiple suppliers.
So we're talking about standardization here and interoperability. The lack of portability is another
major barrier to the adoption of cloud computing services. A lack of portability means the inability to reuse certain
components or data when switching from one cloud component or service to another or even
between cloud platforms.

Vendor lock-in directly relates to poor interoperability and portability. Lock-in is a condition where the
customer using a cloud service provider product or service can't easily transition to a competitor
cloud service provider. Vendor transfer occurs when the open movement of data is supported by the
existing CSP. As we've stated, we might use a web interface or a command line tool or an API call to
transfer data between cloud service providers. We also have to consider how much time we'll need to
allocate for the transfer; whether we're bringing data back to our on-premises network from a cloud
service provider or moving data and services between providers, the amount of
data will determine how long it takes. In this video, we discussed cloud functional security.

Cloud Service Certification Assessment


Learning Objective
After completing this topic, you should be able to
describe methodologies for mapping cloud service requirements to service provider
certification and product certifications

1.
For widespread adoption of cloud computing services, customers need assurances that providers are
trustworthy and are doing everything in their power to protect their data. In this video, I'll talk about
cloud service certification. Currently, there is no single cloud-specific standard that governs cloud
computing in terms of development, provisioning of resources, use, and associated hardware.
Instead, there are some overarching standards that do apply to cloud computing data storage,
hardware and software that gets used, as well as some legal provisions, although the legal
provisions are generally regional – within a state, a province, or a country – due to jurisdiction. The
Cloud Security Alliance, or CSA, has the security guidance documentation that is freely available to
anybody on the Internet. As is ISO/IEC publication 15408, which is a series of standards governing
the Common Criteria or CC, by which IT products can be evaluated. We'll talk more about Common
Criteria in a few minutes. The ISO 20000-7 standard is currently in development. It takes ISO/IEC
20000-1 and applies it to cloud computing technologies. ISO 20000 relates to information technology
service management, so that guidance is being adapted so it can be applied to cloud computing
services.

The Cloud Standards Customer Council was founded by IBM and others. It's an umbrella
organization that promotes and supports cloud standards and cloud direction. Its members include
large multinational firms, including Microsoft, IBM group, Oracle, AT&T, and others. Here on the
Internet, the Cloud Standards Customer Council web page has categories where we can learn about
security related to cloud computing, interoperability and portability, mobile device access to cloud
architecture, as well as cloud service-level agreements. The Cloud Security Alliance is another body
that includes members such as SAP, Microsoft, HP, as well as others. They publish the "Security
Guidance for Critical Areas of Focus in Cloud Computing." And they also maintain the STAR
program. STAR stands for Security, Trust & Assurance Registry. This is where cloud providers can
certify that their security mechanisms are acceptable with the services that they offer their customers.
The CSA publishes the Cloud Control Matrix, which is used by cloud providers to assure that they are
in line with industry accepted standards related to security provisioning. It also allows customers to
be assured that the providers are following through with those standards. CSA also actively monitors
other standards agencies, including Europe's ENISA and so on.

Since 1985, the U.S. federal government has maintained a set of evaluation criteria for judging and
assessing the security of computer systems, including hardware, firmware, and software. This was
developed with international coordination, so it's internationally recognized. The evaluation
process is known as Common Criteria for Information Technology Security Evaluation or Common
Criteria or simply CC. Within the CC process, various classes of products such as operating systems
get evaluated against predefined security functional and assurance requirements. The CC framework
promotes product comparability by providing a common set of requirements for evaluating security
functionality of products like hardware, firmware, and software. A vendor can make claims regarding
the security attributes of their products, which then get evaluated and certified by third-party testing.
This way, customers can rest assured that an IT service offering in the cloud or a hardware product
or even software is to be trusted. In this video, we discussed cloud service certification.

Product Certification
Learning Objective
After completing this topic, you should be able to
outline methodologies for mapping cloud components to appropriate or required
industry certifications or industry standards

1.
In this video, I'll discuss product certifications. Product certifications act as a component of due
diligence, and it's important to establish cloud service provider compliance with reference to
well-established standards. Some of these standards include the ISO/IEC series such as publication
15408-1, which deals with information technology security techniques. Also we've got the NIST FIPS
140-2 compliance – Security Requirements for Cryptographic Modules. Many cloud service providers
will publish their compliance with specific standards. For example, with Microsoft Azure, we can see
there is documentation related to compliance with FIPS 140-2, Federal Information Processing
Standards publication. And if we were to peruse this page a little bit further, we would see that it
describes how Microsoft Azure cloud services comply with cryptographic module requirements
across their product line.

Common Criteria is published as a set of distinct but related standards. Part 1 deals with an
introduction and framework to common criteria, where general concepts and principles are discussed
for IT security evaluation of products. Part 2, the functional components of security, talks about the
functional components that comprise things like a software component or a hardware cryptographic
module. These are templates that are used when evaluating products. Part 3 deals with security
assurance components. This is a set of assurance components that serve as templates when
evaluating assurance requirements for products. It also defines evaluation criteria for protection
profiles and security targets and presents seven predefined attainable assurance packages which
are called evaluation assurance levels. These levels are used by consumers to determine how
trustworthy an IT solution is. The ISO/IEC 15408 series uses a similar type of evaluation process. It
defines the product that's being evaluated as the Target of Evaluation, shortened to TOE.

A protection profile is a list of security requirements defined by the product class or type such as
smart cards, firewalls, or operating systems. The security target is a specific security property of
the target being evaluated. Security functional requirements define security functions that must be
provided by a product to receive certification. This evaluation process establishes a level of trust and
confidence in the target of evaluation via a quality assurance process. Evaluation Assurance Level or
EAL 7 is the most stringent accreditation test and top achievement possible with common criteria.
NIST FIPS 140-2 is the Federal Information Processing Standard published by NIST. These common
criteria are used when assessing hardware and software cryptographic modules in computers and
services, as we saw with Microsoft Azure. There are four security levels: 1, 2, 3, and 4. One is the
lowest level of security, where security level 4 deals with the protection to attacks when operation
parameters are outside of the normal working range, which occurs often with various types of
network attacks. In this video, we discussed product certification.

Exercise: Specifying Architectural Security


Learning Objective
After completing this topic, you should be able to
define Cloud Service roles, categories, and services; describe data state and data
asset classification with reference to security; and outline the purpose of Common
Criteria

1.
In this exercise, you'll begin by listing standards-defined roles and sub-roles associated with the
cloud computing service. Then you'll define the correlation between cloud capabilities and cloud
service categories. Then you'll describe the various data states. And finally, you'll describe the
purpose of the Common Criteria framework. Now pause the video and perform the exercise steps and
then come back to view the solutions.

Standards-defined roles include the Cloud Service Customer, or CSC; the Cloud Service Provider,
CSP; and the Cloud Service Partner, the CSN. The cloud service customer or consumer is the cloud
service user. The user is one of the sub-roles, as is the cloud service administrator. On the cloud
service provider side, sub-roles include customer support and care representatives, as well as cloud
service managers. For the cloud service partner, sub-roles include cloud service developer, cloud
service broker, and so on. Cloud capability types include infrastructure capability, platform capability,
and application capability. These relate to the following cloud service categories: Infrastructure as a
Service, Platform as a Service, and Software as a Service. Infrastructure as a Service is a listing or a
categorization of cloud offerings related to items such as provisioning storage or networks in the
cloud or virtual server instances. Platform as a Service is of interest to developers where they can
provision virtual server instances for testing and development of software applications, database
instances, and they can also use development tools made available by the cloud service provider.
Software as a Service offers end-user productivity tools like word processors, e-mail in the cloud, and
so on that users would connect to using any device over a network.

There are three data states: Data in Use, Data at Rest, and Data in Motion. Data in Use, for example,
would be files or database records that are constantly being used by a system or by a user. Data at
Rest is data that is being stored on a storage medium, even in the case of archival over the long-
term. Data in Motion is data that is in the midst of being transmitted over a network such as from a
customer on-premises network over the Internet to the cloud provider's network infrastructure. These
various data states should take security into consideration. For example, for Data at Rest we might
take advantage of a cloud service provider's server-side encryption if that's available. In the case of
Data in Motion we might make sure that we have a VPN tunnel between our on-premises network,
and the cloud provider, through which data is securely transmitted, or we might use some kind of
transport-level encryption like HTTPS. Common Criteria, or CC, is an internationally accepted set of
standards that are used to evaluate the security aspect of hardware, software, and firmware. Vendors
can submit their products for evaluation. There are a number of evaluation assurance levels, from one
to seven, where one is the lowest and seven is the highest.

© 2018 Skillsoft Ireland Limited
