
Certified Cloud Security Professional (CCSP®)

Certified Cloud Security Professional is a registered trademark of (ISC)²


Course Introduction



Overview

• (ISC)²
• About This Course
• CCSP®
• Maintenance and Endorsement
• Candidates' Profile
• Exam Information
Certifying Body

• International Information System Security Certification Consortium
• Founded in 1989
• Nonprofit organization
• Issues security certifications

(ISC)² Responsibilities

• Certifications
• Common Body of Knowledge (CBK)
• Exams
• Accreditation
CCSP® Certification

CCSP® was developed jointly by (ISC)² and the Cloud Security Alliance (CSA).
Benefits of CCSP® Certificate

• CCSP® is specifically designed to enhance security practices and principles.

• CCSP® is vendor-neutral and recognized across various platforms.

• CCSP® certification opens doors to new opportunities in the IT industry.


Features of CCSP®

• Adheres to the Common Body of Knowledge
• Is accredited by ANSI
• Complements existing programs such as CCSK and CISSP
• Follows ANSI/ISO/IEC Standard 17024
Target Audience

• Professionals involved in IT architecture, web and cloud security engineering, information security, governance, risk and compliance, and IT auditing

• Employees responsible for procuring, securing, and managing cloud environments or purchased cloud services

Typical roles: Enterprise Architect, Security Administrator, Systems Architect, Security Consultant, Security Engineer, Security Manager, Systems Engineer, and Security Architect
Eligibility

5 years of cumulative work experience in Information Technology, including:

• 3 years of full-time work experience in information security
• 1 year of experience in 1 or more of the 6 domains of the CCSP® CBK

Candidates without the required experience can:

• Pass the examination to become an Associate of (ISC)²
• Earn the required experience within a period of 6 years
Registration

01 Visit http://pearsonvue.com/isc2/

02 Create a user account

03 Select the nearest Pearson VUE testing center

04 Pay the examination fee

05 Schedule your examination
Examination Information

• Examination duration: 3 hours

• Passing grade: 700/1000

• Number of questions: 125

• Question format: Multiple choice

• Language: English

• Test center: Pearson VUE Testing Center
Examination Weights

Domains Weight
Cloud Concepts, Architecture, and Design 17%
Cloud Data Security 19%
Cloud Platform and Infrastructure Security 17%
Cloud Application Security 17%
Cloud Security Operations 17%
Legal, Risk, and Compliance 13%
Total 100%

NOTE:
CCSP maintenance: The AMF (Annual Maintenance Fee) for CCSP is US$125. This revised AMF rate is effective from July 1, 2019.
Endorsement Process

• Subscribe to the (ISC)² Code of Ethics.

• Get an endorsement form signed by an (ISC)²-certified professional.

• Get certified within nine months of the date of your exam or become an Associate of (ISC)²; otherwise, retake the exam in order to become certified.

• In the absence of endorsement from any certified professional, you can be endorsed by (ISC)² itself.
Certification Maintenance

01 Recertification cycle: 3 years

02 CPE: 30 credits per year

03 Fee: US$125 (AMF)

Audit Notice: Candidates who pass are randomly selected and audited by (ISC)² Member Services prior to issuance of any certificate. Multiple certifications can result in a candidate being audited more than once.
Course Objectives

Domain 1: Cloud Concepts, Architecture, and Design

Domain 2: Cloud Data Security

Domain 3: Cloud Platform and Infrastructure Security

Domain 4: Cloud Application Security

Domain 5: Cloud Security Operations

Domain 6: Legal, Risk, and Compliance
Course Highlights

• 6 domains

• Revised curriculum

• Real-world scenarios

• 7 case studies

• Knowledge checks

• Course-end assessment
Certified Cloud Security Professional (CCSP®)



Cloud Concepts, Architecture, and Design



Learning Objectives

By the end of this domain, you will be able to:

• Explain the various roles, characteristics, and technologies related to Cloud Computing

• Describe concepts related to Cloud Computing activities, capabilities, categories, models, and cross-cutting aspects

• Identify, describe, and define the design principles necessary for secure Cloud Computing in different types of cloud categories

• Identify national, international, and industry-specific criteria for certifying cloud service providers

• Identify criteria specific to system and subsystem product certification
Security Concepts
• Confidentiality, integrity, and availability are collectively known as the CIA triad.

• This model guides policies for information security within an organization.

• Information security protects valuable information from unauthorized access, modification, and distribution.
Security Concepts

Confidentiality

The Confidentiality principle asserts that only authorized parties can access information and functions.

Example: Military secrets
Security Concepts

Integrity

The Integrity principle asserts that information and functions can be added, altered, or removed only by authorized people and means.

Example: Detecting and rejecting incorrect data entered by a user into a database
Security Concepts

Availability

The Availability principle asserts that systems, functions, and data must be available on demand according to agreed-upon parameters based on the levels of service.

Example: A network load balancing solution
Security Concepts

Key security concepts are:


▪ Separation of duties

▪ Job rotation

▪ Mandatory vacations

▪ Dual control

▪ Split knowledge

▪ Principle of least privilege

▪ Need-to-know principle
Security Concepts

Defense in Depth

A practice of having multiple overlapping methods for securing the environment.

These should include a blend of:

▪ Administrative controls

▪ Logical/Technical controls

▪ Physical controls

Typical layers, from outermost to innermost: Policies, Procedures, and Awareness; Physical; Perimeter; Internal Networks; Host; Application; Data.


Security Concepts

Due diligence is the act of investigating and understanding the risks a company faces.

Examples: Software testing, vulnerability assessment, cloud vendor evaluation, and carefully scrutinizing the details of contracts and SLAs.

Due care describes the development and implementation of policies which ensure that a minimal level of protection is in place to protect a company, its assets, and its people.

Examples: Physical security, taking regular backups, updating software patches, and implementing network firewalls.
Security Control

Security controls are the countermeasures taken to safeguard an information system from attacks against confidentiality, integrity, and availability.

Categories of security control for Cloud Computing:

▪ Administrative/Directive controls: Implemented by creating and following organizational policies, procedures, or regulations.

▪ Technical/Logical controls: Implemented by using software, hardware, or firmware that restricts access to information systems.
Examples: Firewalls, routers, and encryption.

▪ Physical controls: Implemented by installing fences and locks, and hiring security guards.
Security Control Functionalities
Types of security control:

▪ Preventive: Avoids an incident from occurring
Examples: Fences, locks, biometrics, mantraps, separation of duties, job rotation, antivirus software, firewalls, etc.

▪ Detective: Identifies an incident's activities and potentially an intruder
Examples: Security guards, CCTV, job rotation, mandatory vacations, audit trails, etc.

▪ Corrective: Fixes components or systems after an incident has occurred
Examples: Backups and restore plans

▪ Deterrent: Discourages a potential attacker
Examples: Policies, NDAs, CCTV, etc.

▪ Recovery: Reverts the environment back to regular operations
Examples: Backups, restores, fault-tolerant systems, server clustering, and database and virtual machine shadowing

▪ Compensating: Provides an alternative measure of control
Example: CCTV
Cloud Computing Concepts
Cloud Computing

“Cloud Computing is a model for enabling ubiquitous, convenient, on-demand


network access to a shared pool of configurable computing resources (e.g.,
networks, servers, storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or service provider
interaction.”

NIST SP 800-145
Business Drivers for Cloud Computing

Major business drivers:

• Business growth and agility (Scalability/Elasticity)

• Capital expenditures (CapEx to OpEx)

• Business Continuity/Disaster Recovery

• Mobility

• Collaboration and innovation

• Cost (Pay per usage)


Cloud Computing Concepts

Scalability is based on the need and investment. The customer can increase or decrease
computing resources such as storage, computing power, and network bandwidth dynamically.
Scaling can be either vertical (scaling up) or horizontal (scaling out).

Vertical scaling (scaling up): Adding resources, such as CPU or memory, to an existing instance.

Horizontal scaling (scaling out): Adding more instances and distributing the load across them.
Cloud Computing Concepts

Elasticity refers to the ability of a service to scale in and out depending on demand.

For example, a website might be hosted on a single virtual machine, and as more users connect
to the website, one or more virtual machines can be automatically brought online to handle the
load.
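The scale-in/scale-out decision can be reduced to a small calculation. Below is a minimal sketch of a threshold-based policy, assuming a 60% target utilization and illustrative instance limits (not any specific provider's autoscaler):

```python
import math

def desired_instances(current: int, load_per_instance: float,
                      target: float = 0.6, floor: int = 1, ceiling: int = 10) -> int:
    """Return how many instances bring average utilization back to the
    target, clamped to the allowed fleet size."""
    needed = math.ceil(current * load_per_instance / target)
    return max(floor, min(ceiling, needed))

print(desired_instances(current=1, load_per_instance=0.9))  # 2 -> scale out
print(desired_instances(current=4, load_per_instance=0.2))  # 2 -> scale in
```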
Cloud Computing Concepts

Vendor lock-in occurs when a customer is unable to leave, migrate, or transfer to an alternate
provider due to technical or non-technical constraints.

Vendor lock-out occurs when a customer is unable to recover or access their own data due to
the cloud provider going into bankruptcy or otherwise leaving the market.
Cloud Computing Concepts

Advantages of a cloud-based solution:

• Cost-effective

• Easy to utilize

• Better quality of service

• Reliable

• Easy to outsource

• Easy maintenance and upgrades
Cloud Reference Architecture
The Conceptual Reference Model

The NIST conceptual reference model (SP 500-292) describes five actors: Cloud Consumer, Cloud Provider, Cloud Auditor, Cloud Broker, and Cloud Carrier.

• Cloud Provider: Service Orchestration (Service Layer with SaaS, PaaS, and IaaS; Resource Abstraction and Control Layer; Physical Resource Layer with hardware and facility), Cloud Service Management (Business Support, Provisioning/Configuration, and Portability/Interoperability), Security, and Privacy

• Cloud Auditor: Security audit, privacy impact audit, and performance audit

• Cloud Broker: Service intermediation, service aggregation, and service arbitrage

• Cloud Carrier: Provides connectivity between Cloud Consumers and Cloud Providers
Cloud Computing Roles

Cloud Consumer A person or organization that uses services from Cloud Providers.

Cloud Provider A person, organization, or entity responsible for making a service available
to interested parties.
Cloud Auditor A party that conducts independent assessment of cloud services,
information system operations, and performance and security of the cloud
implementation.
Cloud Broker An entity that manages the use, performance and delivery of cloud
services, and negotiates relationships between Cloud Providers and Cloud
Consumers.
Cloud Carrier An intermediary that provides connectivity and transport of cloud services
from Cloud Providers to Cloud Consumers.

Regulators The entities that ensure organizations are in compliance with the
regulatory framework. These can be government agencies, certification
bodies, or parties to a contract.
Cloud Actors

The five cloud actors: Cloud Consumer, Cloud Carrier, Cloud Provider, Cloud Broker, and Cloud Auditor
Infrastructure as a Service (IaaS)

“The capability provided to the consumer is to provision processing, storage, networks, and
other fundamental computing resources where the consumer is able to deploy and run arbitrary
software, which can include OSs and applications. The consumer does not manage or control
the underlying cloud infrastructure but has control over OSs, storage, and deployed
applications; and possibly limited control of select networking components (e.g., host firewalls).”

- NIST SP 800-145
Infrastructure as a Service

Manage
• Provides GUI and API access to infrastructure configuration and reporting
• Implemented as a stand-alone application that integrates with underlying cloud components
• Must robustly control access through strong authentication and authorization

Compute
• Encapsulates CPU processing time and RAM working space
• Implemented by hypervisors, containers, and bare metal
• Must isolate different users' workloads

Network
• Provides intra-, inter-, and extra-cloud communications
• May be virtualized within a hypervisor, or carefully configured as with bare metal
• Must isolate different workloads' communications

Storage
• Provides block storage, often with additional functionality
• Implemented as virtual disks within a hypervisor, or with direct access to physical storage
• Must isolate different workloads from stored data

Database
• Provides simplified access to database services
• Implemented as SQL or NoSQL engines, often managed for the cloud as a whole
• Must isolate different workloads from data, and scale dynamically as workload demands
Infrastructure as a Service (IaaS)

Key characteristics of IaaS:


• Scalability

• Reduced cost of ownership of physical hardware

• High availability

• Physical security requirements

• Location and access independence

• Metered usage

• Potential for “green” data centers


Infrastructure as a Service (IaaS)

Security concerns for IaaS:

• Multitenancy
• Co-location
• Hypervisor security and attacks
• Network security
• Virtual machine attacks
• Virtual switch attacks
• Denial-of-Service attacks (DoS)
Platform as a Service (PaaS)

“The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or
acquired applications created using programming languages, libraries, services, and tools supported by
the provider. The consumer does not manage or control the underlying cloud infrastructure, including
network, servers, OSs, or storage, but has control over the deployed applications and possibly
configuration settings for the application-hosting environment.”

- NIST SP 800-145
Platform as a Service (PaaS)

Key characteristics of PaaS:

• Performs auto-scaling

• Supports multiple languages and frameworks

• Supports multiple host environments

• Works on “Choice environments”

• Upgrades are simple

• Cost-effective

• Access is easy

• Responsible for licensing


Platform as a Service (PaaS)

Security concerns for PaaS:

• System isolation
• User permissions
• User access
• Malware, trojans, and backdoors
Software as a Service (SaaS)

“The capability provided to the consumer is to use the provider’s applications running on a cloud
infrastructure. The applications are accessible from various client devices through either a thin client
interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not
manage or control the underlying cloud infrastructure including networks, servers, operating systems,
storage, or even individual application capabilities, with the possible exception of limited user-specific
application configuration settings.”

- NIST SP 800-145
Software as a Service (SaaS)

Key characteristics of SaaS:


• Reduced overall costs

• Licensing

• Ease of use and administration

• Standardization

• Automatic updates and patch management


Software as a Service (SaaS)

Security concerns for SaaS:

• Web application security
• User permissions
• User access
• Malware, trojans, and backdoors
Cloud Service Categories

The responsibility for each layer of the stack (applications, security, databases, operating systems, virtualization, servers, storage, networking, and data centers) shifts from customer to provider as you move from enterprise IT to SaaS:

• Enterprise IT: The customer manages the entire stack.

• IaaS: The provider manages virtualization, servers, storage, networking, and data centers; the customer manages the operating systems, databases, security, and applications above them.

• PaaS: The provider additionally manages the operating systems (and typically the databases and middleware); the customer manages the applications deployed on the platform.

• SaaS: The provider manages the entire stack, including the application itself.
Cloud Service Categories and Their Applications

The cloud stack places SaaS on top (presentation modality and platform, APIs, applications, data, metadata, and content), PaaS in the middle (integration and middleware: databases, messaging, queuing, and IAM/authentication), and IaaS at the base (core connectivity and delivery, abstraction such as virtual machine monitors and grid/cluster images, hardware, and facilities such as power, HVAC, and space).

Example services. SaaS: Salesforce.com, Google Apps, Oracle OnDemand. PaaS: Google AppEngine, Force.com, GoGrid CloudCentre API. IaaS: Amazon EC2, GoGrid, FlexiScale.
Cloud Deployment Models

Public Cloud

“The cloud infrastructure is provisioned for open use by the general public. It may be owned,
managed, and operated by a business, academic, or government organization, or some
combination of them. It exists on the premises of the cloud provider.”

- NIST SP 800-145
Cloud Deployment Models

Key characteristics of Public Cloud:

• It is available to the general public.

• It is located at the premises of the cloud provider and may be owned by a private company, organization, academic institution, or a combination of owners.

• It is easy to set up and inexpensive for the customer.

• Customers pay only for the services consumed.
Cloud Deployment Models

Private Cloud

“The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple
consumers (e.g., business units). It may be owned, managed, and operated by the organization, a
third party, or some combination of them, and it may exist on or off premises.”

- NIST SP 800-145
Cloud Deployment Models

Key characteristics of Private Cloud:

• Owned and controlled by a single entity

• Primarily used by the entity for its own purposes, but can be open to collaboration with other organizations

• Located on or off premises

• Used by different departments, often with internal billing
Cloud Deployment Models

Hybrid Cloud

“The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private,
community, or public) that remain unique entities, but are bound together by standardized or proprietary
technology that enables data and application portability (e.g., cloud bursting for load balancing between
clouds).”

- NIST SP 800-145
Cloud Deployment Models

Key characteristics of Hybrid Cloud:

• Composed of two or more different cloud models (Public, Private, and Community)

• Uses standardized or proprietary technologies that enable portability between models

• Typically leveraged for load balancing, high availability, or disaster recovery
Cloud Deployment Models

Community Cloud

“The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from
organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance
considerations). It may be owned, managed, and operated by one or more of the organizations in the
community, a third party, or some combination of them, and it may exist on or off premises.”

- NIST SP 800-145
Cloud Deployment Models

Key characteristics of Community Cloud:

• Owned by a group of similar organizations for use within the group

• Models and features are similar to Private Cloud

• Managed and controlled by the member organizations

• May exist on or off premises of the ownership organization


Service Models and Characteristics

Essential characteristics: Broad Network Access, Measured Service, On-Demand Self-Service, Rapid Elasticity, and Resource Pooling

Service models: IaaS, PaaS, and SaaS

Deployment models: Public, Private, Hybrid, and Community
Comparison of Cloud Deployment Models

Private Community Public Hybrid

Scalability Limited Limited Very high Very high

Security Most secure option Very secure Moderately secure Very secure

Performance Very good Very good Low to medium Good

Reliability Very high Very high Medium Medium to high

Cost High Medium Low Medium


Case Study: Hybrid Cloud

Business problem: An enterprise manages its own dedicated infrastructure in an on-premises data center. Employees are complaining about network speed.

Investigation report: The IT department notices high traffic inflow and outflow from the research department into Amazon EC2.

Outcome: EC2 is an efficient solution, so IT took control of the web services and centralized their management, increasing the bandwidth with the existing WAN provider.
Case Study: Hybrid Cloud

Business problem: A few weeks later, employees still face latency and downtime issues.

Solution: The IT department decides to offload large HR workloads into the pay-as-you-go model of EC2.

Business problem: Offloading HR workloads requires a new connection to the private data center. The new connection requires an increase in bandwidth with the WAN provider, who is charging more.

Solution: The IT department decides to move its assets to an AWS facility. This allows a direct connection to AWS rather than going through a network loop.

Outcome: The enterprise saves 40% of bandwidth costs for EC2 and finds a network provider within the same facility. Hence, the enterprise meets its requirements at low cost for its overall corporate WAN.
Cloud Security Vulnerabilities

Typical vulnerabilities by layer:

• Applications: CSRF, SQL injection, XSS, buffer overflows, and TOCTTOU

• OS: Rootkits and trojans

• Hardware: Hypervisor vulnerabilities
Cloud Technology Roadmap

A cloud technology roadmap helps to develop secure identity, access, and compliance management configurations and practices.

Characteristics of a cloud technology roadmap:

• Interoperability
• Portability
• Availability
• Security
• Privacy
• Resiliency
• Performance
• Governance
• SLAs
• Auditability
• Regulatory compliance
Impact of Related Technologies

Artificial Intelligence
Artificial intelligence (AI) is a broad concept that addresses the use of computers to mimic the
cognitive functions of humans.

It covers topics related to automated capabilities of learning, problem-solving, reasoning,


interpreting complex data, game playing, speech recognition, social intelligence, and perception.

Artificial intelligence is steadily making its way into enterprise applications in areas such as
customer support, fraud detection, and business intelligence.
Impact of Related Technologies

Machine Learning

Machine Learning is a subset of AI and focuses on the ability of machines to automatically learn and
improve from experience.

Cloud Computing provides two basic prerequisites for running an AI system efficiently: it is economically scalable, and it provides low-cost resources and processing power to crunch huge amounts of data.

Amazon Web Services, for example, supports machine learning using AWS algorithms to read native AWS data (such as RDS, Redshift, and S3). Google supports predictive analytics with its Google Prediction API, and Microsoft provides an Azure machine-learning service.
Impact of Related Technologies

The Internet of Things (IoT)


It is the global network of connected embedded systems, ranging from smart household appliances to self-driving cars to real-time accident prediction, that are able to communicate over the Internet.

Key cloud security issues related to IoT include:

• Authentication: Most IoT devices have very poor authentication/authorization support.


• Encryption: Many IoT devices use weak, outdated, or non-existent encryption which places data
and the devices at risk.
• Patching: Most IoT vendors follow poor security practices and poor patching practices resulting in
insecure products.
• Privacy: A user’s personal information could be collected and used insecurely, improperly, or
without permission.
Impact of Related Technologies

Blockchain can be defined as a public ledger network for secure online transactions with virtual currencies. Transaction records are encrypted by using cryptographic methods and executed in a distributed computer network as blockchain software.

Each block in the blockchain is cryptographically linked to the previous block after validation and undergoing a consensus decision. Blockchain networks typically generate an enormous number of transactions.

Elasticity and scalability are some of the most important functionalities of cloud systems, providing on-demand cloud resources to support large volumes of generated data for a dynamically changing workload.

Another important issue regarding blockchain networks is system resilience and fault tolerance: a failure of any single node in the blockchain network should not affect the work of the whole system. Cloud services help in such cases through the replication of data stored in data centers and the use of multiple software applications.
Impact of Related Technologies

Containers
Containers provide a standard way to package an application's code, configurations, and
dependencies into a single object. This creates an isolation boundary at the application level rather
than at the server level.
Isolation allows container-based applications to be deployed easily and consistently, regardless of
whether the target environment is a private data center, the public cloud, or even a developer’s
personal laptop.
Impact of Related Technologies

Containers

If anything goes wrong in that single container (for example, security breach and excessive
consumption of resources by a process) it only affects that individual container and not the whole VM
or whole server.

Each container runs as a separate process that shares the resources of the underlying operating
system.
Impact of Related Technologies

Containers benefits:

Run anywhere: Containers package the code with the configuration files and dependencies it requires to run consistently in any environment. This also makes it easier for developers to test software across multiple environments.

Improve resource utilization: Containers are able to operate with the minimum amount of resources to
perform the task they were designed for; this can mean just a few pieces of software, libraries, and the
basics of an OS. This results in two or three times as many containers being able to be deployed on a server
than virtual machines.

Scale quickly: An orchestration system, such as Google Kubernetes, is capable of dynamically adjusting and
adapting to the changing needs, when the quantity of containers need to scale out. It can replicate
container images automatically and can remove them from the system.
Impact of Related Technologies

Quantum Computing:

Quantum computing is the next generation of computing. Unlike traditional computers, quantum computers derive their computing power by harnessing the power of quantum physics.

Though there have been rapid strides in quantum computing, we are still quite some distance away from creating a commercial and usable quantum computer.
Impact of Related Technologies

Quantum Computing:

Given that quantum computing will have the capability to solve in seconds certain problems that would take classical computers far longer, it poses a significant threat to the sustainability of current encryption.

IBM is one of the forerunners in quantum computing; it provides a cloud platform called IBM Q that gives the general public an experience of quantum computing.
Cryptography, Key Management, and Access Control
Cryptography

Encryption

Within a cloud environment, it is the duty of the Cloud Security Professional to evaluate the needs of the application, the technologies it employs, the types of data it contains, and the regulatory or contractual requirements for its protection and use.

Types of data that require protection:

• Data in transit
• Data at rest
Cryptography

Data in transit focuses on information or data while in transmission across systems and components,
and across internal and external (untrusted) networks.

When the information is traversing through trusted and untrusted networks, the opportunity for
interception, sniffing, or unauthorized access is heightened.
Cryptography

Data in transit includes the following scenarios:


• Data transiting from an end user endpoint on the Internet to a web-facing service in the cloud.

• Data moving between machines within the cloud, such as between a web virtual machine (VM) and a database.

• Data traversing trusted and untrusted networks.
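Encryption in transit typically means TLS with certificate and hostname verification. A minimal sketch using Python's standard library; the URL is illustrative:

```python
import ssl
import urllib.request

# The default context verifies the server certificate chain and hostname,
# which is what protects data in transit from interception.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS versions

with urllib.request.urlopen("https://example.com/", context=context) as response:
    print(response.status)  # request completed over a verified TLS channel
```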


Cryptography

Data at rest

Data at rest focuses on information or data while it is stagnant, that is, in storage.

• Data must be encrypted for confidentiality

• Encryption must facilitate high performance and system speed

• Portability and vendor lock-in must be considered
Symmetric Cryptography

• A secret key is also called a symmetric key, since the same key is required for encryption and decryption, or for integrity value generation and integrity verification.

• Symmetric cryptography provides data integrity using message authentication codes (for example, a Hash-Based Message Authentication Code, or HMAC) or an encryption mode of operation that provides data integrity.
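As a concrete illustration of integrity with a shared symmetric key, here is a minimal HMAC sketch using only Python's standard library:

```python
import hashlib
import hmac
import secrets

# Sender and receiver share the same secret key (symmetric cryptography).
key = secrets.token_bytes(32)
message = b"transfer 100 units to account 42"

# The sender computes an HMAC tag over the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

# The receiver recomputes the tag and compares in constant time;
# any change to the message (or key) makes verification fail.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
print(ok)  # True for the untampered message
```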
Asymmetric Cryptography

Public and Private Key pair

• A pair of mathematically related keys used in asymmetric cryptography for authentication, digital signature, or key establishment.

• The private key is used by the owner of the key pair, is kept secret, and should be protected at all times.

• The public key should be publicly available.
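A minimal sign-and-verify sketch of a key pair, using the third-party `cryptography` package (an assumption; install it with `pip install cryptography`):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The owner generates the key pair; the private key stays with the owner.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # safe to publish

message = b"signed statement"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sign with the private key; anyone with the public key can verify.
signature = private_key.sign(message, pss, hashes.SHA256())
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```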


Key States

Key states: Generation, Activation, Suspension, Deactivation, Expiration, Revocation, Archival, and Destruction

Related key management functions: Key Recovery, Key Distribution, and Key Escrow
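One way to make these states concrete is a transition table that a key management service could enforce. The allowed transitions below are an illustrative policy assumption, not a mandate from the CBK:

```python
# Illustrative policy: which key states may follow which.
ALLOWED = {
    "generation":   {"activation"},
    "activation":   {"suspension", "deactivation", "expiration", "revocation"},
    "suspension":   {"activation", "deactivation", "revocation"},
    "deactivation": {"archival", "destruction"},
    "expiration":   {"archival", "destruction"},
    "revocation":   {"archival", "destruction"},
    "archival":     {"destruction"},
    "destruction":  set(),  # terminal state
}

def transition(current: str, target: str) -> str:
    """Move a key to a new state only if the policy allows it."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal key state transition: {current} -> {target}")
    return target

state = "generation"
for nxt in ("activation", "deactivation", "destruction"):
    state = transition(state, nxt)
print(state)  # destruction
```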


Key Management Generic Security Requirements

Verification: Parties performing key management functions are properly authenticated.

Source Authentication: Key management commands and associated data are protected from spoofing.

Integrity Protection: Key management commands and associated data are protected from undetected
and unauthorized modifications.

Confidentiality: Secret and private keys are protected from unauthorized disclosure.

Metadata Protection: All keys and metadata are protected from spoofing and unauthorized
modifications.

Encryption Key Protection: Encryption keys must be secured at the same level of control, or higher, as
they protect the data.
Approaches to Key Management

Major approaches to Key Management:

• Remote Key Management

• Client-Side Key Management


IAM and Access Control

Identity and access management ensures and enables the right individuals to access the right
systems and data at the right time under the right circumstances.

Key phases:

• Provisioning and deprovisioning

• Centralized directory services

• Privileged user management

• Authentication and access management
Provisioning and Deprovisioning

User provisioning standardizes, streamlines, and creates an efficient account creation process while
creating a consistent, measurable, traceable, and auditable framework for providing access to end users.

Deprovisioning is the process whereby a user account is disabled when the user no longer requires access
to the cloud-based services and resources.
This is not just due to a user leaving the organization but may also be due to a user changing a role,
function, or department.
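A minimal sketch of provisioning and deprovisioning as a single auditable code path, assuming a simple in-memory directory (all names are illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Account:
    user_id: str
    roles: set = field(default_factory=set)
    enabled: bool = True
    history: list = field(default_factory=list)  # the audit trail

directory: dict = {}

def provision(user_id: str, roles: set) -> Account:
    """Create every account through one consistent, traceable path."""
    account = Account(user_id, set(roles))
    account.history.append((datetime.now(timezone.utc), "provisioned", sorted(roles)))
    directory[user_id] = account
    return account

def deprovision(user_id: str, reason: str) -> None:
    """Disable rather than delete, so the audit trail survives."""
    account = directory[user_id]
    account.enabled = False
    account.roles.clear()
    account.history.append((datetime.now(timezone.utc), "deprovisioned", reason))

provision("alice", {"developer"})
deprovision("alice", "changed department")  # role change, not only departure
```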
Centralized Directory Services

A directory service stores, processes, and facilitates a structured repository of information coupled with unique identifiers and locations.

Examples:
• Lightweight Directory Access Protocol (LDAP)
• Microsoft Active Directory (AD)
Privileged User Management

Privileged user management focuses on the requirements to manage the lifecycle of user accounts with the highest privileges in the system.

A compromised privileged account allows an attacker to access resources and negatively affect the organization. To prevent an attacker from gaining access to a privileged account, segregation of duties (a risk-reduction technique) becomes a necessity.
Authorization and Access Management

Authorization: Determines the user's right to access a certain resource.

Access management: Focuses on the manner and way in which users can access relevant resources based on their credentials and the characteristics of their identity.
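A minimal sketch of the authorization decision as a role-to-permission lookup; the roles, permissions, and resource names are illustrative assumptions:

```python
# Illustrative role-to-permission map.
PERMISSIONS = {
    "auditor":  {"reports:read"},
    "engineer": {"vm:read", "vm:start", "vm:stop"},
    "admin":    {"vm:read", "vm:start", "vm:stop", "vm:delete", "reports:read"},
}

def authorize(roles: set, action: str) -> bool:
    """Authorization: does any of the user's roles grant this action?"""
    return any(action in PERMISSIONS.get(role, set()) for role in roles)

print(authorize({"engineer"}, "vm:start"))   # True
print(authorize({"engineer"}, "vm:delete"))  # False: least privilege holds
```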
Data Remanence

Data remanence is the residual representation of digital data that remains even after attempts have been
made to remove or erase the data.

Approaches: Overwriting, degaussing, encryption, cryptographic erasure, and media destruction


Data Remanence

Countermeasures for data remanence:

• Clearing: Removal of sensitive data from storage devices in such a way that it may not be reconstructed using normal system functions or data recovery utilities

• Purging or sanitizing: Removal of sensitive data from a storage device with the intent that the data cannot be reconstructed by any known technique

• Destruction: Destruction of the underlying storage media to counter data remanence
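Cryptographic erasure (crypto-shredding) makes data unrecoverable by destroying the key rather than the media. A minimal sketch using the third-party `cryptography` package (an assumption; install it with `pip install cryptography`):

```python
from cryptography.fernet import Fernet, InvalidToken

# Encrypt the data before it reaches media we may never be able to destroy.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"customer record")

# "Destroying" the data now only requires destroying the key; the
# ciphertext remaining on the media is unreadable without it.
key = None

try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)  # some other key
except InvalidToken:
    print("data is unrecoverable without the original key")
```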
Virtualization

Features of virtualization:

• It enables a single hardware equipment to run multiple operating system environments simultaneously,
which enhances processing power utilization.

• The hypervisor program controls the execution of the various guest operating systems and provides the
abstraction level between the guest and host environments.
Virtualization

Hosted hypervisor (Type 2): Hardware → Host OS → Hypervisor → VMs

Bare-metal hypervisor (Type 1): Hardware → Hypervisor → VMs


Virtualization

Why is virtualization required for Cloud Computing?

Server virtualization is required for Cloud Computing because virtualization offers:


• Scalability/Elastic computing

• Resource sharing and pooling

• Load balancing

• High availability

• Portability

• Cloning
Hypervisor Attack

Hyperjacking

Installation of a rogue hypervisor that can take complete control of a server. Examples include SubVirt, Blue Pill (a hypervisor rootkit using AMD Secure Virtual Machine), Vitriol (a hypervisor rootkit using Intel VT-x), and direct kernel structure manipulation.

VM escape

An attack in which the OS of a VM breaks out and interacts directly with the hypervisor, running arbitrary code on the host OS and allowing malicious VMs to take complete control of the host OS.
Common Threats

Top Cloud Computing threats:

• Data breaches
• Insufficient identity, credential, and access management
• Insecure interfaces and APIs
• System vulnerabilities
• Account hijacking
• Malicious insiders
• Advanced persistent threats
• Data loss
• Insufficient due diligence
• Abuse and nefarious use of cloud services
• Denial of service
• Shared technology vulnerabilities
Data Loss and Data Breach

Data loss refers to the loss of information by deletion, overwriting, corruption, or integrity issues related to information stored, processed, and transmitted within the cloud environment.

Data breach can happen due to insufficient controls over the identity and credential systems which are used for access. This dramatically increases the probability of a data or system breach.
Securing Application Programming Interfaces

Strategies for securing APIs:

• Validate parameters
• Apply explicit threat detection
• Turn on SSL
• Apply rigorous authentication and authorization
• Use proven solutions
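The first two strategies, parameter validation and rigorous authentication, fit in a few lines. A minimal sketch using only Python's standard library; the token store and parameter rules are illustrative assumptions:

```python
import hmac
import re

API_TOKENS = {"svc-reporting": "s3cr3t-token"}  # illustrative token store

def handle_request(client: str, token: str, params: dict) -> dict:
    # Rigorous authentication: constant-time comparison of the shared token.
    expected = API_TOKENS.get(client, "")
    if not hmac.compare_digest(token, expected):
        return {"status": 401, "error": "unauthenticated"}

    # Validate parameters against an allow-list pattern before use.
    account = str(params.get("account", ""))
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,32}", account):
        return {"status": 400, "error": "invalid 'account' parameter"}

    return {"status": 200, "account": account}

print(handle_request("svc-reporting", "s3cr3t-token", {"account": "acct_42"}))
print(handle_request("svc-reporting", "guess", {"account": "acct_42"}))
```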
Advanced Persistent Threats

An advanced persistent threat uses multiple phases to break into a network, avoid detection, and harvest valuable information over the long term.

Advanced persistent threats:
• Can be customized
• Can be organized
• Require funding, tools, time, and patience
Denial of Service

A denial-of-service attack renders an application or a system inaccessible or significantly degrades its service.
Malicious Insiders

According to CERT, malicious insider threats to an organization can come from "a current or former employee, contractor, or other business partner who has or had authorized access to an organization's network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization's information or information systems."
Abuse of Cloud Services

Attackers can abuse cloud services to execute dictionary attacks and DoS attacks, crack encryption passwords, and host illegal software and materials for widespread distribution.
Insufficient Due Diligence

Without proper and thorough evaluation of its systems, designs, and controls, an organization may unintentionally expose itself to security risks and vulnerabilities by moving to a cloud environment.
Shared Technology Vulnerabilities

CSPs share infrastructure, platforms, and applications among tenants, and potentially with other providers; this can include the underlying components of the infrastructure, resulting in shared threats and vulnerabilities.

A defense-in-depth strategy should include compute, storage, network, application, and user
security enforcement and monitoring.
Design Principles of Secure Cloud Computing
OWASP

OWASP Top 10 (new)

• Injection
• Broken Authentication
• Sensitive Data Exposure
• XML External Entities (XXE)
• Broken Access Control
• Security Misconfiguration
• Cross-Site Scripting (XSS)
• Insecure Deserialization
• Using Components with Known Vulnerabilities
• Insufficient Logging and Monitoring
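Injection heads the list. The standard defense is parameterized queries, sketched below with Python's built-in sqlite3 module (table and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: user input is concatenated into the SQL string.
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())  # [('admin',)] -- data it should not return

# Safe: a parameterized query treats the input strictly as data.
print(conn.execute("SELECT role FROM users WHERE name = ?",
                   (user_input,)).fetchall())  # [] -- no match, no injection
```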
Payment Card Industry Data Security Standard

Visa, MasterCard, and American Express established PCI DSS as a security standard with which all organizations or merchants that accept, transmit, or store cardholder data, regardless of the size or number of transactions, must comply.
PCI DSS

Requirement

1 Install and maintain a firewall configuration to protect cardholder data

2 Do not use vendor-supplied defaults for passwords or other security parameters

3 Protect stored cardholder data

4 Encrypt transmission of cardholder data across open, public networks

5 Protect all systems against malware and regularly update AV software

6 Develop and maintain secure systems and applications

7 Restrict access to cardholder data

8 Identify and authenticate access to system components

9 Restrict physical access to cardholder data

10 Track and monitor all access to network resources and cardholder data

11 Regularly test security systems and processes

12 Maintain a policy that addresses information security for all personnel


Cost-Benefit Analysis

Key factors influencing cost-benefit analysis:

• Resource pooling
• Shift from CapEx to OpEx
• Factors like time and efficiency
• Depreciation
• Reduction in maintenance and configuration time
• Shift in business policies
• Utility cost
• Software and licensing cost
• Pay-per-usage
Evaluate Cloud Service Providers
Certification against Criteria

Different types of certifications:

• ISO/IEC 27001:2013
• ISO/IEC 27002:2013
• ISO/IEC 27017:2015
• SOC 1/SOC 2/SOC 3
• NIST SP 800-53
• PCI DSS
ISO 27001:2013

In September 2013, ISO 27001 was updated to ISO 27001:2013.

Following are the 14 domains:

• Information Security Policies
• Organization of Information Security
• Human Resources Security
• Asset Management
• Access Control
• Cryptography
• Physical and Environmental Security
• Operations Security
• Communications Security
• System Acquisition, Development, and Maintenance
• Supplier Relationships
• Information Security Incident Management
• Information Security Business Continuity Management
• Compliance
ISO/IEC 27002:2013

ISO/IEC 27002:2013 gives guidelines for organizational information security standards and management practices, including the selection, implementation, and management of controls, taking the organization's information security risk environment(s) into consideration.

It is designed to be used by organizations that intend to:

• Select controls within the process of implementing an Information Security Management System based on ISO/IEC 27001

• Implement commonly accepted information security controls

• Develop their own information security management guidelines

ISO/IEC 27017:2015

ISO/IEC 27017:2015 gives guidelines for information security controls applicable to the provision and use of cloud services by providing:

• Additional implementation guidance for relevant controls specified in ISO/IEC 27002

• Additional controls with implementation guidance that specifically relate to cloud services

This international standard provides controls and implementation guidance for both cloud service providers and cloud service customers.
SOC 1/SOC 2/SOC 3

For years, Statement on Auditing Standards 70 was seen as the de facto standard for data
center customers to obtain independent assurance that their data center service provider
has effective internal controls for managing the design, implementation, and execution of
customer information.

The Statement on Auditing Standards 70 (SAS 70) was replaced by Service Organization
Control (SOC) Type 1 and Type 2 reports in 2011.
Information Technology Security Evaluation

The Common Criteria (CC) is an international set of guidelines and specifications (ISO/IEC 15408) developed for evaluating information security products, with the view to ensure that they meet an agreed-upon security standard for government entities and agencies.

• Protection profiles: Define a standard set of security requirements for a specific type of product, such as a firewall, IDS, or Unified Threat Management (UTM)

• Evaluation assurance levels (EALs): Define how thoroughly the product is tested. EALs are rated using a sliding scale from 1 to 7, with 1 being the lowest level of evaluation and 7 being the highest
Information Technology Security Evaluation

The seven EALs are as follows:

• EAL1: Functionally tested
• EAL2: Structurally tested
• EAL3: Methodically tested and checked
• EAL4: Methodically designed, tested, and reviewed
• EAL5: Semi-formally designed and tested
• EAL6: Semi-formally verified design and tested
• EAL7: Formally verified design and tested
FIPS 140-2

Federal Information Processing Standard (FIPS) 140 Publication Series was issued by NIST to
coordinate the requirements and standards for cryptography modules covering both
hardware and software components for cloud and traditional computing environments.
FIPS Levels

Security Level 1

• Level 1 is the lowest level of security.

• In level 1, the basic cryptographic module requirements are specified for at least
one approved security function or approved algorithm.

• An example of a Level 1 cryptographic module is a personal computer encryption board.


FIPS Levels

Security Level 2

• Level 2 enhances the physical security mechanisms required by Level 1.

• This level requires the capability to show evidence of tampering.

• Level 2 requires tamper-evident seals or pick-resistant locks on perimeter and internal covers to prevent unauthorized physical access to encryption keys.
FIPS Levels

Security Level 3

• Level 3 builds on Levels 1 and 2 to prevent an intruder from gaining access to information and data held within the cryptographic module.

• At this level, physical security controls are used to detect access attempts, allowing an appropriate response to protect the cryptographic module.
FIPS Levels

Security Level 4

• Level 4 represents the highest rating.


• It provides complete protection around the cryptographic module with the intent of detecting and
responding to all unauthorized attempts at physical access.
• Upon detection of unauthorized access, all critical security parameters (also known as CSPs, not to be confused with cloud service providers) are immediately zeroized.
• Security Level 4 undergoes rigid testing to ensure its adequacy, completeness, and effectiveness.
Decision Flow for Cloud System Design

Functional, compliance, economic, performance, and security objectives drive three sets of design decisions:

• Cloud model: deployment model, delivery model, and cloud type

• Cloud architecture: virtual machines, storage, connectivity, functional blocks, APIs, and user interfaces

• Security architecture: authentication, access control, cryptography, auditing, secure connections, and application security
Cloud Transition Scenario

Due to competitive pressure, XYZ Corp is hoping to better leverage the economic and scalable nature of Cloud Computing. These pressures have driven XYZ Corp toward the consideration of a hybrid cloud model that consists of enterprise private and public cloud use.
Although security risk has driven many of the conversations, a risk management approach has allowed the company to separate its data assets into two segments: sensitive and non-sensitive.
Cloud Transition Scenario

IT governance guidelines must now be applied across the entire cloud platform and
infrastructure security environment. This also affects infrastructure operational options.
XYZ Corp must now apply cloud architectural concepts and design requirements that would best
align with corporate business and security goals.

As a CCSP, you have several issues to address to guide XYZ Corp through its planned transition
to a cloud architecture.
Cloud Transition Scenario

Which cloud deployment model(s) would need to be assessed to select the appropriate ones for the
enterprise architecture?

Based on the choice(s) made, additional issues may become apparent, such as these:
1. Who will the audiences be?

2. What types of data will they be using and storing?

3. How will secure access to the cloud service be enabled, audited, managed, and removed?

4. When and where will access be granted to the cloud and under what constraints (time, location, platform, and
so on)?
Cloud Transition Scenario

Which cloud service model(s) would need to be chosen for the enterprise architecture?

Based on the choice(s) made, additional issues may become apparent, such as these:
1. Who will the audiences be?

2. What types of data will they be using and storing?

3. How will secure access to the cloud service be enabled, audited, managed, and removed?

4. When and where will access be granted to the cloud service and under what constraints (time, location,
platform, and so on)?
Key Takeaways
You are now able to:

• Explain the various roles, characteristics, and technologies related to Cloud Computing

• Describe concepts related to Cloud Computing activities, capabilities, categories, models, and cross-cutting aspects

• Identify, describe, and define the design principles necessary for secure Cloud Computing in different types of cloud categories

• Identify national, international, and industry-specific criteria for certifying cloud service providers

• Identify criteria specific to system and subsystem product certification
Certified Cloud Security Professional (CCSP®)



Cloud Data Security



Learning Objectives

By the end of this domain, you will be able to:

• Explain the cloud data life cycle based on the Cloud Security Alliance (CSA) guidance

• Describe the design and implementation of cloud data storage architectures on the basis of storage types, threats, and available technologies

• Identify relevant jurisdictional data protection and data security strategies for securing cloud data

• Define Digital Rights Management (DRM) with regard to objectives and the available tools

• Describe various data events and know how to design and implement processes for auditability, traceability, and accountability
Cloud Data Life Cycle
Cloud Data Life Cycle

Life cycle phases

• Create: New digital content is generated or acquired, or existing content is altered or updated.

• Store: Digital data is committed to a storage repository, typically simultaneously with creation.

• Use: Data is viewed, processed, or used in other activities.

• Share: Data is exchanged among users, customers, and partners.

• Archive: Data leaves active status and enters long-term storage.

• Destroy: Data is permanently destroyed using physical or digital means.
Cloud Data Life Cycle

Create
Data created remotely:
Data created by the user should be encrypted before uploading it to the cloud to protect against
obvious vulnerabilities, including man-in-the-middle attacks and insider threats at the cloud data
center.
Data created within the cloud:
Data created within the cloud via remote manipulation should be encrypted upon creation to obviate
unnecessary access or viewing by data center personnel.

The Create phase involves activities such as categorization and classification; labeling, tagging, and marking; and assigning metadata.
Cloud Data Life Cycle

Store
• Controls such as encryption, access policy, monitoring, logging, and backups should be
implemented to avoid data threats.

• Content is vulnerable to attackers if Access Control Lists (ACLs) are not implemented well, files
are not scanned for threats, or files are classified incorrectly.
Cloud Data Life Cycle

Use
• Controls such as Data Loss Prevention (DLP), Information Rights Management (IRM), and data and file
access monitors should be implemented to audit data access and prevent unauthorized access.

• Data in use is most vulnerable, because it might be transported to and processed at insecure locations
such as workstations.
Cloud Data Life Cycle

Share
• Not all data should be shared, and not all sharing presents a threat.

• It becomes difficult to maintain security for shared data which is no longer present at the organization.

• Technologies such as DLP are used to detect unauthorized sharing, and IRM technologies are used to
maintain control over the information.
Cloud Data Life Cycle

Export restrictions

• International Traffic in Arms Regulations (ITAR) enforced by the State Department of the United States
prohibits defense related exports; this includes technical data which is protected by cryptography
systems.

• Export Administration Regulations (EAR) enforced by the Department of Commerce of the United States
prohibits export of dual-use items (technologies that could be used for both commercial and military
purposes) which can be technical data and nonphysical entities.
Cloud Data Life Cycle

Import restrictions

• Cryptography (various): Countries have restrictions on importing cryptosystems or material that has
been encrypted. It is the security professional’s responsibility to know and understand local mandates
while doing business with a nation which has crypto restrictions.

• Wassenaar Arrangement: A group of 42 member countries have agreed to mutually inform each other about conventional military shipments to non-member countries. It is not a treaty, and therefore not legally binding, but an organization must notify its respective government in order to stay in compliance.
Cloud Data Life Cycle

Archive

• Provides ability to retrieve and recover data


• Archiving must meet the regulatory requirements

Location:
• Where is the data being stored?
• Which environmental factors pose risk to that location?
• Which jurisdictional aspects are applicable?
• How far is the archive location?
• Is it feasible to access the data during contingency operations, for
instance, during a natural disaster?
• Is it far enough to be safe from events that impact the production
environment but close enough to reach that data during those events?
Cloud Data Life Cycle

Archive

Format:

Format concerns how the data is stored on a physical medium, such as a tape backup or magnetic storage.

Following are the concerns related to the format and the medium in which it is
stored:

• Is the medium highly portable and in need of additional security controls for
theft?

• Will that medium be affected by environmental factors?

• How long do we expect to retain this data?

• Will it still be in the same format which the production hardware can access
when needed?
Cloud Data Life Cycle

Archive

Staff:

• Are personnel at the storage location employed by the organization?

• If personnel are not employed by the organization, does the contractor implement a
personnel control suite sufficient for background checks, reliance checks, and monitoring?
Cloud Data Life Cycle

Archive

Procedure:

• How is data recovered when needed?

• How is it ported to the archive on a regular basis?

• How often are we doing full backups?

• What is the frequency of the incremental or differential backups?


Cloud Data Life Cycle

Destroy

• Logically erasing pointers, or permanently destroying data using physical or digital means

• Data destruction must comply with regulation and depends on the type of cloud being used (infrastructure as a service [IaaS] versus software as a service [SaaS]) and the classification of the data

• Data remanence must be dealt with during data destruction
Business Scenario

Recently, a software repository company was hacked and bankrupted overnight. Attackers got hold of the cloud instance through compromised credentials. When they were discovered, they not only wiped out all production data to cover their tracks, but also deleted all of the backup data, bankrupting the company overnight due to the loss of all intangible assets as well as the complete revocation of any trust or reputation the company might have had prior to the breach.

• Question: What was the basic mistake that led to the company losing all of its intangible assets?

• Answer: In this instance, the mistake was placing their cloud backups in the same cloud as their
production data.
Key Data Functions

• Access: Views or accesses the data, including copying, file transfers, and other exchanges of information.

• Process: Performs a transaction on the data: updates it, or uses it in a business processing transaction.

• Store: Stores the data in files and databases.

Mapping of data functions to life cycle phases:

          Create   Store   Use   Share   Archive   Destroy
Access      X        X      X      X        X         X
Process     X               X
Store                X                      X
Cloud Data Storage Architectures
Storage Types

Cloud service model Storage types

IaaS Volume

Object

PaaS Structured

Unstructured

SaaS Information storage and management

Content and file storage

Ephemeral storage

Content delivery network (CDN)


Storage Types

Volume storage (Block storage):

Volume storage is a virtual hard drive which is allocated by the cloud provider and is attached to the
virtual host.
Data is stored in volumes, also known as blocks. An arbitrary identifier is assigned to each block by
which it is stored and retrieved.
The operating system sees and interacts with the drive in the same way as it would in the traditional
server model. The drive can be formatted and maintained as a file system in the traditional sense and
utilized as such.
Storage Types

Object:

Object storage operates as an API or a web service call.


Rather than being located in a file tree structure and accessible as a traditional hard drive, files are
stored as objects in an independent system and a key value is assigned for reference and retrieval
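For illustration, a sketch of object storage access through an API, using the third-party boto3 package for Amazon S3 (assumptions: boto3 is installed, AWS credentials are configured, and the bucket name is hypothetical):

```python
import boto3  # third-party AWS SDK for Python

s3 = boto3.client("s3")

# Objects are written and read by key through web service calls; there is
# no file tree to browse and no block device to mount.
s3.put_object(Bucket="example-bucket",
              Key="reports/2019/q1.csv",
              Body=b"region,revenue\nus-east,100\n")

response = s3.get_object(Bucket="example-bucket", Key="reports/2019/q1.csv")
print(response["Body"].read().decode())
```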
Storage Types

Structured:

Structured data is organized and categorized data that can easily be placed within a database or other storage system with a set of rules and a normalized design.
This data construct allows the application developers to easily import data from other data
sources or nonproduction environments and makes it ready for use in production systems.
The data is typically organized and optimized for searching technologies so that it can be used
without the need for customization or tweaking.
Storage Types

Unstructured:

Unstructured data is information that cannot easily be used in a rigid and formatted database structure. This can be due to the type or size of the files.
Files included in this category are multimedia files (videos and audio), photos, and files produced by
word processing and Microsoft Office products, website files, or anything else that will not fit within a
database structure.
Storage Types

Information storage and management:

This is the classic form of storing and managing the data within databases. This data is used and
maintained by applications.

Data is either generated by the application or imported via the application through interfaces and
loaded into the database.
Storage Types

Content and file storage:


This is also known as “File-Level storage” or “File-Based storage.”
SaaS application allows the data which is not part of an underlying database to be stored in content and
file storage.

The files and content that are held by the application in another means of storage can be made accessible
to the users.
Storage Types

Ephemeral storage:

This type of storage is relevant for IaaS instances and exists only as long as its instance is up.
It is typically used for swap-files and other temporary storage needs and is terminated with its instance.
Storage Types

Content delivery network:

A content delivery network (CDN) is a form of data caching: usually a geographically distributed network of proxy servers that provide copies of data commonly requested by users.
Content is stored in object storage and then distributed to multiple geographically dispersed nodes in order to improve delivery speed and reduce latency for consumers.
Threats to Storage Types

1. Unauthorized usage
2. Unauthorized access
3. Regulatory noncompliance
4. DoS and DDoS attacks on storage
5. Corruption, modification, and destruction
6. Data leakage and breaches
7. Theft or accidental loss of media
8. Improper treatment or sanitization after use
9. Malware attack
Real-World Scenario: Password Storage

In June 2012, Last.fm, a music-centered social media platform, admitted to a data breach and
advised all users to change their passwords after hackers posted Last.fm password hashes to a
password cracking forum.
Data breach notification service LeakedSource obtained the data of more than 43 million user accounts, including weakly protected passwords, 96 percent of which were cracked within two hours.
Since then, Last.fm has improved how passwords are stored, after admitting that it had been using the MD5 algorithm with no salt.

• Question: What is the best practice for storing passwords?

• Answer: Store passwords as salted hashes using a strong, slow algorithm, and avoid weak algorithms such as MD5 and SHA-1.
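As a minimal sketch of that practice, the following uses Python's standard library to derive a salted PBKDF2 hash; the iteration count here is illustrative, not prescriptive:

import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                                   # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest                                     # store the pair; never the plaintext

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)           # constant-time comparison

Because every password gets its own random salt, identical passwords produce different digests, which defeats the precomputed-table attacks that broke Last.fm's unsalted MD5 hashes.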
Data Security Strategies
Encryption Architecture

Encryption is necessary to achieve data security. The encryption architecture has three components:

1. Data: the data object or objects that need to be encrypted.

2. Encryption engine: the component that performs the encryption operation.

3. Encryption keys: all encryption is based on keys. Safeguarding the keys is a crucial activity, necessary for ensuring the ongoing integrity of the encryption implementation and its algorithms.
Encryption

Sample use cases for encryption:

• Encryption will be used for data which moves in and out of the cloud for processing, archiving, or sharing.
Techniques such as SSL/TLS or VPN are used to avoid information exposure or data leakage while in motion.

• It is used for protecting data at rest such as file storage, database information, application components,
archiving, and backup applications.

• This is done for files or objects that must be protected when stored, used, or shared in the cloud.

• It is useful when complying with regulations such as HIPAA and PCI DSS, which in turn require relevant protection
of data traversing untrusted networks and protection of certain data types.

• Encryption provides protection from third-party access via subpoena or lawful interception.

• It is used for creating enhanced mechanisms for logical separation between different customers’ data in the
cloud.

• It helps in logical destruction of data when physical destruction is not feasible or technically impossible.
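As one hedged illustration of the data-at-rest use case, the third-party Python cryptography package provides an authenticated symmetric cipher (Fernet); in practice the key would be held in a key management system rather than alongside the data:

from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in practice, kept in a KMS, not with the data
engine = Fernet(key)

ciphertext = engine.encrypt(b"cardholder record")   # what gets written to cloud storage
plaintext = engine.decrypt(ciphertext)              # possible only with the same key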
Encryption Challenges

Following are the encryption challenges:

• The integrity of encryption is heavily dependent on control and management of the relevant encryption
keys, including how they are secured.

• It is challenging to implement encryption effectively when a CSP is required to process the encrypted data.

• Encryption and key management becomes challenging as the data in the cloud is highly portable, that is, it
replicates, is copied, and is backed up extensively.

• Multitenant cloud environments and the shared use of physical hardware pose challenges for the
safeguarding of keys in volatile memory such as random access memory (RAM) caches.

• Secure hardware for protecting keys may not exist in cloud environments, and software-based key storage is often more vulnerable.
Encryption Challenges

• Encryption can negatively affect high-performance data processing mechanisms such as data warehouses and data cubes.

• The nature of cloud environments typically requires you to manage more keys than traditional environments (access keys, API keys, encryption keys, and shared keys).

• Encryption affects data availability. Encryption complicates data availability controls such as backups, disaster recovery planning (DRP), and colocations, because expanding encryption into these areas increases the likelihood that keys may become compromised. In addition, if encryption is applied incorrectly within any of these areas, the data may become inaccessible when needed.

• Encryption does not solve data integrity threats. Data can be encrypted and yet be subject to tampering or file replacement attacks. In this case, supplementary cryptographic controls such as digital signatures need to be applied, along with nonrepudiation for transaction-based activities.
Data Encryption in IaaS

Characteristics of basic storage level encryption of data in IaaS:

• The engine encrypts data as it is written to the storage and decrypts it as it leaves the storage.

• The encryption engine is located on the storage management level, with the keys usually held by the
cloud service provider (CSP).

• The data encryption works on object storage as well as volume storage.

• The encryption of data in IaaS provides protection from hardware theft or loss.

• The data encryption in IaaS does not protect from CSP administrator access or any unauthorized
access coming from the layers above the storage.
Data Encryption in IaaS

Volume storage encryption:

Volume storage encryption encrypts the data which resides on volume storage typically
through an encrypted container, which is mapped as a folder or volume.

Instance-based encryption allows access to data only through the volume OS and therefore provides protection against the following:

Physical loss or theft

External administrator(s) accessing the storage


Snapshots and storage-level backups being taken and removed from the system
Data Encryption in IaaS

Object storage encryption:

File-level encryption:

• The encryption engine is commonly implemented at the client side and preserves the format
of the original file.

• Examples for file-level encryption include Information Rights Management (IRM) and Digital
Rights Management (DRM) solutions.
Data Encryption in IaaS

Object storage encryption:

Application-level encryption:

• The encryption engine resides in the application that is utilizing the object storage.

• It can be integrated into the application component or implemented in a proxy that is responsible for encrypting the data before it goes to the cloud.
Database Encryption

File-level encryption:

• It encrypts the volume or folder of the database with the help of an encryption engine whose keys reside on the instances attached to the volume.

• External file system encryption protects the data from media theft, lost backups, and
external attacks but does not protect against attacks with access to the application layer, the
instance OS, or the database itself.
Database Encryption

Application-level encryption:

It encrypts the data with the help of an encryption engine that resides in the application utilizing the database.
Database Encryption

Transparent encryption:
• Transparent encryption is capable of encrypting the entire database or specific portions, such as
tables.

• The encryption engine resides within the database and is transparent to the application.

• Transparent encryption keys usually reside within the instance. Their processing and management
can also be offloaded to an external Key Management Service (KMS).

• Transparent encryption provides effective protection from media theft, backup system intrusions,
and certain database and application-level attacks.
Key Management and Common Challenges

Access to the keys:

Leading practices coupled with regulatory requirements may set specific criteria for key access, along with
restricting or not permitting access to keys by CSP employees or personnel.

Key storage:

Secure storage for the keys is essential to safeguard the data. In traditional in-house environments, keys
were able to be stored in secure dedicated hardware. This may not always be possible in cloud
environments.

Backup and replication:

The nature of the cloud results in data backups and replication across a number of different formats. This
can affect the ability for long- and short-term key management.
Key Management Considerations

• Keys should be generated using a trusted, cryptographically secure random number generator.

• During the lifecycle, cryptographic keys should never be transmitted in an untrusted environment; they should always remain in a trusted environment.

• When considering key escrow or key management "as a service," plan carefully to take into account all relevant laws, regulations, and jurisdictional requirements.

• When weighing confidentiality threats against availability threats, remember that loss of access to the encryption keys results in loss of access to the data.

• Wherever possible, key management functions should be conducted separately from the CSP in order to enforce separation of duties and force collusion to occur if unauthorized data access is attempted.
Key Storage in the Cloud

Internally managed:
• The keys are stored on the virtual machine or application component that is also acting as the encryption engine.
• Typically used in storage-level encryption, internal database encryption, or backup application encryption.
• Helpful for mitigating the risks associated with lost media.

Externally managed:
• The keys are maintained separately from the encryption engine and data.
• They can be on the same cloud platform, internally within the organization, or on a different cloud.

Managed by a third party:
• A trusted third party provides a key escrow service.
• Key management providers use specifically developed secure infrastructure and integration services for key management.
Keys

Data Encryption Key (DEK):

The DEK is used to encrypt the data.

Key Encryption Key (KEK):

DEKs themselves shouldn't be stored in the clear; they are encrypted with a KEK (key encrypting key).

The DEK and the KEK must be stored on separate physical systems so that if one is compromised, the other is not.
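A minimal sketch of this DEK/KEK pattern (often called envelope encryption), again using the cryptography package's Fernet cipher; in practice the KEK would live in a separate KMS or HSM, not in the same process:

from cryptography.fernet import Fernet

kek = Fernet.generate_key()        # key encryption key, held on a separate system
dek = Fernet.generate_key()        # data encryption key

ciphertext = Fernet(dek).encrypt(b"sensitive record")  # data encrypted with the DEK
wrapped_dek = Fernet(kek).encrypt(dek)                 # DEK stored only in wrapped form

# To read: unwrap the DEK with the KEK, then decrypt the data with the DEK
plaintext = Fernet(Fernet(kek).decrypt(wrapped_dek)).decrypt(ciphertext)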
Data Security Strategies

Data masking:

Data masking, or obfuscation, is the process of hiding, replacing, or omitting sensitive information from a specific data set.
Data Security Strategies

Data masking approaches:

• Random substitution: Replacing (or appending) the value with a random value.

• Algorithmic substitution: Replacing (or appending) the value with an algorithm-generated value. This typically allows for two-way substitution.

• Shuffle: Using different entries from within the same data set to represent the data. This has the obvious drawback of using actual production data.

• Masking: Using specific characters to hide certain parts of the data. It usually applies to credit card data formats.

• Deletion: Simply using a null value.
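A minimal sketch of the masking approach applied to a credit card number, keeping only the last four digits visible (a hypothetical helper, shown for illustration):

def mask_pan(pan: str) -> str:
    digits = pan.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]    # hide all but the last four digits

print(mask_pan("4111 1111 1111 1234"))              # ************1234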
Data Security Strategies

Data anonymization

Data anonymization is a type of information sanitization and its intent is privacy protection.

Data generally has direct and indirect identifiers: direct identifiers represent private data on their own, whereas indirect identifiers are attributes such as demographic and location data that, when used together, could produce the exact identity of an individual.

Data anonymization is the process of removing the indirect identifiers, either by encrypting them or removing personally identifiable information from data sets, so that the people whom the data describes remain anonymous.
Data Security Strategies

Tokenization

Tokenization is the process of substituting a sensitive data element with a nonsensitive equivalent, referred
to as a token.

Tokenization is the practice of having two distinct databases; one with the live and actual sensitive data and
another with nonrepresentational tokens mapped to each piece of that data.

The token is usually a collection of random values with the shape and form of the original data placeholder
which can be mapped back to the original data by the tokenization application or solution.
Data Security Strategies

Tokenization

Tokenization is able to assist with each of these:

• Complying with regulations or laws

• Reducing the cost of compliance

• Mitigating risks of storing sensitive data and reducing attack vectors on that data
Data Security Strategies

Tokenization steps:

1. An application collects or generates a piece of sensitive data.

2. The data is sent to the tokenization server; it is not stored locally.

3. The tokenization server generates the token. The sensitive data and the token are stored in the token database.

4. The tokenization server returns the token to the application.

5. The application stores the token rather than the original data.

6. When the sensitive data is needed, an authorized application or user can request it from the tokenization server.
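A minimal sketch of that flow, with an in-memory dictionary standing in for the token database; a real tokenization server would enforce authentication and run on separate infrastructure:

import secrets

token_db: dict[str, str] = {}                 # stands in for the token database

def tokenize(sensitive: str) -> str:
    token = "tok_" + secrets.token_hex(8)     # random value, no relation to the data
    token_db[token] = sensitive               # only the tokenization server keeps this mapping
    return token                              # the application stores the token instead

def detokenize(token: str) -> str:            # restricted to authorized callers
    return token_db[token]

t = tokenize("4111-1111-1111-1234")
print(t, "->", detokenize(t))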
Data Security Strategies

Homomorphic encryption

Homomorphic encryption is a form of encryption that allows computations to be carried out on ciphertext,
thus generating an encrypted result which on decryption matches the result of operations performed on the
plaintext.

Note: Homomorphic encryption is a developing area and does not represent a mature offering for most use
cases.
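For a feel of the idea, the third-party phe library implements the Paillier scheme, which is partially homomorphic (it supports addition on ciphertexts); a minimal sketch, assuming phe is installed:

from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()
enc_a = public_key.encrypt(15)
enc_b = public_key.encrypt(27)

enc_sum = enc_a + enc_b                      # addition performed on ciphertexts
assert private_key.decrypt(enc_sum) == 42    # matches the plaintext operation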
Data Security Strategies

Bit splitting

Bit splitting involves encrypting data, then splitting the encrypted data into smaller data units and distributing
those smaller units to different storage locations, and then further encrypting the data at its new location.

With this process, the data is protected from security breaches, because even if an intruder is able to retrieve
and decrypt one data unit, the information would be useless unless it can be combined with decrypted data
units from the other locations.
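A toy sketch of the splitting step using two-way XOR shares; commercial products use more elaborate information-dispersal algorithms, and the input here is assumed to be already encrypted:

import secrets

def split(ciphertext: bytes) -> tuple[bytes, bytes]:
    pad = secrets.token_bytes(len(ciphertext))              # random share
    other = bytes(a ^ b for a, b in zip(ciphertext, pad))   # complementary share
    return pad, other                                       # store in different locations

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

s1, s2 = split(b"already-encrypted data unit")
assert combine(s1, s2) == b"already-encrypted data unit"    # either share alone is useless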
Data Security Strategies

Bit splitting

Benefits:
• Data security is enhanced due to the use of stronger confidentiality mechanisms.
• Bit splitting between different geographies and jurisdictions makes it harder to gain access to the complete data set via a subpoena or other legal process.
• It can be scalable, can be incorporated into secured cloud storage API technologies, and can reduce the risk
of vendor lock-in.
Data Security Strategies

Bit splitting:

Challenges:

• Processing and reprocessing the information to encrypt and decrypt the bits is a CPU intensive
activity.

• The whole data set may not be stored and processed within the same geography by the CSP, which leads to the need to ensure data security on the wire as part of the security architecture for the system.

• Storage requirements and costs are usually higher with a bit splitting system.

• Bit splitting can generate availability risks because all parts of the data may not be available while
decrypting the information.
Real-World Scenario

During an audit of a regulated company that had utilized most of the encryption techniques as part of its PCI compliance efforts, a surprising finding was discovered.
It was found that customer representatives, who could not see Social Security numbers due to
masking, were in fact exposed to full Social Security numbers during phone call conversations with
customers. The customers, even though never asked, would sometimes just blurt out their entire
Social Security number.
What no one was aware of was that the annoying message you get at the start of customer service calls
that says, “This call may be recorded for training purposes” resulted in those blurted-out messages being
recorded.

The messages that were being recorded were also being stored in the cloud and were not encrypted.
PCI only has standards for encrypting cardholder data, and no one ever suspected that the voice
messages that were recorded might expose such data.

•Question: What would be a possible solution to this issue?


•Answer: The solution was a simple matter of adding encryption to the storage for the messages.
Data Security Strategies

Data loss/leakage prevention (DLP):

DLP describes the controls which are put in place by an organization to ensure that certain types of data
(structured and unstructured) remain under organizational controls, in line with policies, standards, and
procedures.
Data Security Strategies

DLP architecture:

Data in motion (DIM): Sometimes referred to as network-based or gateway DLP. In this topology, the monitoring engine is deployed near the organizational gateway to monitor outgoing protocols such as hypertext transfer protocol (HTTP), hypertext transfer protocol secure (HTTPS), simple mail transfer protocol (SMTP), and file transfer protocol (FTP).

Data at rest (DAR): Sometimes referred to as storage-based DLP. In this topology, the DLP engine is installed where the data is at rest, usually covering one or more storage subsystems as well as file and application servers.

Data in use (DIU): Sometimes referred to as client-based or endpoint-based DLP. The DLP application is installed on users' workstations and endpoint devices.
Data Security Strategies

Cloud-based DLP considerations:

• Administrative access for enterprise data in the cloud is tricky.

• Data in the cloud tends to move and replicate.

• DLP technology affects overall network performance.
Data Security Strategies

Leading practices:

Leading practices start with the data discovery and classification process. Mature data discovery and classification processes in cloud deployments add value to the data security process.

Points to be addressed in cloud DLP policy:


• What kind of data is permitted to be stored in the cloud?
• Where can the data be stored (which jurisdictions)?
• In which way should the data be stored?
• What kind of data access is permitted under cloud DLP policy?
• Which devices and what networks are permitted under cloud DLP policy?
• Which tunnels are permitted under cloud DLP policy?
• Which applications are permitted under cloud DLP policy?
• What are the types of conditions under which the data is allowed to leave the cloud?
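A minimal sketch of the content-inspection side of such a policy: scanning outbound text for patterns that the policy says may not leave the cloud. The patterns below are illustrative only:

import re

POLICY_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def violations(text: str) -> list[str]:
    # Return the names of every policy pattern found in the text
    return [name for name, rx in POLICY_PATTERNS.items() if rx.search(text)]

print(violations("Customer SSN 123-45-6789 on file"))   # ['ssn']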
Real-World Scenario: DLP Solutions

A well-known, nationally recognized Cancer Research and Treatment Center in the United States was initially
seeking to update and align its information security posture to comply with healthcare regulations and
policies that safeguard patient privacy and data. Like many healthcare institutions, the Cancer Center had
made significant investments in cybersecurity to protect its perimeter and was committed to improving
security in the wake of growing threats from both inside and outside the organization.

Because of the “open” nature of the affiliated university’s network, the Cancer Center’s security management
team decided to invest in DLP technologies for safeguarding both structured and unstructured data which
included standard patient data and intellectual property in the form of genomic research.

Within the first few days of implementing the DLP solution, the client's CIT identified and flagged the behavior of one specific doctor involved in genomic research at the Center's campus. Whether by accident or malicious intent, it appeared the doctor was transferring proprietary research information to a university outside the United States, a clear violation of policy.
Real-World Scenario: DLP Solutions

Question: What is false positive in DLP solutions?


Answer: A false positive happens when data that is not sensitive data is mistakenly identified
as sensitive data.
Data Discovery and Classification Technology
Data Discovery

Data discovery:

It is a business intelligence operation and an interactive, user-driven process in which data is visually represented and analyzed to look for patterns or specific attributes, rather than presented as static reporting.

Data discovery enables people to use intuition to find meaningful and important information in data.

It is an iterative process, where initial findings refine the parameters and representation in order to dive
deeper into the data and continue to scope it toward the desired objective.
Data Discovery

Data discovery approaches:

Big data: On big data projects, data discovery is important and challenging. Many traditional methods of data discovery fail with big data, because the volume of data meant for discovery is large and the diversity of sources and formats presents many challenges. When big data initiatives involve rapid profiling of high-velocity data, profiling becomes harder and less feasible with existing toolsets.

Real-time analytics: The ongoing shift toward real-time analytics has created a new class of use cases for
data discovery. These use cases are valuable but require data discovery tools that are faster, more automated,
and more adaptive.

Agile analytics and agile business intelligence: Data scientists and business intelligence teams are adopting
more agile, iterative methods of turning data into business value. They perform data discovery processes
more often and in more diverse ways, for example, when profiling new data sets for integration, seeking
answers to new questions emerging this week based on last week’s new analysis, or finding alerts about
emerging trends that may warrant new analysis work streams.
Data Discovery

Data discovery techniques:

Metadata: Metadata is information about the data. All relational databases store metadata that describes tables and column attributes.

Labels: Data elements are grouped with a tag that describes the data. Labels can be applied when the data is created, or tags can be added over time to provide additional information and references that describe the data.

Content analysis: In this form of analysis, the data itself is analyzed by employing pattern matching, hashing, statistical, lexical, or other forms of probability analysis.
Example: a Luhn check to verify a credit card number (see the sketch below).
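A minimal sketch of that Luhn check in Python, as used by content analysis to decide whether a digit string could plausibly be a card number:

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))   # True: passes the Luhn check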
Data Discovery

Challenges with data discovery in the cloud:

Identifying where your data is: Not knowing where the data is, where it is going, and where it will be at
any given moment with assurance presents significant security concerns for enterprise data and the CIA
that is required to be provided by the CCSP.

Accessing the data: Not all data stored in the cloud can be accessed easily. Sometimes customers do not have the necessary administrative rights to access their data on demand, or long-term data is visible to the customer but not available for download in acceptable formats for use offline.

Performing preservation and maintenance: Long-term preservation of data is possible and can be
managed via an SLA with a provider. However, the issues of data granularity, access, and visibility need to
be considered when planning for data discovery against long-term stored data sets.
Data Classification

• Data classification is a process of analyzing data for certain attributes, and then using that to
determine the appropriate policies and controls to apply to ensure its security.

• Data classification is the responsibility of the data owner and takes place in the create phase.

• A data classification process is recommended for implementing data controls such as DLP
and encryption.

• Data classification is also a requirement of certain regulations and standards, such as ISO
27001 and PCI DSS.
Data Classification

Types of data classification:

Sensitivity: Data is assigned a classification according to its sensitivity, based on the negative impact an unauthorized disclosure would cause. This classification model is used by the military.

Jurisdiction: The geophysical location of the source or storage point of the data might have significant bearing on how that data is treated and handled.
For instance, personally identifiable information (PII) gathered from citizens of the European Union (EU) is subject to the EU privacy laws, which are much stricter and more comprehensive than privacy laws in the United States.

Criticality: Data that is deemed critical to organizational survival might be classified in a manner distinct from trivial, basic operational data.
A business impact analysis (BIA) can help determine which material should be classified this way.
Challenges with Cloud Data

Data creation: The CCSP needs to ensure that proper security controls are in place so that whoever creates or modifies data must classify or update the data as part of the creation or modification process.

Classification controls: Controls can be administrative (guidelines for users who are creating the data), preventive, or compensating.

Metadata: Classifications can be based on the metadata that is attached to the file, such as owner or location. This metadata should be accessible to the classification process to make the proper decisions.

Classification data transformation: Controls should be in place to make sure that the relevant property or metadata can survive data object format changes and cloud imports and exports.

Reclassification consideration: Cloud applications must support a reclassification process based on the data life cycle. Sometimes the new classification of a data object may mean enabling new controls, such as encryption or retention and disposal (for example, customer records moving from the marketing department to the loan department).
Jurisdictional Data Protections for Personally Identifiable Information (PII)
Data Privacy Acts

The EU: The EU comprises 27 member states (28 before the UK formally left the union in 2020). The EU treats PII as a human right, with severely stringent protections for individuals.

United States: Personal privacy rights are often delineated in industry-specific laws (such as GLBA for financial services and HIPAA for medicine), but there is no overarching federal law ensuring individual personal privacy.

Australia and New Zealand: Laws in these countries conform to the EU policies.

Argentina: Local law is specifically based on the EU guidance.

EFTA: A four-member body comprising Switzerland, Norway, Iceland, and Liechtenstein. Swiss law, in particular, provides stringent privacy protections, particularly for banking information.

Canada: The Personal Information Protection and Electronic Documents Act (PIPEDA) conforms to the EU Data Directive and Privacy Regulation.
Data Privacy Acts

General Data Protection Regulation (GDPR)

The EU General Data Protection Regulation (GDPR) is a regulation that requires businesses to protect the
personal data and privacy of EU citizens for transactions that occur within EU member states and exportation
outside the EU.

Companies that collect data on citizens in European Union (EU) countries have had to comply with strict rules around protecting customer data since May 25, 2018, when the regulation took effect.

Non-compliant organizations may face administrative fines of up to €20 million or up to 4% of the entity’s
global turnover of the preceding financial year, whichever is higher.

Organizations must report data breaches within 72 hours.

Under existing “right to be forgotten” provisions, people who don’t want certain data about them online can
request companies to remove it.
Data Privacy Acts

GDPR: Roles and Responsibilities

A Data Controller is the legal entity "who either alone, or jointly, determines the purpose for and manner in which personal data is, or will be, processed." A data controller could be an organization or an individual that collects and processes information about customers, patients, etc.

A Data Processor processes data on behalf of the data controller, but it does not control the data and cannot change the purpose or use of the particular set of data. Data processors include organizations such as payroll firms, cloud service vendors, and data analytics providers. Data processors report to data controllers, but both are directly accountable for data protection under GDPR.

A Supervisory Authority (SA), established in each EU member state, is tasked with enforcing GDPR and monitoring the application of GDPR rules to protect individual rights with respect to the processing and transfer of personal data within the EU.
Data Privacy Acts

GDPR: Data Protection Principles

The EU General Data Protection Regulation (GDPR) outlines six data protection principles that organizations need to follow when collecting, processing, and storing individuals' personal data:

1. Lawfulness, fairness, and transparency
2. Purpose limitation
3. Data minimization
4. Accuracy
5. Storage limitations
6. Integrity and confidentiality

The data controller is responsible for complying with the principles and must be able to demonstrate the organization's compliance practices.
Data Privacy Acts

GDPR: Data Protection Principles

Lawfulness, fairness, and transparency:


“Personal data shall be processed lawfully, fairly, and in a transparent manner in relation to the data
subject.”

Purpose limitation:
“Personal data shall be collected for specified, explicit, and legitimate purposes and not further
processed in a manner that is incompatible with those purposes.”

Data minimization:
“Personal data shall be adequate, relevant and limited to what is necessary in relation to the
purposes for which they are processed.”
Data Privacy Acts

GDPR: Data Protection Principles

Accuracy
“Personal data shall be accurate and where necessary, kept up-to-date; every reasonable step
must be taken to ensure that personal data that are inaccurate, having regard to the purposes
for which they are processed, are erased or rectified without delay.”

Storage limitations
“Personal data shall be kept in a form which permits identification of data subjects for no longer
than is necessary for the purposes for which the personal data are processed”.

Integrity and confidentiality


“Personal data shall be processed in a manner that ensures appropriate security of the personal
data, including protection against unauthorized or unlawful processing and against accidental
loss, destruction or damage, using appropriate technical or organizational measures.”
Data Privacy Acts

United States

GLBA (the Gramm-Leach-Bliley Act): Also known as the U.S. Financial Modernization Act, it regulates the protection of consumer personal information held by financial institutions.

The act defines financial institutions as companies that offer consumers financial products or services such as loans, financial or investment advice, or insurance. It requires these institutions to regulate the flow of information, explain their information-sharing practices to their customers, and safeguard sensitive data.

Financial privacy rule: It regulates the collection and disclosure of private financial information.

Safeguards rule: It stipulates that financial institutions must implement security programs to protect such information.

Pretexting provision: It prohibits the practice of pretexting (accessing private information using false pretenses).
Data Privacy Acts

HIPAA (Health Insurance Portability and Accountability Act) of 1996:

The primary goal of the law is to make it easier for people to keep health insurance, protect the
confidentiality and security of healthcare information, and help the healthcare industry control
administrative costs.

The HIPAA security rule requires appropriate administrative, physical, and technical safeguards to ensure
the confidentiality, integrity, and security of Protected Health Information (PHI).
HIPAA mandates steep federal penalties for noncompliance.
A supplemental act, the Health Information Technology for Economic and Clinical Health (HITECH) Act, was passed in 2009; it provides financial incentives for medical practices and hospitals to convert paper record-keeping systems to digital.
Data Privacy Acts

FISMA (Federal Information Security Management Act) of 2002:

This act requires program officials and the head of each agency to conduct annual reviews of information
security programs, with the intent of keeping risks at or below specified acceptable levels in a cost-effective,
timely, and efficient manner.

According to FISMA, the term “information security” means protecting information and information systems
from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide integrity,
confidentiality, and availability.
Data Privacy Acts

The Sarbanes-Oxley Act (SOX) of 2002:

The Sarbanes-Oxley Act (SOX) requires all publicly held companies to establish internal controls and
procedures for financial reporting to reduce the possibility of corporate fraud.

Penalties for noncompliance: Formal penalties for noncompliance with SOX can include fines, removal from
listings on public stock exchanges, and invalidation of D&O insurance policies. Under the Act, CEOs and CFOs
who willfully submit an incorrect certification to a SOX compliance audit can face fines of $5 million and up to
20 years in jail.
Responsibilities of Cloud Services

Security responsibilities span the following layers, and for each service model (IaaS, PaaS, SaaS) a given layer is an enterprise responsibility, a shared responsibility, or a CSP responsibility:

• Security Governance, Risk, and Compliance (GRC)
• Data Security
• Application Security
• Platform Security
• Infrastructure Security
• Physical Security

The general pattern: the enterprise always retains responsibility for governance, risk, and compliance and for its data, while the CSP always owns physical security; as the model moves from IaaS through PaaS to SaaS, responsibility for the middle layers (application, platform, and infrastructure security) shifts progressively from the enterprise to the CSP.
Data Rights Management
Data Rights Management

Digital Rights Management (DRM): This applies to the protection of consumer media, such as music,
publications, videos, and movies.

DRM is most typically used to protect the intellectual property of a vendor’s digital product that is
electronically sold into a wide market, such as music or film. If someone buys a music file online, for example,
DRM built into the servers and players allows the licensor to control how the file is used. The licensor may
specify electronically that a music file can’t be forwarded to others or copied, or that a video file may be
watched for only a certain length of time.

Information Rights Management (IRM): This applies to the organizational side to protect information and
privacy, whereas DRM applies to the distribution side to protect intellectual property rights and control the
extent of distribution.
Information Rights Management

Features:

• IRM allows one to set policies on who can open the document and what they can do with it. IRM provides granularity that flows down to printing, copying, saving, and similar options.

• IRM contains ACLs and is embedded into the original file. As a result, IRM is agnostic to the location of the data, unlike other preventive controls that depend on file location.

• IRM protection travels with the file, so the information remains protected on both secured and unsecured networks.

• IRM is useful for protecting sensitive organizational content such as financial documents. IRM can also be implemented to protect emails, web pages, database columns, and other data objects.

• IRM is useful for setting up a baseline for the default information protection policy.
Information Rights Management

Tools:

Auditing: IRM allows robust auditing of who has viewed information and provides proof of when and where they accessed the file.

Expiration: IRM technologies allow for the expiration of access to data. This gives an organization the ability to set a lifetime for the data and enforce policy controls that prevent it from being accessible indefinitely.

Policy control: IRM allows an organization very granular and detailed control over how its data is accessed and used. The ability to control who can copy, save, print, forward, or access any data, even across different audiences, is far more powerful than what is afforded by traditional data security mechanisms.

Protection: With the implementation of IRM technologies and controls, any information under their protection is secure at all times.

Support: Most IRM technologies support a range of data formats and integrate with application packages commonly used within organizations, such as email and various office suites.
Data Retention, Deletion, and Archiving Policies
Data Protection Policies

Data protection policies fulfill regulatory obligations, which can come in the form of legal requirements or mandates from certification or industry oversight bodies.

Data protection policies regulate:

• Data retention
• Data deletion
• Data archiving
Data Protection Policies

Data retention:

Data retention involves the storing and maintaining of data for a period of time as well as the methods used
to accomplish this.

A good data-retention policy should include:

• Retention periods

• Legislation, regulation, and standards requirements

• Data classification

• Data formats

• Data security

• Data-retrieval procedures for the enterprise


Data Protection Policies

Data deletion:

When data is no longer needed in a system, it must be removed in a secure way, so that it can no longer be
accessible or recoverable in the future.

Within a cloud environment, the deletion methods available to the customer are overwriting and
cryptographic erasure (also known as cryptographic shredding).

A data-deletion policy is sometimes required for the following reasons:

• Regulation or legislation: Certain laws and regulations require specific degrees of safe disposal for certain
records.

• Business and technical requirements: A business policy may require safe disposal of data. Also,
processes such as encryption might require safe disposal of the clear text data after creating the encrypted
copy.
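A minimal sketch of cryptographic erasure, assuming the cryptography package: data is written only in encrypted form, so destroying every copy of the key makes the remaining ciphertext unrecoverable:

from cryptography.fernet import Fernet

key = Fernet.generate_key()
stored_ciphertext = Fernet(key).encrypt(b"record due for disposal")  # what the cloud holds

# When the retention period ends, securely destroy all copies of the key.
key = None   # ciphertext persisting in backups and replicas is now unreadable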
Disposal Options

Physical destruction: Physically destroying the media by incineration, shredding, or other means.

Degaussing: Using strong magnets to scramble data on magnetic media such as hard drives and tapes.

Overwriting: Writing over the existing data, often multiple times, to render it unrecoverable.

Encryption: Using an encryption method to rewrite the data in an encrypted format so that it is unreadable without the encryption key.
Real-World Scenario: Data Remanence

In 2013, Affinity Health Plan, a managed care plan company based in New York, agreed to pay federal
regulators $1.2 million to settle a 2010 incident that affected 344,557 individuals whose data was
discovered on the hard drives of copy machines that had been returned to a leasing company.
Affinity discovered the breach after it was informed by a representative of CBS Evening News that, as
part of an investigatory story, CBS had purchased four copy machines from a company that had
leased them to four different organizations, including Affinity. CBS had hired a firm to analyze what
was on their hard drives, discovering that the machine that Affinity had used contained confidential
medical information.
The investigation revealed that Affinity failed to incorporate the ePHI stored on photocopier
hard drives in its analysis of risks and vulnerabilities as required under the HIPAA Security Rule,
and failed to implement policies and procedures when returning the photocopiers to its leasing
agents.
A corrective action plan required Affinity to make an effort to retrieve all hard drives from
leased photocopiers and take measures to safeguard ePHI.
Real-World Scenario: Data Remanence

Question: What is the best method to prevent data remanence?


Answer: Physical destruction of the media or, in the cloud, cryptographic erasure.
Data Protection Policies

Data archiving:

Data archiving is the process of identifying and moving inactive data out of current production systems and into specialized long-term archival storage systems.

Questions an archiving policy must address, and the terms that address them:

• Format: How is the data represented and stored?
• Regulatory requirements: How long must the data be retained, and what other requirements apply to its preservation?
• Technologies: Which specific software applications are used to create and maintain the archives?
Real-World Scenario: Data Loss

In 2017, Verizon, a major telecommunications provider, suffered a data security breach with
over 14 million US customers' personal details exposed on the Internet after NICE Systems, a
third-party vendor, mistakenly left the sensitive users’ details open on a server.

Nice Systems (a Verizon partner) logged customer files that contained sensitive and personal
information (including customer names, corresponding cell phone numbers, and specific
account PINs) on an Amazon S3 bucket. For reasons unknown, that bucket was left unsecured,
thus exposing more than 14 million Verizon customer records to anyone who discovered the
bucket.

•Question: Between Verizon, NICE Systems, and Amazon, who is accountable for the loss of data?

•Answer: Verizon. They should ensure visibility into how partners and other stakeholders keep their
data secure.
Data Protection Policies

Legal Hold

A legal hold (also known as a litigation hold) is a process that an organization uses to preserve
electronically stored information (ESI) or paper documents that may be relevant to a new or imminent
legal case. It is intended to prevent deletion or modification of potentially relevant evidence, so as to
ensure that evidence, when needed, will be available.

Failure to adequately preserve their data or organize the proper litigation hold can expose an
organization to legal and financial risks, such as scrutiny of the organization's records retention and
discovery processes, adverse legal judgments, sanctions, or fines.
Auditability, Traceability, and Accountability of Data Events
Event Sources

IaaS event sources: With an IaaS environment, the cloud customer has the most access and visibility into the system and infrastructure logs of any of the cloud service models. Therefore, virtually all logs and data events should be exposed and available for capture.

However, some logs outside the typical purview of the cloud customer might also be of high value, and access to those logs should be clearly articulated in the contract and SLA between the cloud provider and the cloud customer.
Event Sources

PaaS event sources: A PaaS environment does not offer or expose the same level of customer access to infrastructure and system logs as the IaaS environment, but the same detail of logs and events is available at the application level.
Event Sources

SaaS event sources: Given the nature of a SaaS environment, the amount of log data that is typically available to the cloud customer is minimal and highly restricted.
Security Information and Event Management (SIEM)

Security Event Management (SEM) provides real-time monitoring, correlation of events, notifications, and console views.

Security Information Management (SIM) provides long-term storage, analysis, and reporting of log data.

Security Information and Event Management (SIEM) technology combines the two, providing real-time analysis of security alerts generated by network hardware and applications. SIEM is sold as software, appliances, or managed services, and is used to log security data and generate reports for compliance purposes.

SEM + SIM = SIEM
Security Information and Event Management (SIEM)

Data aggregation: Log management aggregates data from many sources, including network, security,
servers, databases, and applications, providing the ability to consolidate monitored data to help avoid
missing crucial events.

Correlation: This involves looking for common attributes and linking events into meaningful bundles.
This technology provides the ability to perform a variety of correlation techniques to integrate
different sources to turn data into useful information. Correlation is typically a function of the SEM
portion of a full SIEM solution.

Alerting: This is the automated analysis of correlated events and production of alerts to notify
recipients of immediate issues. Alerting can be to a dashboard or via third-party channels such as
email.

Dashboards: Tools can take event data and turn it into informational charts to assist in seeing
patterns or identifying activity that is not forming a standard pattern.
Security Information and Event Management (SIEM)

Compliance: Applications can be employed to automate the gathering of compliance data, producing
reports that adapt to existing security, governance, and auditing processes.

Retention: This involves employing long-term storage of historical data to facilitate correlation of data
over time and to provide the retention necessary for compliance requirements. Long-term log data
retention is critical in forensic investigations because it is unlikely that discovery of a network breach will
coincide with the breach occurring.

Forensic analysis: This is the ability to search across logs on different nodes and time periods based on
specific criteria. It mitigates having to aggregate log information in your head or having to search through
thousands and thousands of logs.
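A toy sketch of the correlation and alerting capabilities described above: events from several sources are aggregated, linked on a common attribute, and turned into an alert when they form a meaningful bundle. The event format and threshold are illustrative:

from collections import Counter

events = [
    {"source": "vpn",  "user": "alice", "type": "login_failure"},
    {"source": "mail", "user": "alice", "type": "login_failure"},
    {"source": "db",   "user": "alice", "type": "login_failure"},
    {"source": "vpn",  "user": "bob",   "type": "login_success"},
]

# Correlate: link events across sources on the shared "user" attribute
failures = Counter(e["user"] for e in events if e["type"] == "login_failure")

# Alert: notify when the correlated bundle crosses a threshold
for user, count in failures.items():
    if count >= 3:
        print(f"ALERT: {count} correlated login failures for {user}")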
Chain of Custody

Chain of custody is the preservation and protection of evidence from the time it is collected until the time it is presented in court.

Creating a verifiable chain of custody for evidence within a cloud computing environment, where there are multiple data centers spread across different jurisdictions, can become challenging. Sometimes the only way to provide for a chain of custody is to include this provision in the service contract and ensure that the CSP will comply with requests pertaining to chain-of-custody issues.
Chain of Custody

Example: a chain-of-custody evidence form records the submitting agency, case and item numbers, the date, time, and location of collection, the collector's name and badge number, a description of the enclosed evidence, the type of offence, the victim's and suspect's full names, and a log of every hand-off (received from, received by, date, and time).
Nonrepudiation

Nonrepudiation is the ability to confirm the origin or authenticity of data to a high degree of certainty. This
typically is done through digital signatures and hashing, to ensure that data has not been modified from its
original form. This concept plays directly into and complements chain of custody for ensuring the validity and
integrity of data.
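A minimal sketch of a digital signature providing nonrepudiation, assuming the cryptography package's Ed25519 implementation:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # held only by the signer
public_key = private_key.public_key()        # distributed to verifiers

evidence = b"log entry: file copied at 02:14 UTC"
signature = private_key.sign(evidence)

# verify() raises InvalidSignature if the evidence or signature was altered,
# confirming both the origin and the integrity of the data
public_key.verify(signature, evidence)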
Real-World Scenario: Hacking of Dropbox

In 2012, Dropbox was hacked, with over 68 million users' email addresses and passwords leaked onto the internet.
A Dropbox employee's personal password had been used on both their LinkedIn and their corporate Dropbox accounts. The LinkedIn password was obtained via another breach and reused to infiltrate the Dropbox network and eventually steal the files containing the credentials.
Fortunately, Dropbox used the bcrypt hashing algorithm to protect the passwords, which is very resilient to cracking.
Dropbox completed a password reset for all users who had signed up prior to mid-2012 and hadn't changed their password since. It also encouraged users to enable two-step verification.

• Question: What steps did Dropbox adopt to prevent such data breaches?

• Answer: Dropbox has taken steps to ensure that its employees don't reuse passwords on their corporate accounts.
Key Takeaways

You are now able to:

• Explain the cloud data life cycle based on the Cloud Security Alliance (CSA) guidance

• Describe the design and implementation of cloud data storage architectures on the basis of storage types, threats, and available technologies

• Identify relevant jurisdictional data protection and data security strategies for securing cloud data

• Define Digital Rights Management (DRM) with regard to objectives and the available tools

• Describe various data events and know how to design and implement processes for auditability, traceability, and accountability
Certified Cloud Security Professional (CCSP®)

Certified Cloud Security Professional is a registered trademark of (ISC) 2


Cloud Platform and Infrastructure Security

Certified Cloud Security Professional is a registered trademark of (ISC) 2


Learning Objectives

By the end of this domain, you will be able to:

• Illustrate physical and virtual infrastructure

• Describe the specifications necessary for the physical, logical, and environmental design of the data center

• Define the process for analyzing risk in a cloud infrastructure

• Develop a plan for mitigating risk in a cloud infrastructure

• Create a security control plan that includes the physical environment, virtual environment, etc.

• Describe Disaster Recovery (DR) and Business Continuity Management (BCM) for cloud systems
Cloud Infrastructure Components
Cloud Infrastructure Components

Cloud infrastructure consists of data centers and the hardware used for their functioning:

• Management Plane
• Virtualization Software
• Networking
• Storage (Object and Volume)
• Compute (Memory and Processing)
• Physical Hardware and Environment
Data Center Design Redundancy Factors

Redundancy factors in data center design include:

• Backup power
• Multiple external entry points for power and networks
• Multiple independent cooling units
• Multiple building entrances
• Multiple power lines
• Multiple power distribution units (PDUs)
Cloud Redundancy

Following are the different redundancies in cloud computing:

External redundancy:
■ Power feed lines
■ Power substations
■ Generators
■ Generator fuel tanks
■ Network circuits
■ Building access points
■ Cooling or chilling infrastructure

Internal redundancy:
■ Power distribution units
■ Power feeds to racks
■ Cooling chillers and units
■ Networking
■ Storage units
■ Physical access points
Network And Communications in the Cloud
Network And Communications in the Cloud

Cloud Service Consumer: The people or organization that maintains a business relationship with, and uses services from, the Cloud Service Providers (CSPs).

Cloud Service Provider: A person, organization, or entity responsible for making a service available to service consumers.

Cloud Carrier: The intermediary that provides connectivity and transport of cloud services between the CSPs and the cloud service consumers.
Network Functionality

• Address allocation: The ability to provide one or more IP addresses to a cloud resource

• Access control: The mechanism used to grant or deny access to a resource

• Bandwidth allocation: The specified amount of bandwidth for system access or use

• Rate limiting: The ability to control the amount of traffic sent or received

• Filtering: The ability to selectively allow or deny content or access to resources

• Routing: The ability to direct the flow of traffic between endpoints based on the best path
Software-Defined Networking

Software-defined networking (SDN) allows network administrators to programmatically initialize, control, change, and manage network behavior dynamically via open interfaces and abstraction of lower-level functionality.

This is done by decoupling or disassociating the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane).

SDN architecture, top to bottom: network applications communicate through APIs with the SDN control software in the control plane, which directs the data plane through the control-data-plane interface.
Management Plane

• Allows the administrator to remotely manage any or all of the hosts (compute and storage controllers, pools, VMs, and hypervisors).

• Used in the physical, logical, and networking areas of the data center for various tasks, such as:

  Physical: Applying, detecting, and enforcing hardware baseline configurations

  Logical: Scheduling tasks, optimizing resource allocation, maintaining and updating software and virtualized hardware

  Networking: All network management and administration tasks

! The management plane could be a potential single point of failure.
Virtualization

In the compute pool, the management plane controls virtual machines (VMs) running on hypervisors.

Key drivers for using virtualization:

• Sharing underlying resources to enable more efficient and agile use of hardware

• Easier management through reduced personnel, resourcing, and maintenance
Factors That Impact Data Center Design
Factors Affecting Data Center Design

Logical design:
• Virtualization
• Multitenant network
• Access control
• APIs

Physical design:
• Jurisdiction
• Natural disasters
• Access control
• Physical security
Buy, Build, and Share

Build: The organization controls its design and security.

Buy or lease: This is a cheaper alternative and may include limitations on design inputs.

Share: The physical separation of servers and equipment must be included in the design.
Data Center Design Standards

Building Industry Consulting Service International Inc. (BICSI):


The ANSI/BICSI 002-2014 standard covers cabling design and installation.
https://www.bicsi.org

The International Data Center Authority (IDCA):


The Infinity Paradigm covers data center location, facility structure, infrastructure, and applications.
https://www.idc-a.org/

The National Fire Protection Association (NFPA):


NFPA 75 and 76 standards specify how hot aisle or cold aisle containment must be carried out. In case of an
emergency, NFPA standard 70 requires an emergency power-off button to protect the first responders in the
data center.
https://www.nfpa.org/
Data Center Design Standards

Uptime Institute, Inc.

The Uptime Institute, Inc. publishes widely known standards on data center tiers and topologies.

The standard is based on a series of four tiers; each successive tier adds more stringent, reliable, and redundant systems for security, connectivity, fault tolerance, redundancy, and cooling.

Tier I: Basic data center site infrastructure
Tier II: Redundant site infrastructure capacity components
Tier III: Concurrently maintainable site infrastructure
Tier IV: Fault-tolerant site infrastructure
Data Center Design Standards

Tier 1: Basic data center site infrastructure

Tier 1 is a simplistic data center with little or no redundancy. The minimum requirements for a data center are:

• Dedicated space for IT systems
• Uninterruptible power supply (UPS) system for line conditioning and backup purposes
• Sufficient cooling system for all critical equipment
• Efficient power generator for extended electrical outages
Data Center Design Standards

Tier 1 data centers also have these features:

• Scheduled maintenance will require systems (including critical systems) to be taken offline.
• Both planned and unplanned maintenance and response activity may take systems (including critical systems) offline.
• Untoward personnel activity (both inadvertent and malicious) will result in downtime.
• Annual maintenance is necessary to safely operate the data center and requires full shutdown (including critical systems). Without this maintenance, the data center is likely to suffer increased outages and disruptions.
Data Center Design Standards

Redundant site infrastructure capacity components:

Tier 1
A Tier 2 data center is slightly more robust than Tier 1.
Tier 2
Features:
Tier 3
• Critical operations do not have to be interrupted for scheduled replacement and
Tier 4 maintenance of any of the redundant components.
• Unplanned failures of components or systems result in downtime.
Data Center Design Standards

Concurrently maintainable site infrastructure:

The Tier 3 data center features both the redundant capacity components of a Tier 2 build and
Tier 1 the added benefit of multiple distribution paths.

Tier 2
Characteristics that differentiate Tier 3 from the prior levels include the following:
Tier 3 • There are dual power supplies for all IT systems.

Tier 4 • The critical operations can continue even if any single component or power element is out of
service.
• The unplanned loss of a component may cause downtime; the loss of a single system, on the other hand,
will cause downtime.
Distinction: A component is a single node in a multiple-node system; while each system will have a
redundant component, not all systems are redundant.
Data Center Design Standards
Fault-tolerant site infrastructure:

Every element and system of the facility has integral redundancy such that critical operations
can survive both planned and unplanned downtime.

Tier 1
In addition to all Tier 3 features, the Tier 4 data center will include these attributes:
Tier 2
• There is redundancy where multiple components are independent and physically separate
Tier 3 from each other.
• There is availability of sufficient power and cooling for critical operations even after the
Tier 4
loss of any facility infrastructure element.
• The loss of a single system, component, or distribution element will not affect critical
operations.
• The automatic response capabilities will not let critical operations halt due to
infrastructure failures.
• Scheduled maintenance can be performed without affecting critical operations.
Data Center Design Standards

Feature | Tier 1 | Tier 2 | Tier 3 | Tier 4
Active capacity components to support the IT load | N | N+1 | N+1 | 2N+1
Distribution paths | 1 | 1 | 1 active and 1 alternate | 2 simultaneously active
Concurrently maintainable | No | No | Yes | Yes
Fault tolerance | No | No | No | Yes
Compartmentalization | No | No | No | Yes
Continuous cooling | No | No | No | Yes
Real-World Scenario: Tier Type

Verne Global owns and operates a 44-acre data center campus in Keflavik, Iceland. Strategically located
between the world’s two largest data center markets, Europe and North America, Verne Global addresses
two key issues facing today’s data revolution: power pricing and availability.

The facility does not use water cooling or mechanical cooling equipment, such as compressors.
Instead, it uses power from Iceland’s renewable energy sources and free air cooling technology to
minimize carbon emissions.
Real-World Scenario: Tier Type

The data center is the world’s first dual-sourced, 100% renewably powered data center, according to Verne
Global, as it uses Iceland’s natural geothermal and hydroelectric power.

Question: Which tiered data center model provides continuous cooling?

Answer: Tier 4
Environmental Design Considerations
Environmental Design

ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers):

HVAC (Heating, Ventilation, and Air Conditioning):

Temperature range: 64.4 to 80.6 degrees F (18 to 27 degrees C)


High temperature causes the components to overheat and turn off.
Low temperature causes the components to work more slowly.

Humidity range: 40 to 60 percent relative humidity


High humidity promotes corrosion of metallic components.
Low humidity enhances the possibility of static discharge.

Note: Positive air pressure and drainage must be employed inside the data center.
Cable Management

Implementing cable management strategies:


• Minimizes the air flow obstructions caused by cables and wiring
• Removes the under-floor and overhead obstructions, which often interfere with the distribution of cooling
air
• Achieves the raised-floor minimum effective (clear) height of 24 inches
• Institutes a cable mining program (a program to remove abandoned or inoperable cables)
Aisle Separation and Containment

The data center equipment is laid out in rows of racks with alternating cold (rack air intake side) and hot
(rack air heat exhaust side) aisles between them.

This arrangement significantly increases the air-side cooling capacity of a data center’s cooling system.
Connectivity
Multivendor Pathway Connectivity

• Redundant connectivity from multiple providers to the data center prevents a single point of
failure for network connectivity
• Cabling and connectivity backed by a reputable vendor with guaranteed error-free performance
avoids poor transmission in the data center
Hypervisor
The Hypervisor

[Diagram: VMs (each running apps on a guest OS) paired one-to-one with VMM instances on the hypervisor, which runs on the physical hardware.]

A hypervisor is software, firmware, or hardware that gives the impression to the guest OSs that they are
operating directly on the physical hardware of the host.

■ Allows multiple guest OSs to share the hardware of a single host.
■ Manages requests by VMs to access and abstract the physical hardware resources of the host, and allows
the VMs to behave as if they were independent machines.
■ Uses a Virtual Machine Manager (VMM) instance to create a one-to-one association with each VM being
managed, allowing the hypervisor to securely manage the VM.
CPU Resource Allocation
CPU Resource Allocation

Reservation
• A reservation is a minimum resource guaranteed to a customer within a cloud environment
• Available for Central Processing Unit (CPU) or Random Access Memory (RAM)

Shares
• Shares manage the relative importance of a VM
• If resource contention takes place, share values are used to prioritize compute resource access for all
guests, who are assigned a certain number of shares

Limits
• Limits are put in place to enforce the maximum utilization of memory or processing by a cloud customer
(see the sketch below)
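To make the interplay concrete, here is a toy Python sketch (not any vendor's API; the guest names and numbers are invented for illustration) of how shares, reservations, and limits could combine under contention:

```python
# Toy model only: proportional-share allocation clamped by each VM's
# reservation (floor) and limit (ceiling). Real hypervisors are far
# more sophisticated than this.
def allocate(capacity_mhz, guests):
    total_shares = sum(g["shares"] for g in guests)
    allocation = {}
    for g in guests:
        proportional = capacity_mhz * g["shares"] / total_shares
        allocation[g["name"]] = max(g["reservation"],
                                    min(g["limit"], proportional))
    return allocation

print(allocate(10_000, [
    {"name": "vm-a", "shares": 2000, "reservation": 1000, "limit": 8000},
    {"name": "vm-b", "shares": 1000, "reservation": 500, "limit": 4000},
]))
# vm-a gets ~6667 MHz and vm-b ~3333 MHz: shares decide the split,
# while reservations and limits bound it.
```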
Risks Associated with Cloud Infrastructure
Risks Associated with Cloud Infrastructure

• Policy and Organization Risks
• General Risks
• Virtualization Risks
• Cloud-Specific Risks
• Legal Risks
• Non-Cloud Specific Risks
Risks Associated with Cloud Infrastructure

Policy and Organization Risks:

• Provider Lock-in: The consumer has made significant vendor-specific investments, which leads to high
costs of switching between providers.
• Provider Exit: The provider is no longer willing or capable of providing the required service.
• Loss of Governance: The consumer is unable to implement required controls, leading to non-realization
of the needed level of security.
• Compliance Risks: The consumer obligations are unfulfilled by a specific cloud vendor and solution.
Risks Associated with Cloud Infrastructure
Risks Associated with Cloud Infrastructure
General Risks:

• The potential failure to meet any requirement in technical terms, such as performance, operability,
integration, and protection
• The consolidation of IT infrastructure leads to consolidation risks, where a single point of failure can
have a bigger impact
• The large-scale platform requires the CSP to bring more technical skills to manage and maintain the
infrastructure
• The control over technical risks shifts toward the provider
Risks Associated with Cloud Infrastructure
Risks Associated with Cloud Infrastructure

Virtualization Risks:

• Guest Breakout: The breakout of a guest OS to access the hypervisor or other guests, presumably
facilitated by a hypervisor flaw.
• Snapshot and Image Security: The portability of images and snapshots makes people forget that images
and snapshots contain sensitive information and need protection.
• Sprawl: The number of Virtual Machines (VMs) on a network grows beyond what the administrator can
manage.
Risks Associated with Cloud Infrastructure
Risks Associated with Cloud Infrastructure

Cloud-Specific Risks:

■ Management plane breach: Malicious users affect the entire infrastructure that the management
interface controls.
■ Resource exhaustion: Shared cloud resources may lead to cloud outages.
■ Isolation control failure: Resource sharing across tenants typically requires the CSP to realize
isolation controls. Example: one tenant’s VM instance accessing or affecting instances of another tenant.
■ Insecure or incomplete data deletion: Data erasure is often just the removal of directory entries
rather than reformatting of the storage used.
■ Control conflict risk: Controls that lead to more security for one stakeholder may make things less
secure for another.
■ Software-related risks: All software has potential vulnerabilities.
Risks Associated with Cloud Infrastructure

Legal Risks:

• Data Protection: The controls and actions of the CSP may not be sufficient to protect the customer's
data.
• Jurisdiction: The CSPs may have data storage locations in multiple jurisdictions, which can affect
other risks and their controls.
• Law Enforcement: The data of multiple customers may be exposed, as it may be required by law
enforcement or civil legal authorities.
• Licensing: The licensing agreements on software may make it legally impossible or expensive for
customers to move it to the cloud.
Risks Associated with Cloud Infrastructure
Risks Associated with Cloud Infrastructure

Non-Cloud Specific Risks:

• Natural disasters
• Unauthorized facility access
• Social engineering
• Default passwords
• Network attacks (consumer and provider side)
• Other malicious or non-malicious actions
Cloud Attack Vectors
Cloud Attack Vectors

• Guest breakout

• Identity compromise, either technical or social (for example, through employees of the provider)

• API compromise, such as by leaking API credentials

• Attacks on the provider’s infrastructure and facilities (for example, from a third-party administrator

hosted with the provider)

• Attacks on the connecting infrastructure (cloud carrier)


Compensating Controls
Compensating Controls

A compensating control, also called an alternative control, is a mechanism that is put in place to
satisfy the requirement for a security measure that is deemed too difficult or impractical to implement
at the present time.
Compensating Controls
Compensating Controls
Every compensating control must meet four criteria:

• Meet the intent and rigor of the original requirement

• Provide a similar level of defense as the original requirement

• Be above and beyond other requirements

• Be commensurate with the additional risk imposed by not adhering to the requirement
Business Scenario: Security Control
Business Scenario

PCI DSS mandates periodic cryptographic key changes. The objective is to minimize the risk of
someone discovering the keys.

Company XYZ’s initial encryption of cardholder data in the ABC application database took 16
months to complete. To change keys, it will take at least 16 months to decrypt the database and
another 16 months to encrypt it.

To meet the security requirement, company XYZ requires the Key Encryption Key (KEK) to be
changed annually. The KEK is retrieved from a separate system each time the ABC application is
initiated. The KEK is retained in active memory as long as the ABC application is up and running.

Question: What type of security control is used to meet the security requirement?

Answer: Compensating security control


Design and Plan Security Controls
Physical and Environmental Protection
Physical and Environmental Protection

• Employment of the Defense-in-Depth principle
• Redundancy for any utilities and facilities that the data center depends on, both inside and outside the
data center
• Extensive and rigorous background checks for any personnel accessing the data center
• Physical access for personnel based on the least privilege principle
• Efficient monitoring and reviews
• Continual training to remind personnel of policies, procedures, and proper safeguards
System and Communication Protection
System and Communication Protection

The cloud provider is responsible for the underlying hardware and network. The remaining services and
security responsibilities either lie with the customer or are split between the customer and the cloud
provider.
• Data at rest: The main protection is through the use of encryption technologies (see the sketch after this list).
• Data in transit: The main methods of protection are network isolation and the use of encrypted
transport mechanisms.
• Data in use: Protection through secure API calls and web services via the use of encryption, digital
signatures, or dedicated network pathways.
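As a minimal illustration of the data-at-rest bullet, the sketch below uses the third-party Python cryptography package (an assumption; any vetted symmetric-encryption library works similarly):

```python
# Minimal data-at-rest sketch: encrypt before storing, and keep the key
# separate from the data (ideally in a KMS or HSM).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # store this in a key-management service
f = Fernet(key)

ciphertext = f.encrypt(b"customer record destined for cloud storage")
# `ciphertext` is what lands on the CSP's disks; it is useless without the key.
assert f.decrypt(ciphertext) == b"customer record destined for cloud storage"
```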
Virtualization Systems Protection

[Diagram: the management plane connects the outside world to guest OSs running in VMs on the hypervisor, which runs on the hardware.]

• The management plane is the prime resource for protection:
  o It controls the entire infrastructure
  o Parts of it are exposed to customers independent of network location
• A hypervisor flaw might allow a guest OS to “break out” and access other tenants’ information or even
take over the hypervisor
• The management network should be isolated from other networks, such as the storage and tenant networks
Real-World Scenario: Hacking of Edge Browser

During the 2017 Pwn2Own, an annual hacking contest in Vancouver, a hacker compromised
Microsoft's heavily fortified Edge browser, a feat that fetched a prize of $105,000.

The hacker used a JavaScript engine bug to achieve the code execution inside the Edge sandbox,
and used a Windows 10 kernel bug to escape and fully compromise the guest machine.

Hence, malicious websites can compromise a visitor's virtual machine.

VMware patched the vulnerabilities within 2 weeks.

Question: What is the risk of breakout from hypervisor isolation?

Answer: If the hypervisor is compromised, all VMs residing on the hypervisor


can be compromised.
Plan Disaster Recovery and Business Continuity
Disaster Recovery and Business Continuity Management Planning

Business Continuity Plan (BCP) Disaster Recovery Plan (DRP)

Allows a business to plan what it needs to Allows a business to plan what needs to be
do to ensure that its key products and done immediately after a disaster to
services are not affected in case of a recover from it.
disaster.
On-Premises Cloud as BCDR

The cloud serves as the endpoint for failover services and BCDR activities

Disaster Recovery

On-Premises Cloud Provider


Cloud Service Consumer and Primary BCDR Provider
Cloud Service Consumer, Primary Provider BCDR

When one region or availability zone fails, the service is restored to another part of that same cloud

Disaster Recovery

Cloud Provider
Cloud Provider
Cloud Service Consumer and Alternative BCDR Provider
Cloud Service Consumer, Alternative Provider BCDR
When a region or availability zone fails, the service is restored to a different cloud

Disaster Recovery

Paris London
Real-World Scenario: Ransomware Attack
Real World Scenario

In October 2016, Northern Lincolnshire and Goole NHS Foundation Trust experienced a cyberattack
from a variant of ransomware called Globe2 which infects users via phishing emails.

To prevent the virus from spreading, the trust shut most of its systems down for four days, resulting in
the cancellation of 2,800 patient appointments.

Investigation revealed that the cause of the attack was a misconfiguration in the firewall. The trust
agreed to conduct penetration testing and gauge staff cybersecurity awareness.

Question: Why did the trust cancel appointments after the cyber attack?

Answer: The trust did not have a business continuity plan in place to maintain
services.
BCDR Planning Factors

Information relevant in BCDR planning includes the following:

• Important assets: data and processing

• Current locations of these assets

• Networks between the assets and the sites of their processing

• Actual and potential location of workforce and business partners in relation to the disaster event
Disruptive Events

Natural: Floods, earthquakes, storms and tornadoes, fires, etc.

Supply System: Power distribution outages, communication interruptions, etc.

Manmade: Unauthorized access, explosions, vandalism, fraud, etc.

Equipment Failures: Hardware failure, network failure, utility disruptions, etc.
Relevant Cloud Infrastructure Characteristics

Cloud infrastructure has a number of characteristics that can be distinct advantages in realizing BCDR

• Rapid elasticity and on-demand self-service
• Broad network connectivity
• Resilient cloud infrastructure
• Pay-per-use


BCDR Strategies

Planning, Preparing, and Provisioning Location

• Power or network failure can be mitigated in a different zone in the same data center.
• Floods, fires, and earthquakes direct the facilities to be set up at remote locations.

[Diagram: a production site (DC1, zones 1 and 2) in Paris failing over to a BCDR site (DC2).]
Data Replication

• Data can be replicated at the block level, the file level, and the database level.
• Replication can be done in bulk, at the byte level, by file synchronization, by database mirroring, or
by daily copies.

Functionality Replication and Failover Capability

• Failover is achieved by the manipulation of cluster managers, load balancer devices, and the Domain
Name System (DNS).

[Diagram: active/passive full replica: after failover, load balancers redirect traffic to the replica cloud service, while storage and databases are backed up and replicated between the two sites.]
Returning to Normal
Returning to Normal

• Return to normal is where DR ends
• Clean up any resources that are no longer needed, including sensitive data
• Go back to the original provider
• If the original provider is no longer viable, the DR provider becomes the “new normal”
• Document any lessons learned
Real-World Scenario

On the night of September 16, 2013, lightning struck Cantey Technology, an IT company that hosts
servers for more than 200 clients.

A security alarm notified employees and triggered a call to the fire department.

Lightning surged through the IT company’s network connections, and started a blaze which
destroyed their network closet.

But Cantey’s clients never felt the effects of the disruption, because the business continuity plan had
already moved the client servers to a remote data center and scheduled continual data backups.

Question: When is the disaster considered to be officially over?

Answer: When all the business elements at the original site have returned
to normal operation.
Creating the BCDR Plan
Creating The BCDR Plan
Define scope → Gather requirements → Analyze → Assess risk → Design → Test → Report → Revise (an iterative cycle)


Define Scope

• Ensure that security concerns are an intrinsic part of the plan from the start, rather than trying
to retrofit them into the plan after it is developed.

• Include clearly defined roles, risk assessment, classification, policy, awareness, and training.
Gathering Requirements

• Identify critical business processes and their dependence on specific data and services.
• Derive requirements from the company's internal policies and procedures and from applicable legal,
statutory, or regulatory compliance obligations.
• Set acceptable RTO and RPO values in line with the business strategy.
Analyze

• Translate BCDR requirements into inputs for the design phase.


• Inputs for the design phase are scope, requirements, budget, and performance objectives.
• Analyze the business requirements and the threat model for completeness and consistency. It is
then translated into an identification of the assets at risk.
Assess Risk

• Load capacity at the BCDR site: Can the site handle the needed load to run the application or
system, and is that capacity readily and easily available?
• Network capacity: Can the BCDR site handle the level of network bandwidth required for the production
services and the user community accessing them?
• Contractual issues: Will any new CSP address all contractual issues and SLA requirements?
• Legal and licensing risks: There may be legal or licensing constraints that prohibit the data or
functionality from being present in the backup location.
Design

The actual technical evaluation of BCDR solutions is considered and matched to the company’s
requirements and policies.

Following are additional BCDR-specific questions that should be addressed in the design phase:
• How will the BCDR solution be invoked?
• What is the manual or automated procedure for invoking the failover services?
• How will the business be affected during the failover, if at all?
• How will the BCDR be tested?
Test the Plan

The actual production applications and hosting may be augmented or modified to provide additional hooks
or capabilities to enable the BCDR plan to work.

A cloud security professional performs a cost–benefit analysis to determine the extent of modifications
required and the benefits those modifications bring to the overall organization.

Testing the plan:
• Reveals problems with the DR plan
• Allows for proactive troubleshooting
• Helps meet expectations
• Gives management the confidence of recovery in an emergency
Tabletop Exercise or Structured Walk-Through Test

• A tabletop exercise or structured walk-through test is considered a preliminary one in the


overall testing process and may be used as an effective training tool

• Primary objective: To ensure that critical personnel are familiar with the BCP and that the
plan accurately reflects the organization’s ability to recover from a disaster
Tabletop Exercise or Structured Walk-Through Test

This exercise or test is characterized by the following:

• Attendance of business unit management representatives and employees who play a critical role in the
BCP process
• Discussion about each person’s responsibilities as defined by the BCP
• Individual and team training, which includes a walk-through of the step-by-step procedures outlined in
the BCP
• Clarification and highlighting of critical plan elements, as well as problems noted during testing
Walk-Through Drill

A walk-through drill requires more involvement than a tabletop exercise or structured walk-through test,
as participants choose a specific scenario and apply the BCP to it.

The various tasks involved in this drill are:

• Implementing the BCP procedures by all the personnel
• Practicing and validating specific functional response capabilities
• Demonstrating knowledge and skills
• Role-playing with a simulated response at alternate locations
• Mobilizing the crisis management and response team to practice proper coordination
• Varying degrees of mobilization to reinforce the content and logic of the plan
Functional Drill or Parallel Test

A functional drill or parallel test is the first type that involves the actual mobilization of personnel
to other sites in an attempt to establish communications and perform the actual recovery processing set
forth in the BCP. The goal is to determine whether critical systems can be recovered and whether
employees can actually deploy the BCP procedures.

A functional drill includes:

• A full test of the BCP that involves all employees
• Demonstrating emergency management capabilities of groups practicing a series of interactive functions
• Testing medical response and warning procedures
• Testing communication capabilities
• Mobilizing personnel and resources at varied geographical sites
• Varying degrees of mobilization and comparison of results
Full-Interruption or Full-Scale Test

A full-interruption or full-scale test is the most comprehensive type of test: a real-life emergency is
simulated. Comprehensive planning should therefore be a prerequisite to this type of test to ensure that
business operations are not negatively affected.

It involves:
• Enterprise-wide participation and interaction
• Validation of crisis response functions
• Demonstration of knowledge and skills
• On-the-scene execution of coordination and decision-making
• Actual, as opposed to simulated, notifications, mobilization of resources, and communication of decisions
• Activities conducted at actual locations or facilities
• Actual processing of data using backup media
Understanding Business Requirements

Recovery Point Objective (RPO)

RPO helps determine how much information must be recovered and restored. Another way of looking at RPO
is to ask, “How much data can the company afford to lose?”

MTD (Maximum Tolerable Downtime) or MAD (Maximum Allowable Downtime)

How long it would take for an interruption in service to kill an organization, measured in time. For
instance, if a company would fail because it had to halt operations for a week, then its MTD is one week.

[Timeline: normal business → disaster occurs → recovery → back to normal]
Understanding Business Requirements

Recovery Time Objective (RTO)

RTO is the time measure of how fast you need each system to be up and running in the event of a disaster
or critical failure (measured anywhere from months down to seconds).

Recovery Service Level (RSL)

RSL measures the percentage (0-100%) of the total typical production service level that needs to be
restored to meet BCDR objectives during a disaster.
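As a quick worked example (all figures are hypothetical), the sketch below checks a backup schedule and a restore procedure against RPO and RTO targets:

```python
# Hypothetical targets: lose at most 4 hours of data, be back within 8 hours.
rpo_hours, rto_hours = 4, 8

backup_interval_hours = 6                     # snapshot taken every 6 hours
worst_case_data_loss = backup_interval_hours  # everything since the last snapshot

restore_hours = 2 + 3                         # stand up infrastructure + restore data

print("RPO met:", worst_case_data_loss <= rpo_hours)  # False: back up more often
print("RTO met:", restore_hours <= rto_hours)         # True
```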
Report and Revise

• Once the testing has been completed, a full and comprehensive report detailing all activities,
shortcomings, changes made during the course of testing, and the results of the effectiveness of the
overall BCDR strategy and plan should be presented to the management for review.
• The management will evaluate the effectiveness of the plan, coupled with the goals and metrics deemed
suitable, and the costs associated with obtaining such goals.
• Once the management has a full briefing and the time to evaluate the testing reports, the iterative
process can begin with the changes and modifications to the BCDR plan.
Types of Testing

Criteria | Black Box Testing | White Box Testing
Definition | The internal structure/design/implementation of the item being tested is NOT known to the tester. | The internal structure/design/implementation of the item being tested is known to the tester.
Levels | Mainly applicable to higher levels of testing: Acceptance Testing, System Testing | Mainly applicable to lower levels of testing: Unit Testing, Integration Testing
Responsibility | Independent Software Testers | Software Developers
Programming Knowledge | Not Required | Required
Implementation Knowledge | Not Required | Required
Basis for Test Cases | Requirement Specifications | Detailed Design
Uptime and Availability

Uptime is the time when the actual server is up and powered on and available
to the system administrators

Availability is when the servers are “present and ready for use” and “willing to serve or assist.”

Note: Having a server up and powered on does nothing for your company if the actual services that your site
requires are not up.
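To see what an availability figure implies in practice, here is a short sketch converting “nines” into allowed downtime per year (plain arithmetic, not from the source):

```python
# Convert an availability SLA percentage into allowed downtime per year.
HOURS_PER_YEAR = 365 * 24

for pct in (99.0, 99.9, 99.99, 99.999):
    downtime_hours = HOURS_PER_YEAR * (1 - pct / 100)
    print(f"{pct}% availability -> {downtime_hours * 60:.0f} minutes/year of downtime")
# 99% allows roughly 3.6 days/year; 99.999% allows only about 5 minutes/year.
```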
Real-World Scenario

Review this diagram of a cloud data center campus and determine whether it has sufficient resiliency,
redundancy, and security. Determine the pros and cons of the design.

[Diagram labels: Power Substation; Generator/Fuel Storage; Cooling Towers/Water Storage; Data Processing Facilities; Fence; Parking; Road; Entrance/Visitor Control]
Case Study: Netflix Simian Army

In 2011, Netflix revealed the Simian Army: a set of testing and monitoring applications that randomly
disables Netflix’s production instances to ensure it can withstand failure and provide services without
any customer impact.

• Chaos Monkey randomly disables production instances and services.

• Chaos Gorilla simulates availability zone outage to verify that services automatically re-balance
without impact.

• Latency Monkey induces artificial delays in RESTful client-server communication layer to simulate
service degradation.

• Conformity Monkey finds instances that don’t adhere to best practices and shuts them down.

• Doctor Monkey taps into health checks that run on each instance to detect unhealthy instances.

• Janitor Monkey ensures that Netflix’s cloud environment is running free of clutter and waste.

• Security Monkey finds security violations or vulnerabilities.
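In the same spirit as Chaos Monkey (this is a sketch, not Netflix's code), a minimal chaos test using the AWS boto3 SDK; the region and the chaos-opt-in tag are assumptions, and such a tool should only ever target instances designed to self-heal:

```python
import random
import boto3

# Find instances explicitly tagged as fair game for chaos testing.
ec2 = boto3.client("ec2", region_name="us-east-1")
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:chaos-opt-in", "Values": ["true"]}]
)["Reservations"]
instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instances:
    victim = random.choice(instances)
    ec2.terminate_instances(InstanceIds=[victim])  # the service should recover on its own
    print("terminated", victim)
```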


Security Training and Awareness

Basis of Difference | Education | Training | Awareness
Attribute | Why | How | What
Level | Insight | Knowledge | Information
Objective | Understanding | Skills | Exposure
Teaching Method | Theoretical instruction (discussion, seminar, background reading, research) | Practical instruction (lecture, case study, workshop, hands-on practice) | Media (videos, newsletters, posters)
Test Measure | Essay (interpret learning) | Problem solving (apply learning) | True or false, multiple choice (identify learning)
Impact Timeframe | Long term | Intermediate | Short term
Real-World Scenario: Credential Theft

In February 2016, unknown hackers stole more than $81 million from Bangladesh Bank's account at
the Federal Reserve Bank of New York.
Investigations revealed that Bangladesh's central bank did not have a firewall and used $10 switches
to network computers connected to the SWIFT global payment network.
This made it easy for hackers to steal credentials for the SWIFT messaging system and use malware
to attack the computers used to authorize transactions.
The lack of sophisticated hardware made it harder to trace the origin of the hacks.
The hackers covered their tracks by installing malware on the bank's network to prevent workers
from discovering fraudulent transactions quickly.

Question: What precautions could banks use to protect their SWIFT systems?
Answer: Banks must build multiple firewalls to isolate the SWIFT system from their
other networks and keep the machines physically isolated in a separate locked
room.
Key Takeaways

You are now able to:


Illustrate physical and virtual infrastructure

Describe the specifications necessary for the physical, logical, and


environmental design of the data center
Define the process for analyzing risk in a cloud infrastructure

Develop a plan for mitigating risk in a cloud infrastructure

Create a security control plan that includes the physical


environment, virtual environment, etc.
Describe Disaster Recovery (DR) and Business Continuity
Management (BCM) for cloud systems
Certified Cloud Security Professional (CCSP®)

Certified Cloud Security Professional is a registered trademark of (ISC) 2


Cloud Application Security

Certified Cloud Security Professional is a registered trademark of (ISC) 2


Learning Objectives
By the end of this domain, you will be able to:
Identify the training and awareness required for successful cloud
application security deployment
Describe the software development life cycle process for a cloud
environment
Demonstrate the use and application of the software development
life cycle
Identify the requirements for creating secure identity and access
management solutions
Describe specific cloud application architecture

Describe the steps necessary to ensure and validate cloud


software
Identify the necessary functional and security testing for
software assurance
Summarize the process for verifying secure software
Advocate Training and Awareness for Application Security
API Types

[Diagram: a service requestor exchanges SOAP messages with a service provider.]

Simple Object Access Protocol (SOAP): A protocol and standard for exchanging information between
web services in a structured format.

Features:
• Language, platform, and transport independent
• Works in distributed enterprise environments
• Standardized
• Pre-built extensibility (WS-* standards)
• Built-in error handling
• Automated

API Types

[Diagram: a client communicates with a REST server over HTTP.]

Representational State Transfer (REST): A software architecture style consisting of guidelines and
best practices for creating scalable web services.

Features:

▪ Easier to use and more flexible


▪ No expensive tools
▪ Smaller learning curve
▪ Efficient and fast
▪ Closer to other web technologies in design philosophy
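A minimal REST call sketch using the common Python requests package; the endpoint and token are hypothetical placeholders:

```python
import requests

# A typical REST interaction: plain HTTP verb, resource-style URL, JSON response.
resp = requests.get(
    "https://api.example.com/v1/users/42",
    headers={"Authorization": "Bearer <access-token>"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # REST APIs commonly return JSON, e.g. {"id": 42, ...}
```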
API Formats

REST stack: JSON, PO-XML, or YML payloads over HTTP (POST, GET, PUT, DEL)
SOAP stack: XML payloads in a SOAP envelope (WS-*) over HTTP, SMTP, or FTP

Representational State Transfer (REST) | Simple Object Access Protocol (SOAP)
An architectural style | An XML-based message protocol
Uses only HTTP | Uses a SOAP envelope and HTTP to transfer the data
Supports many different data formats, like JSON, XML, and YAML | Supports only the XML format
Uses caching for performance and scalability | Slower performance, complex scalability, and caching can’t be used
Used widely | Provides WS-* features
Real-World Scenario: Hacking of iCloud Accounts

August 31, 2014: Hackers publicized around 500 private pictures of various celebrities. Due to
a lack of two-factor authentication, hackers were able to continuously attempt to sign into the
celebrities' iCloud accounts and guess password combinations.

Apple has now fixed this vulnerability in its Find My iPhone feature.

Question: What could have prevented the brute force attack on that API?

Answer: Rate limiting
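A minimal token-bucket rate limiter sketch (illustrative only, not Apple's actual fix) showing how an API can throttle repeated sign-in attempts from one client:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # reject: the client must slow down

bucket = TokenBucket(rate=0.1, capacity=5)   # roughly 6 login attempts/minute per client
```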


Common Pitfalls
Cloud Security Application Deployment: Common Pitfalls

1 On-premises does not always transfer (and


vice versa)

2 Not all apps are “Cloud-Ready”

3 Lack of awareness and training

4 Lack of documentation and guidelines

5 Complexities of integration

6 Overarching challenges
Cloud Security Application Deployment: Common Pitfalls

On-premises does not always transfer (and • Present performance and functionality may not be
vice versa) transferable
• Current configurations and applications may be hard to
replicate
Not all apps are “Cloud-Ready”
• Not developed for cloud-based services
• Not all applications can be forklifted to the cloud

Lack of awareness and training

Lack of documentation and guidelines

Complexities of integration
Forklifting: The process of migrating an entire application
to the cloud with minimal code changes.
Overarching challenges
Cloud Security Application Deployment: Common Pitfalls

On-premises does not always transfer (and


vice versa)

Not all apps are “Cloud-Ready”


• The code is reevaluated and altered several times in order
to run effectively in the cloud.
Lack of awareness and training • Many high-end applications come with distinct security
and regulatory restrictions or rely on legacy coding.
• Encryption of information may be required that was not performed
earlier.

Complexities of integration

Overarching challenges
Cloud Security Application Deployment: Common Pitfalls

On-premises does not always transfer (and


vice versa)

Not all apps are “Cloud-Ready”

New development techniques and approaches require


Lack of awareness and training training and willingness to utilize new services.

Lack of documentation and guidelines

Complexities of integration

Overarching challenges
Cloud Security Application Deployment: Common Pitfalls

On-premises does not always transfer (and


vice versa)

Not all apps are “Cloud-Ready”

Lack of awareness and training


• Best practice requires developers to follow relevant
documentation, guidelines, methodologies, processes,
Lack of documentation and guidelines and life cycles.
• Most up-to-date guidance may not always be available,
particularly for new releases and updates.
Complexities of integration

Overarching challenges
Cloud Security Application Deployment: Common Pitfalls

On-premises does not always transfer (and


vice versa)

Not all apps are “Cloud-Ready”

Lack of awareness and training

Lack of documentation and guidelines


• Integration is complicated when the CSP manages
infrastructure, applications, and integration platforms.
Complexities of integration • Collection or keeping tracks of events and transactions is
difficult across interdependent or underlying components.
• Using CSPs wherever it’s possible.
Overarching challenges
Cloud Security Application Deployment: Common Pitfalls

On-premises does not always transfer (and


vice versa)

Not all apps are “Cloud-Ready”

Lack of awareness and training

Lack of documentation and guidelines

Complexities of integration

• Multitenancy
• Third-party administrators
Overarching challenges
Encryption Dependency Awareness
Awareness of Encryption Dependencies

• Encryption of data at rest: Addresses encryption of data stored within the CSP network
• Encryption of data in transit: Addresses security of data while it traverses the network
• Data masking (or data obfuscation): The process of hiding original data using random characters
or data
Business Scenario
Real World Scenario
A large DNS provider experienced a network attack by thousands of Internet of Things (IoT)
devices infected with Botnet malware.

When combined, these devices attacked a DNS Provider. As a result, some of the largest online
retailers in the country were offline for several hours.

This went on for almost a day, and cost estimates were well into millions of dollars in lost sales.

Question: What type of attack was experienced by the DNS provider?

Answer: Distributed Denial of Service (DDoS)


Understanding Software Development Lifecycle Process
Cloud Secure Development Life Cycle

Planning and Requirements Analysis → Defining → Designing → Developing → Testing → Maintenance
Disposal Phase

Crypto-shredding: The deletion of the key used to encrypt data that’s stored in the cloud
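A minimal crypto-shredding sketch, assuming the Python cryptography package; the point is that destroying every copy of the key renders the stored ciphertext unrecoverable:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()                    # the data-encryption key
ciphertext = Fernet(key).encrypt(b"record to be disposed of later")
# ... ciphertext lives in cloud storage for its retention period ...

key = None   # crypto-shredding: securely destroy *every* copy of the key
# With no key, the ciphertext is computationally unrecoverable, even on
# storage media the customer can never physically wipe.
```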

346
Real-World Scenario: Revoking Access

Several years ago, an IT employee who had access to numerous administrative-level accounts
across network equipment and servers on a city's private cloud was fired. He retaliated by
locking city employees out of these systems. One of the affected systems was the payroll
system.

After discovering the intrusion, police took him into custody.

Question: What should be included in an employee termination policy?

Answer: Revoke access immediately after termination.


Vulnerabilities and Risks
OWASP Top 10

OWASP Top 10 (old) | OWASP Top 10 (new)
A1: Injection | A1: Injection
A2: Broken Authentication and Session Management | A2: Broken Authentication
A3: Cross-Site Scripting (XSS) | A3: Sensitive Data Exposure
A4: Insecure Direct Object References | A4: XML External Entities (XXE)
A5: Security Misconfiguration | A5: Broken Access Control
A6: Sensitive Data Exposure | A6: Security Misconfiguration
A7: Missing Function Level Access Control | A7: Cross-Site Scripting (XSS)
A8: Cross-Site Request Forgery (CSRF) | A8: Insecure Deserialization
A9: Using Components with Known Vulnerabilities | A9: Using Components with Known Vulnerabilities
A10: Unvalidated Redirects and Forwards (dropped) | A10: Insufficient Logging and Monitoring
The Notorious Nine

1. Data Breaches
2. Insecure APIs
3. Abuse and Nefarious Use
4. Data Loss
5. Denial of Service
6. Insufficient Due Diligence
7. Account Hijacking
8. Malicious Insiders
9. Shared Technology Issues
Threat Modeling
Threat Modeling

[Diagram: a threat is analyzed in terms of its classification, potential actors, attack surface, and potential mitigations.]

Threat modeling is a process by which potential threats are identified, enumerated, and prioritized from
an attacker’s point of view.

The purpose is to provide defenders with the probable attacker’s profile, likely attack vectors, and the
assets desired by the attacker.
Threat Modeling

It answers the questions:


• Where are the high-value assets?

• Where am I most vulnerable to attack?

• What are the most relevant threats?

• Is there an attack vector that might go unnoticed?

STRIDE

STRIDE is a threat classification model developed by Microsoft for thinking about computer security
threats.
Spoofing identity

Tampering with data

Repudiation

Information disclosure (privacy breach or data leak)

Denial of Service

Elevation of Privilege

Supplemental Security Devices
Supplemental Security Devices

Web Application Firewall (WAF)
• A Layer 7 firewall that understands HTTP traffic
• A cloud WAF is effective in the case of a DoS attack

Database Activity Monitoring (DAM)
• A Layer 7 monitoring device that understands SQL commands
• Is agent-based (ADAM) or network-based (NDAM)
• Detects and stops malicious commands from executing on an SQL server

XML Appliances/Gateways
• Transform the way services and sensitive data are exposed as APIs
• Are either hardware or software
• Implement security controls
Supplemental Security Devices

Firewalls
■ Are distributed or configured across the SaaS, PaaS, and IaaS landscapes
■ Are cloud-based and need to be installed as software components

API Gateway
■ A device that filters API traffic
■ Implements access control, rate limiting, logging, metrics, and security filtering
Real-World Scenario

Recently a client was experiencing a massive Layer 7 DDOS attack, generating tens of thousands of
random HTTP requests per second to their web server.

HTTP flood attacks have little dependency on bandwidth allowing them to easily take down a server.

With this type of attack, the server-level caching is unable to stop it. The incoming URLs are dynamic and
the application forces a reload of the content for every new request, not in the cache.

The solution, an emergency DDoS protection feature, uses JavaScript to prevent malicious bots from
hitting the site. An intelligent log correlation system pinpoints the IP addresses and traffic patterns,
blocking the incoming attack at the edge via the web application firewall.

Question: What is OSI Layer 7?

Answer: Application Layer.


Encryption
Data-in-Transit Encryption

Secure Socket Layer (SSL)
• The standard security technology for establishing an encrypted link between a web server and a browser
• Ensures that all data passed between the web server and browsers remains private and integral

Transport Layer Security (TLS)
• A protocol that ensures privacy between communicating applications and their users on the Internet
• Ensures that no third party eavesdrops on or tampers with a message during communication between a
server and a client
• The successor to SSL

Virtual Private Network (VPN, such as an IPSec gateway)
• A network constructed by using public wires (usually the Internet) to connect to a private network,
such as a company’s internal network
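A minimal data-in-transit sketch using Python's standard ssl module to open a certificate-validated TLS connection; the host name is a placeholder:

```python
import socket
import ssl

# The default context validates the server certificate against the system CAs.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print(tls_sock.version())   # e.g. 'TLSv1.3'
```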
Data-at-Rest Encryption

Whole Instance Encryption: A method for encrypting all the data associated with the operation and use of
a virtual machine

Volume Encryption: A method for encrypting a single volume on a drive

File or Directory Encryption: A method for encrypting a single file or directory on a drive
Sandboxing and Application Virtualization
Sandboxing

Sandboxing is the segregation and isolation of information or processes from other components within
the same system or application.

Where is it used?

1 Data isolation: To keep different communities and populations of


users isolated from similar data

2 Cloud environments without physical network: To run untested or


untrusted code in a tightly controlled environment

Application Virtualization

• It is a technology that creates a virtual environment for an application to run.

• This virtualization creates an encapsulation from the underlying OS.

• It can be used to isolate or sandbox an application to observe the processes the application performs.

• Examples:

o Wine, which allows for some Microsoft applications to run on a Linux platform

o Microsoft App-V

o XenApp

Federated Identity Management
Federated Identity Management

A model that enables companies with different technologies, standards, and use-cases to share their
applications by allowing individuals to use the same login credentials across security domains.

The main purpose is to allow registered users of a certain domain to access information
from other domains without having to provide extra administrative user information.

Federation Standards

• Security Assertion Markup Language (SAML) 2.0: An XML-based framework for communicating user
authentication, entitlement, and attribute information
• WS-Federation: A specification that defines mechanisms to allow different security realms to federate,
such that authorized access to resources in one realm can be provided to security principals in other realms
• OpenID Connect: Lets developers authenticate their users across websites and apps without having to own
and manage password files
• OAuth: An authorization framework (OAuth 2.0) that enables a third-party application to obtain limited
access to an HTTP service
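A minimal OAuth 2.0 authorization-code exchange sketch using the Python requests package; every endpoint and credential shown is a hypothetical placeholder:

```python
import requests

# Exchange the authorization code (returned to our redirect URI) for tokens.
token_response = requests.post(
    "https://auth.example.com/oauth2/token",
    data={
        "grant_type": "authorization_code",
        "code": "<authorization-code>",
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "<client-id>",
        "client_secret": "<client-secret>",
    },
    timeout=10,
).json()

access_token = token_response["access_token"]  # grants limited, scoped access
```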
SAML Authentication

Acme Corp. uses Gmail as its corporate e-mail platform. They want to control user access credentials to
enforce password policies, so Acme Corp. set up a relationship with Google that allows it to do just that
using SAML.

Whenever a user attempts to access the corporate Gmail account, Gmail redirects the request to Acme's SSO
service, which authenticates the user and relays a SAML response:

1. The user tries to reach the hosted Google application.
2. Google generates a SAML request and redirects the browser to the SSO URL.
3. The browser redirects to the SSO URL.
4. ACME (the identity provider) parses the SAML request and authenticates the user.
5. Acme generates a SAML response and returns the encoded response to the browser.
6. The browser sends the SAML response to Google.
7. Google verifies the SAML response.
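As a sketch of step 7, a service provider base64-decodes the posted SAMLResponse and reads the asserted subject (standard-library Python only; a real deployment must also verify the assertion's XML signature, audience, and validity window):

```python
import base64
from xml.etree import ElementTree as ET

def asserted_subject(saml_response_b64: str) -> str:
    """Decode a SAMLResponse form field and return the asserted NameID."""
    xml_bytes = base64.b64decode(saml_response_b64)
    root = ET.fromstring(xml_bytes)
    ns = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
    return root.find(".//saml:Subject/saml:NameID", ns).text
```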
SAML Authentication

A related flow grants a web application delegated access to a user's Google apps data:

1. The user wants access to the web application.
2. The web application redirects the user to a Google page that asks the user to grant the application access.
3. The user gives the web application access to their Google apps data.
4. Google sends the web application an authorization code.
5. The web app sends the authorization code and its client credentials to Google and receives a new token.

Google records web app access by the token issued, and the admin or the user can revoke the token.
Case Study

• At the University of Arizona Libraries, study room management had become a time-consuming task.

• In late August 2012, they decided to implement a web application which enabled the students to find
and reserve unmediated study spaces from their smartphones anytime.

• This reduced the staff time involved in the room management.

• For the University of Arizona, an important feature was the integration of Shibboleth, an open-source,
single sign-on authentication system for complex federated environments based on the Security
Assertion Markup Language (SAML).

Identity and Access Management
Identification, Authentication, and Authorization

Identification: Describes a method of ensuring that a subject is a real entity

Authentication: Identifies the individual by asking, “Who are you?” and “How do I know I can trust you?”

Authorization: Grants access to an object after the subject is identified and authenticated; evaluates,
“What do you have access to?”
Identity and Access Provisioning Life Cycle

Provisioning
● Create new accounts
● Provision them with appropriate rights and privileges

Review and maintenance
● Check accounts periodically
● Disable inactive accounts
● Check for excessive and creeping privileges

Deprovisioning
● Disable an account as soon as an employee leaves
● Set account expiry dates for temporary accounts
● Delete an expired account as per organization policy
Multi-Factor Authentication
Multi-Factor Authentication

• It is also known as two-factor authentication and strong authentication.

• The general principle is to add an extra level of protection to verify the legitimacy of a transaction.

• To be a multifactor system, users must be able to provide at least two of the following factors:

Something you know | Something you have | Something you are
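As an example of “something you have,” the sketch below implements the standard TOTP one-time-code algorithm (RFC 6238) with only the Python standard library; the shared secret is a placeholder:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret; matches common authenticator apps
```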
Real-World Scenario: FAR

bunq, the world’s first mobile-only bank faced the issue of how to securely authenticate its customers.

The company chose face biometrics, as it is supported on smartphones. However, face biometrics are
easy to spoof and have high rates of false rejection.

An alternative to this was Veridium’s 4 Fingers TouchlessID which collects the user’s fingerprints from
the rear camera and the LED flash of a smartphone.

Switching from face to hand recognition reduced complaints of failed authentication attempts by
up to 90%.

Question: What is False Acceptance Rate?

Answer: FAR is the measure of the likelihood of the biometric security system
incorrectly accepting an access attempt by an unauthorized user.
Case Study

• In January 2010, Google announced that the Chinese government had been targeting it to gain access
to the email accounts of human rights activists working in China and around the world.

• The attacks led to a number of changes at Google, in terms of security infrastructure and policy. As a
result, Google decided to shut down operations in China.

• Later that year, Google introduced its two-factor authentication system for business accounts, and
then to the general public.

• In the years since, major companies such as Microsoft, Twitter, Apple, and Amazon offer 2FA options
across a multitude of online platforms. Competition on security features, high-profile breaches, and
the everyday occurrence of account hijackings have led to demands for better authentication.

• However, many consumers are not availing themselves of these options, as they are less convenient and
more complex than a password.

Cloud Access Security Broker (CASB) and ISO/IEC 27034-1
Cloud Access Security Broker (CASB)

A Cloud Access Security Broker (CASB) is an on-premises or cloud-based security policy enforcement point that is
placed between cloud service consumers and cloud service providers. It ensures that the network traffic between
on-premises devices and the cloud provider complies with the organization's security policies. A CASB acts as a
gatekeeper, allowing the organization to extend the security controls of its on-premises infrastructure to the
cloud.

A CASB uses auto-discovery to identify the cloud applications in use, along with high-risk applications, users,
and other key risk factors. Example: Some CASBs can benchmark an organization’s security configurations against
industry best practices and regulatory requirements.

Organizations are increasingly turning to CASB vendors to address cloud service risks, enforce security
policies, and comply with regulations.
ISO/IEC 27034-1

Provides one of the most widely accepted sets of standards and guidelines for secure application development

Key elements:

• Organizational Normative Framework (ONF)
• Application Normative Framework (ANF)
• Application Security Management (APSM)
Application Security Testing
Application Security Testing

Static Application Security Testing (SAST)

• A white-box test
• Performs an analysis of the application source code, byte code, and binaries without
executing the application code
• Determines coding errors and omissions that are indicative of security vulnerabilities
• Can be used to find XSS errors, SQL injection, buffer overflows, unhandled error
conditions, and potential backdoors

Application Security Testing

Dynamic Application Security Testing (DAST)

• A black-box test
• Discovers individual execution paths in the application being analyzed
• DAST is used against applications in their running state
• Considered effective when testing exposes HTTP and HTML interfaces of web applications

OWASP Testing Guide

• Identity management testing


• Authentication testing
• Authorization testing
• Session management testing
• Input validation testing
• Error handling testing
• Weak cryptography testing
• Business logic testing
• Client-side testing

Software Supply Chain Management
Software Supply Chain (API) Management

• Cloud-based systems and modern web applications consist of software, API calls, components, and
external data sources.
• The integration of external API calls and web services allows an application to leverage enormous
numbers of external data sources and functions.
• These external sources are outside the control of the developer or the organization.
• Software components produced without secure software development guidance can create security risks
throughout the supply chain.

[Diagram: end users consume apps that call web APIs hosted both on-premises and in the cloud.]
Real-World Scenario: Hacking of Jeep Cherokee

In 2015, hackers remotely commandeered the controls of a Jeep Cherokee.

Hackers Charlie Miller and Chris Valasek demonstrated a Digital Crash-Test Dummy to
highlight vulnerabilities in Internet-connected entertainment and navigation systems
featured in many new vehicles.

Following this incident, Chrysler released a software update to improve vehicle security and
recalled 1.4 million recent models.

Question: Who are white hat hackers?

Answer: White hat hackers use their skills to improve security by exposing
vulnerabilities before malicious hackers known as black hat hackers can detect and
exploit them.
Key Takeaways
You are now able to:
Identify the training and awareness required for successful cloud
application security deployment
Describe the software development life cycle process for a cloud
environment
Demonstrate the use and application of the software development
life cycle
Identify the requirements for creating secure identity and access
management solutions
Describe specific cloud application architecture

Describe the steps necessary to ensure and validate cloud


software
Identify the necessary functional and security testing for
software assurance
Summarize the process for verifying secure software
Certified Cloud Security Professional (CCSP®)

Certified Cloud Security Professional is a registered trademark of (ISC) 2


Cloud Security Operations

Certified Cloud Security Professional is a registered trademark of (ISC) 2


Learning Objectives
By the end of this domain, you will be able to:

Identify the requirements to build and implement the physical cloud


infrastructure
Define the process for running and managing the physical
infrastructure based on access, security controls, and availability
configurations, analysis, and maintenance
Identify the requirements to build and implement the logical cloud
infrastructure
Define the process for managing the logical infrastructure based
on access, monitoring, security controls, availability
configurations, analysis, and maintenance
Identify the necessary regulations and controls to ensure
compliance for the operation and management of the cloud
infrastructure
Describe the process of conducting a risk assessment of the
physical and logical infrastructure
Describe the process for the collection, acquisition, and
preservation of digital evidence
Secure Configuration of Hardware: Servers
Best Practices for Servers

Host hardening: To achieve this, remove all nonessential services and software from the host.

Host patching

Host lockdown

Secure ongoing configuration maintenance

Trusted Platform Module (TPM)

Hardware Security Modules (HSM)
Best Practices for Servers

Host patching: To achieve this, install all patches provided by the vendors whose hardware and software
are being used to create the host server. These patches may include basic input/output system
(BIOS)/firmware updates, driver updates for specific hardware components, and OS security patches.
Best Practices for Servers

Host lockdown: To achieve this, implement host-specific security measures, such as:
• Blocking non-root access to the host under most circumstances (local console access only via a root account)
• Allowing the use of secure communication protocols and tools to access the host remotely, such as PuTTY
with secure shell (SSH)
• Configuring and using a host-based firewall
• Using Role-Based Access Controls (RBACs) to limit user access to the host and what permissions users have
Best Practices for Servers

Secure ongoing configuration maintenance: To achieve this, use a variety of vendor-specific and
non-vendor-specific mechanisms, such as:
• Patch management of hosts, guest OSs, and application workloads running on them
• Periodic vulnerability assessment scanning of hosts, guest OSs, and application workloads running on hosts
• Periodic penetration testing of hosts and guest OSs running on them
Best Practices for Servers

Trusted Platform Module (TPM): A TPM is a hardware chip on the computer’s motherboard that stores
cryptographic keys used for encryption.

Features of TPM:
• Provides full-disk encryption capabilities
• Provides integrity and authentication to the boot process
• Keeps hard drives locked until system verification and authentication complete

A TPM includes a unique RSA key burned into it, which is used for asymmetric encryption. Additionally,
it can generate, store, and protect other keys used in the encryption and decryption process.
Best Practices for Servers

Hardware Security Modules (HSM): An HSM is a security device that generates, manages, and securely stores cryptographic keys.

High-performance HSMs are external devices connected to a network using TCP/IP. Lower-performance HSMs are expansion cards installed within a server or devices plugged into computer ports.

HSMs can be added to a system or a network, but if a system did not ship with a TPM, it is not feasible to add one later.

Both HSM and TPM provide secure encryption capabilities by storing and using RSA keys.
Best Practices for Storage Controllers

• Implementing a virtualized system allows the storage traffic to be segregated and isolated on its own LAN

• Prioritizing resolution of latency issues for storage systems over typical network traffic

• Offering a built-in encryption capability by storage controllers ensures confidentiality of the data transiting
the controller

• Addressing insecure settings


Real-World Scenario: Data Theft

In March 2011, Health Net, a provider of managed healthcare services, began notifying
1.9 million patients that nine server drives containing personal and health data were
stolen from a data center managed by IBM in Rancho Cordova, California.

The missing drives contained names, addresses, social security numbers, financial
information, and health data of customers, employees, and healthcare providers.
Real-World Scenario: Data Theft

This incident was the second data breach by Health Net in two years. In 2009, Health Net's Connecticut office had lost a portable hard drive containing health and financial data on 1.5 million policyholders. The Connecticut attorney general filed a lawsuit in federal court, as the company had not only failed to protect the personal data but also did not notify affected individuals in a timely manner. Health Net agreed to pay $2.5M in damages and offer stronger consumer protections to settle the lawsuit.

Question: What is the best practice for storing data on hard drives?

Answer: Data on hard drives has to be encrypted; consider using a TPM or an HSM.
Internet Small Computer System Interface

iSCSI is a protocol that uses TCP to transport SCSI commands. It enables the use of the existing TCP/IP
infrastructure as a SAN.

iSCSI makes block devices available via the network; unlike Network Attached Storage (NAS), which
presents devices at the file level.

iSCSI must be considered as a local-area technology, not a wide-area technology, because of latency
issues and security concerns.

Layering two VLANs is a good way to segregate iSCSI traffic from general traffic.
Initiators and Targets

Types of storage network equipment:

Initiator

The consumer of storage, typically a server with an adapter card in it called a Host Bus Adapter (HBA). The initiator commences a connection over the fabric to one or more ports on your storage system, which are called target ports.

Target

These are the ports on the storage system that deliver storage volumes (called target devices or Logical Unit
Numbers [LUNs]) to the initiators.
Oversubscription

Oversubscription occurs when more users are connected to a system than can be fully supported at the same time.
Networks and servers are almost always designed with some amount of oversubscription with the
assumption that not all users need the service simultaneously.
Oversubscription is permissible on general-purpose LANs.

Best practices:
• To have a dedicated local area network (LAN) for iSCSI traffic
• Not to share the storage network with other network traffic, such as management, fault tolerance, or
vMotion/Live Migration
iSCSI Implementation Considerations

Encrypt the traffic (IPsec)

Authenticate the traffic (Kerberos, CHAP)

Implement a private network (use a dedicated VLAN, 802.1Q)
Virtual Switches Best Practices

Virtual switches connect the physical Network Interface Cards (NICs) in the host server to the
virtual NICs in VMs. Switches support 802.1Q tagging, which allows multiple VLANs to be used on a
single physical switch port to reduce the number of physical NICs needed in a host.

Best practices:
• Utilizing several types of ports and port groups separately rather than all together on a single
virtual switch offers higher security and better management.
• Achieving virtual switch redundancy by assigning at least two physical NICs to a virtual switch,
with each NIC connecting to a different physical switch.
Other Virtual Network Security Best Practices

• Moving live VMs from one host to another is done in clear text over the network, which can allow an attacker to "sniff" the data or perform a man-in-the-middle attack while a live migration occurs
• Lock down access to virtual switches so that an attacker cannot move VMs from one network to another; this also keeps VMs from straddling an internal and an external network

While dealing with internal and external networks, it is necessary to create a separate, isolated virtual switch with its own physical network interface cards and never mix internal and external traffic on a virtual switch.
Configuration of VM Tools
Leading Practices of VM Tools

Defense in depth

Implement the tools used to manage the host as part of a larger architectural design that mutually reinforces
security at every level of the enterprise.

Access control

Secure the tools and tightly control and monitor access to them.

Auditing and monitoring

Monitor and track the use of the tools throughout the enterprise to ensure proper usage.

Maintenance

Update and patch the tools as required to ensure compliance with all vendor recommendations and security
bulletins.
Cloud Environments: Sharing a Physical Infrastructure

Important considerations when sharing resources:


Legal

Exposing your data in an environment shared with other companies can give the government reasonable
cause to seize your assets, because another company has violated the law.

Compatibility
Storage services provided by one cloud vendor may be incompatible with another vendor’s services should
you decide to move from one to the other.
Cloud Environments: Sharing a Physical Infrastructure

Important considerations when sharing resources:

Control
If information is encrypted while passing through the cloud, does the customer or cloud vendor control the
encryption and decryption keys?
Most consumers probably want their data encrypted both ways across the internet using the secure sockets
layer (SSL) protocol.

Log data
SaaS suppliers have to provide log data to their administrators and customers in a real-time, straightforward manner, because the SaaS provider's logs might not otherwise be externally accessible, which would make monitoring difficult.
Cloud Environments: Sharing a Physical Infrastructure

PCI DSS access

Access to logs is required for PCI DSS compliance. Security managers need to make sure to negotiate access
to the provider’s logs as part of any service agreement.

Upgrades and changes

Cloud applications constantly experience addition of new features. Users must keep applications up-to-date
to make sure that they are well protected.

A secure software development life cycle may not be able to provide a security cycle that keeps up with
changes that occur so quickly. This means that users must constantly upgrade because an older version may
not function or protect the data.
Cloud Environments: Sharing a Physical Infrastructure

Failover technology
Administering failover technology is a component of securing the cloud that is often overlooked.

Security needs to be moved to the data level so that enterprises can be sure that their data is protected
wherever it goes.

Compliance

SaaS makes the process of compliance more complicated because it is difficult for a customer to discern
where his data resides on a network controlled by the SaaS provider, or a partner of that provider, which
raises all sorts of compliance issues of data privacy, segregation, and security.

Some countries have strict limits on what data can be stored about their citizens and for how long.
Cloud Environments: Sharing a Physical Infrastructure

Regulations

Compliance with government regulations, such as the Sarbanes-Oxley Act (SOX), the Gramm-Leach-Bliley Act
(GLBA), the Health Insurance Portability and Accountability Act (HIPAA), and industry standards such as the PCI
DSS are much more challenging in the SaaS environment.

Outsourcing
Outsourcing means losing significant control over data.

It is necessary to work with a company’s legal staff to ensure that appropriate contract terms are in place to
protect corporate data and provide for acceptable SLAs.
Cloud Environments: Sharing a Physical Infrastructure

Placement of security

Cloud-based services result in many mobile IT users accessing business data and services without traversing
the corporate network.

Virtualization
Virtualization efficiencies in the cloud require VMs from multiple organizations to be colocated on the same
physical resources.
Administrative access is through the Internet rather than the controlled and restricted direct or on-premises
connection that is adhered to in the traditional data center model.

VM

The dynamic and fluid nature of VMs makes it difficult to maintain the consistency of security and ensure that records can be audited.
Proving the security state of a system and identifying the location of an insecure VM is challenging. The
colocation of multiple VMs increases the attack surface and risk of VM-to-VM compromise.
Real-World Scenario: EC2 Vulnerability

On April 8th, 2011, Amazon sent out an email to its Elastic Compute Cloud customers acknowledging the
presence of compromised images in the Amazon AMI community. AMI stands for Amazon Machine
Image, a pre-configured virtual guest.

The infected image consisted of an Ubuntu 10.04 server running Apache, MySQL, and PHP, typically used for hosting a website.
The "certified pre-owned" image had the publisher's public key left in /root/.ssh/authorized_keys and /home/ubuntu/.ssh/authorized_keys, allowing the publisher to log in to any server instance running his image as the root user.
Real-World Scenario: EC2 Vulnerability

The publisher claimed this was purely an accident, a mere result of his inexperience. While this may or
may not be true, this incident exposes a major security hole within the EC2 community.

Question: What is the risk of using preconfigured system images?

Answer: Organizations must analyze the risk associated with pre-configured cloud-based systems, and
consider the option of configuring the system from the “ground up,” beginning with the base operating
system.
Securing Network Configuration (Part 1)
Securing Network Configuration

VLAN
TLS
DNS
DNSSEC
IPSec
Securing Network Configuration

VLAN

• VLAN is an Institute of Electrical and Electronics Engineers (IEEE) standard networking scheme with specific
tagging methods that allow routing of packets to only those ports that are part of the VLAN.
• VLANs do not guarantee that data will be transmitted securely and will not be tampered with or
intercepted while on the wire.
Securing Network Configuration

TLS

It uses X.509 certificates to authenticate a connection and to exchange a symmetric key.


Types of protocols:
• TLS handshake protocol: Allows the client and the server to authenticate each other and to negotiate an
encryption algorithm along with cryptographic keys before data is sent or received.
• TLS record protocol: Provides connection security and ensures that the connection is private and reliable
and is used to encapsulate higher-level protocols, including the TLS handshake protocol
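
To make the handshake and record protocols concrete, here is a minimal sketch using Python's standard ssl module; the host name is a placeholder, and this is an illustration rather than part of the official material:

    import socket
    import ssl

    # create_default_context loads trusted CA certificates and enforces
    # X.509 certificate validation during the handshake
    context = ssl.create_default_context()

    with socket.create_connection(("example.com", 443)) as raw_sock:
        # wrap_socket runs the TLS handshake protocol: server authentication
        # and negotiation of algorithm and keys before any data is exchanged
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
            print(tls.version())   # negotiated protocol version, e.g., TLSv1.3
            print(tls.cipher())    # negotiated cipher suite
            # from here on, the TLS record protocol protects application data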
Securing Network Configuration

DNS

Threats to the DNS infrastructure:


• Footprinting: The process by which an attacker obtains DNS zone data, including DNS domain names,
computer names, and IP addresses for sensitive network resources
• Denial-of-service attack: When an attacker attempts to deny the availability of network services by
flooding one or more DNS servers in the network with queries
Securing Network Configuration

DNS

• Data modification: An attempt by an attacker to spoof valid IP addresses in IP packets that the attacker
has created. This gives these packets the appearance of coming from a valid IP address in the network.
• Redirection: When an attacker can redirect queries for DNS names to servers that are under the control
of the attacker
• Spoofing: When a DNS server accepts and uses incorrect information from a host that has no authority to
give that information
Securing Network Configuration

DNSSEC

• DNSSEC is a suite of extensions that adds security to the Domain Name System (DNS) protocol by enabling
DNS responses to be validated.
• DNSSEC provides origin authority, data integrity, and authenticated denial of existence.
• In the presence of DNSSEC, the DNS protocol is much less susceptible to certain types of attacks.
• Validation of DNS responses occurs through the use of digital signatures that are included with DNS
responses.
• It does not address confidentiality or availability.
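
As an illustration only, the following sketch uses the third-party dnspython library (assumed installed via pip install dnspython) to request DNSSEC data for a zone and check whether the response carries RRSIG records, the digital signatures described above:

    import dns.message
    import dns.query
    import dns.rdatatype

    # Ask for A records and set the DNSSEC OK flag via want_dnssec
    query = dns.message.make_query("example.com", dns.rdatatype.A, want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)

    # A DNSSEC-signed answer includes RRSIG records alongside the data
    has_rrsig = any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in response.answer)
    print("RRSIG present:", has_rrsig)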
Securing Network Configuration

IPSec

• IPSec includes protocols for establishing mutual authentication at the beginning of the session and
negotiating cryptographic keys to be used during the session.
• IPSec supports network-level peer authentication, data origin authentication, data integrity, encryption,
and replay protection.
• A major difference between IPSec and other protocols such as TLS is that IPsec operates at the internet
network layer rather than the application layer, allowing end-to-end encryption of all communications and
traffic.
Real-World Scenario: DNS Attack

In 2013, the New York Times' website was hit with a Domain Name System
(DNS) attack and became inaccessible.

The compromise was the result of a targeted phishing attack against a reseller for Melbourne IT, an Australian domain registrar and IT services company.

The attack resulted in hackers changing the DNS records for several domain names, including nytimes.com. This resulted in traffic to those websites being temporarily redirected to a server under the attackers' control.
Real-World Scenario: DNS Attack

The affected DNS records were reverted soon after, but this highlights the fact that with most DNS providers and for most DNS records, there is no real security on DNS addresses.

Question: Could the New York Times have prevented this attack?

Answer: They could have done a registry lock which makes it very difficult for anyone to alter the
DNS records that govern the links between a domain name and an IP address.
Clustered Host
Clustering

A cluster is a group of hosts combined physically or logically by a centralized management system to allow redundancy, configuration synchronization, failover, and minimization of downtime.

Features of resource sharing:


• Reservations guarantee a minimum amount of the cluster’s pooled resources be made available to
a specified VM.
• Limits guarantee a maximum amount of the cluster’s pooled resources be made available to a
specified VM.
• Shares provision the remaining resources left in a cluster when there is resource contention.
Distributed Resource Scheduling (DRS)

DRS provides high availability, scaling, management, workload distribution, and balancing of jobs and
processes.

As loads change, virtual hosts can be moved between physical hosts to maintain proper balance, which:
• Provides highly available resources to workloads
• Balances workload for optimal performance
• Scales and manages computing resources without service disruption
Dynamic Optimization (DO)

Dynamic optimization is the process through which the cloud environment is constantly maintained to
ensure resources are available when and where needed.

With auto-scaling and elasticity, a cloud environment can change from one moment to the next through automated means, without any human intervention or action.

With rapid elasticity, capabilities can be rapidly and elastically provisioned, in some cases automatically,
to scale rapidly outward and inward, commensurate with demand.
Storage Cluster

Clustered storage is the use of two or more storage servers working together to increase performance,
capacity, or reliability.

Clustering distributes workloads to each server, manages the transfer of workloads between servers, and
provides access to all files from any server regardless of the physical location of the file.

Storage clusters must be designed to:


• Meet the required service levels as specified in the SLA
• Provide for the ability to separate customer data in multitenant hosting environments
• Securely store and protect data through the use of availability, integrity, and confidentiality (AIC) mechanisms
Maintenance Mode and Patch Management
Maintenance Mode

Maintenance mode refers to the physical hosts and times when upgrades, patching, or other
operational activities are necessary.

All operational instances are removed from the system/device before entering maintenance mode.

While in maintenance mode, customer access is blocked and alerts are disabled (although logging is still
enabled).

Note: It is important to test if the system or device has all the original functionality necessary for
customer purposes before moving it back to normal operation from maintenance mode.
Patch Management

Patch management is the process of identifying, acquiring, installing, and verifying patches for
products and systems.

Not implementing patch management properly results in:


• Failing to provide due care for those customers utilizing the unpatched products
• Affecting the production environment, harming the customer's ability to operate

Best Practices:
• Test the patches before pushing them out
• Perform system backup before applying patch
Patch Management

Implementation: Automated or Manual


Automated:
• Delivers patches faster and to more targets than the manual approach
• Provides reports that annotate which targets have received the patch, cross-referenced against the asset inventory, and has an alerting function to inform administrators which targets have been missed (a minimal sketch of this reporting step follows below)
• Requires frequent human monitoring

Manual:
• Trained and experienced personnel are more trustworthy than a mechanized tool and might understand when anomalous activity occurs
• Slower than the automated approach and may not be as thorough
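
A minimal sketch of the reporting step referenced in the list above; the host names and inventory format are hypothetical:

    # Cross-reference patch-tool results against the asset inventory and
    # alert administrators about any targets that were missed.
    asset_inventory = {"web-01", "web-02", "db-01", "app-01"}   # all managed hosts
    patched_targets = {"web-01", "db-01", "app-01"}             # reported by the patch tool

    missed = asset_inventory - patched_targets
    if missed:
        # in practice this would feed the alerting function for administrators
        print("ALERT: unpatched targets:", ", ".join(sorted(missed)))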
Patch Management

Challenges of Patch Management:


• Lack of service standardization
• Patch management is not simply using a patch tool to apply patches to endpoint systems, but rather
a collaboration of multiple management tools and teams, such as change management and patch
advisory tools
• In a large enterprise environment, patch tools need to be able to interact with a large number of
managed entities in a scalable way and handle the heterogeneity that is unavoidable in such
environments
• To avoid problems associated with automatically applying patches to endpoints, thorough testing of
patches beforehand is absolutely mandatory
Performance Monitoring
Performance monitoring is essential for the secure and reliable operation of a cloud environment. Key metrics cover four subsystems:

• Network: Excessive dropped packets
• Disk: Full disk or slow reads and writes to the disks (input/output operations per second)
• Memory: Excessive memory usage or full utilization of available memory allocation
• CPU: Excessive CPU utilization
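
These four subsystems map directly to host metrics. The sketch below uses the third-party psutil library (pip install psutil); the 90% thresholds are illustrative assumptions, not prescribed values:

    import psutil

    cpu = psutil.cpu_percent(interval=1)        # CPU utilization (%)
    mem = psutil.virtual_memory().percent       # memory utilization (%)
    disk = psutil.disk_usage("/").percent       # disk capacity used (%)
    net = psutil.net_io_counters()              # cumulative network counters

    if cpu > 90:
        print(f"CPU utilization high: {cpu}%")
    if mem > 90:
        print(f"Memory utilization high: {mem}%")
    if disk > 90:
        print(f"Disk nearly full: {disk}%")
    if net.dropin or net.dropout:
        print(f"Dropped packets: in={net.dropin}, out={net.dropout}")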
Outsourcing Monitoring

Outsourcing the monitoring function to a trusted third party for the 24/7 monitoring of the cloud
environment
Use the following approaches to assess risk while outsourcing:
• Having HR check references
• Examining the terms of any SLA or contract being used to govern service terms
• Executing some form of trial of the managed service in question before implementing into
production
Real-World Scenario: Sony Hack Attack

In late 2014, Sony Pictures suffered one of the worst corporate hack attacks in history when attackers going by the name Guardians of Peace employed a previously undisclosed vulnerability to break into Sony's systems.

These types of vulnerabilities are known as Zero-Day because the original programmer has zero days
after learning about it to patch the code before it can be exploited in an attack. These flaws are
usually the result of errors made during the writing of the software, giving an attacker wider access to
the rest of the software. More often, they remain undetected until an attack has occurred.
Real-World Scenario: Sony Hack Attack

The attackers first crippled its network and then released sensitive corporate data on public file-sharing
sites, including four unreleased feature films, business plans, contracts, and the personal emails of top
executives.

Question: Can a signature-based IPS system prevent a zero-day attack?

Answer: No. A signature-based system can match only known attack patterns, so a previously unknown (zero-day) exploit will not trigger any existing signature.
Network Security Controls: Layered Security and Honeypot
IDS and IPS

• True positive: Alarm raised and an attack occurred
• False positive: Alarm raised but no attack occurred
• True negative: No alarm and no attack occurred
• False negative: No alarm but an attack occurred

When dealing with encrypted traffic, an IPS and IDS face considerable challenges because
signature-based analysis is effectively eliminated as the system cannot perform inspection.
Honeypot

It is a computer system that is set up to act as a decoy to lure cyber attackers and to detect, deflect, or study
attempts to gain unauthorized access to information systems.

Honeypot is isolated from the production system. It is designed in such a way that the attacker thinks that it is
part of the original production system and contains valuable data. However, the data on a honeypot is bogus
data, and it is set up on an isolated network so that any compromise of it cannot impact any other systems within
the environment.
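
To make the decoy-and-log idea concrete, here is a deliberately minimal sketch; the port number is arbitrary, and a real deployment would use a dedicated honeypot framework on an isolated network segment:

    import datetime
    import socket

    # Listen on a decoy port where no legitimate service runs; any connection
    # attempt is suspicious by definition, so log it and close.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 2222))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            print(f"{datetime.datetime.now().isoformat()} probe from {addr[0]}:{addr[1]}")
            conn.close()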
Honeypot

Honeypot advantages:

• Divert Attackers' Effort: An intruder will spend energy on a system that causes no harm to production servers.
• Educate: A properly designed and configured honeypot provides data on the methods used to attack systems.
• Detect Insider Attacks: Since most IDS systems have difficulty detecting insider attacks, a honeypot can provide valuable information on the patterns used by insiders.
• Create Confusion for Attackers: The bogus data a honeypot provides to attackers can confuse and confound them.
• Deter Attacks: Fewer intruders will invade a network that they know is designed to monitor and capture their activity in detail.
Honeypot

Honeynet is an extension of the honeypot. It groups multiple honeypot systems to form a network that is used
in the same manner as the honeypot, but with more scalability and functionality.

Enticement means that you have made it easier for the bees (the attackers) to conduct their normal activity.
Enticement is not necessarily illegal, but does raise ethical arguments and may not be admissible in court.

Entrapment is where the intruder is induced or tricked into committing a crime that the individual may have had
no intention of committing.
Entrapment is illegal and cannot be used when charging an individual with hacking or unauthorized activity.
Security Information and Event Management (SIEM)

SIEM software products and services combine security information management (SIM) and security event
management (SEM).
SIEM gathers logs from various devices (servers, firewalls, routers, etc.) and attempts to correlate the log data
and provide real-time analysis of security alerts.

Features of SIEM are:


• Data aggregation
• Correlation
• Alerting
• Dashboarding
• Compliance
• Log data retention
• Forensic analysis
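
A minimal sketch of the aggregation and correlation features; the event format and the three-failure rule are hypothetical:

    from collections import Counter

    # Events aggregated from servers, firewalls, routers, and other devices
    events = [
        {"source": "fw-01",  "type": "login_failed", "ip": "203.0.113.9"},
        {"source": "srv-02", "type": "login_failed", "ip": "203.0.113.9"},
        {"source": "srv-02", "type": "login_ok",     "ip": "198.51.100.7"},
        {"source": "rtr-01", "type": "login_failed", "ip": "203.0.113.9"},
    ]

    # Correlation rule: alert when one source IP accumulates repeated
    # login failures across multiple devices
    failures = Counter(e["ip"] for e in events if e["type"] == "login_failed")
    for ip, count in failures.items():
        if count >= 3:
            print(f"ALERT: {count} failed logins from {ip} across multiple sources")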
Log Management
Log Management

Service models

IaaS
The cloud customer is responsible for log collection and maintenance as far as virtual machine and application logs are concerned. The SLA between the cloud customer and the cloud provider will have
to clearly spell out what logs are available within an IaaS environment, who has access and is
responsible to collect them, who will be responsible for maintaining and archiving them, as well as
what the retention policy is.
Log Management

Service models

PaaS
The cloud provider needs to collect the operating system logs and possibly logs from the application;
depending on how PaaS is implemented and what application frameworks are used. Therefore, the
SLA will have to clearly define how those logs are collected and given to the cloud customer and what
degree of support is available with such efforts.
Log Management

Service models

SaaS
All logs will have to be provided by the cloud provider per SLA requirements. With many SaaS implementations, the logs are to some degree exposed by the application itself to administrators or
account managers of the application. These logs might be limited to a set of user functions or just
high-level events. Anything more detailed should be a part of the SLA.
Orchestration
Orchestration

Orchestration pertains to the extensive use of automation for tasks such as provisioning, scaling,
allocating resources, and even customer billing and reporting.

Cloud service orchestration consists of these elements:


• Composing
• Stitching
• Connecting and Automating
Cloud Service Orchestration

Rationale behind using orchestration:

• Cloud services are intended to scale up arbitrarily and dynamically, without requiring direct human intervention to do so.
• Cloud service delivery includes fulfillment, assurance, and billing.
• Cloud service delivery entails workflows in various technical and business domains.
Availability of Guest OS
High Availability

High availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime, for a higher-than-normal period.
Features of HA:
• Its goal is to minimize downtime, not prevent it
• HA systems are bound by SLAs
• The failure of one or more components does not affect the performance of the system
• It eliminates single points of failure
• It detects failures as they occur
Fault Tolerance

Fault tolerant systems offer higher levels of resiliency and recovery. They use a high degree
of hardware redundancy and specialized software to provide near-instantaneous recovery
from any single hardware or software unit failure.
Features of fault tolerance:
• It is expensive
• It addresses only hardware failures, not software failures
• The performance of the system could degrade when one or more components fail
• RAID 1, for example, provides fault tolerance from disk failures by mirroring data across multiple disks
• Running a MySQL replica that can be promoted to master if the master fails is another example
High Availability vs. Fault Tolerance

High Availability                        Fault Tolerance

Loosely coupled servers                  Tightly coupled servers
No performance downgrade                 Possible performance downgrade
Systems are independent                  Systems are mirrored and dependent
Possible downtime                        No downtime


Operations Management (Part 1)
Management

Operations management covers the following functions: Information Security Management, Continuity Management, Continual Service Improvement Management, Change Management, and Incident Management.

Change management is an approach that allows organizations to manage and control the impact of change through a structured process. Change management creates and implements a series of processes that allow changes to the scope of a project to be formally introduced and approved.

Continuity management, or business continuity management, is focused on planning the successful restoration of systems or services after an unexpected outage, incident, or disaster.

Information security management is focused on the organization's data, IT services, and IT systems, and the assurance of their confidentiality, integrity, and availability. Information security management encompasses:
• Designing security controls
• Testing security controls
• Managing security incidents
• Reviewing and maintaining security controls and processes

Continual service improvement management is based on the ISO 20000 standards for continual improvement. The process applies quality management principles and the collection of a wide range of metrics and data. This is all combined into a formal analysis process, with the goal of finding areas within systems and operations to continually improve performance, user satisfaction, and cost-effectiveness.

An incident is defined as any event that can lead to the disruption of an organization's services or operations that impacts either internal or public users. Incident management is focused on limiting the impact of these events on an organization and its services, and returning their state to full operational status as quickly as possible.
Real-World Scenario

A manager of a German telecom company discovered a dangerous fire encroaching on a crucial company facility. The facility was a central switching center, which housed important telecom wiring and equipment that were vital to providing service to millions.

The company uses an incident management system from Simba, which alerted the staff to the
fire, evaluated the impact of the incident, automatically activated incident management response
teams, and sent emergency alerts to Simba’s 1,600 Germany-based employees.
Real-World Scenario

Given the proximity of the fire to the company facility, these procedures included an orderly
emergency evacuation of the premises to a pre-arranged recovery site.

Despite best efforts, the fire did indeed reach the building, ultimately knocking out the entire
switching center. However, with an effective incident management system and the redundant
equipment put in place, combined with a redundant network design, the company was able to
fully restore service within six hours.

Question: What is the most important consideration during a disaster?


Answer: Human safety
Management

Further operations management functions: Configuration Management, Release and Deployment Management, Service Level Management, Problem Management, Availability Management, and Capacity Management.

The objective of problem management is to minimize the impact of problems on the organization by identifying the root cause of the problem at hand.

Release and deployment management involves the planning, coordination, execution, and validation of changes and rollouts to the production environment.

Configuration management tracks and maintains detailed information about any IT components within the organization. It encompasses all physical and virtual systems and includes hosts, servers, appliances, and devices. It also includes all the details about each of these components, such as settings, installed software, and version and patch levels.

Service level management is focused on the negotiation, implementation, and oversight of SLAs.

Availability management is focused on making sure system resources, processes, personnel, and toolsets are properly allocated and secured to meet SLA requirements for performance.

Capacity management is focused on the required system resources needed to deliver performance at an acceptable level to meet SLA requirements, and doing so in a cost-effective and efficient manner.
Risk Management Process: Framing Risk and Risk Assessment
Framing Risk

Framing risk is the first step in the risk management process designed to produce a risk-management
strategy intended to address how organizations assess, respond to, and monitor risk.
Risk Assessment

Risk assessment is the process used to identify, estimate, and prioritize information security risks.

According to NIST SP 800-39, the purpose of engaging in risk assessment is to identify:

• Threats to organizations (i.e., operations, assets, or individuals) or threats directed through organizations against other organizations
• Vulnerabilities internal and external to organizations
• The harm (i.e., adverse impact) that may occur given the potential of threats exploiting vulnerabilities
• The likelihood that harm will occur
Risk Assessment

• Single Loss Expectancy (SLE): Represents an organization’s loss from a single threat.

Single Loss Expectancy (SLE)= Asset value ($) x EF (%)

• Exposure Factor (EF): Percentage of loss that a realized threat could have on a certain asset
• Annualized Rate of Occurrence (ARO): Value for the estimated frequency of a specific threat occurring
within one year
• Annualized Loss Expectancy (ALE): Annual expected financial loss to an organization from a threat

Annualized Loss Expectancy (ALE)= (SLE) x (ARO)


Quantitative Risk Analysis: Problem

Problem:
A hacker hacks a server whose data is encrypted. Consider the following conditions:
• Asset value = $6,000
• EF = 50%
• ARO = 10% chance of hacking in one year
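
Applying the formulas from the previous screen gives the solution; a small Python sketch with the stated values:

    # Direct application of the SLE and ALE formulas to the problem above
    asset_value = 6000        # asset value in dollars
    exposure_factor = 0.50    # EF: 50% of asset value lost per incident
    aro = 0.10                # ARO: 10% chance of hacking in one year

    sle = asset_value * exposure_factor   # SLE = $6,000 x 50% = $3,000
    ale = sle * aro                       # ALE = $3,000 x 10% = $300
    print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")

By the usual cost-benefit rule, a countermeasure for this risk would be justified only if its annualized cost stayed below the $300 ALE.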
Scenario

ABC Corp. has been experiencing increased hacking activity as indicated by firewall and IPS logs
gathered from its managed service provider. The logs also indicate that the company has
experienced at least one successful breach in the past 30 days. Upon further analysis of the breach,
the security team has reported to senior management that the dollar value impact of the breach
appears to be $10,000.

Senior management has asked the security team to come up with a recommendation to fix the issues
that led to the breach. The recommendation from the team is that the countermeasures required to
address the root cause of the breach will cost $30,000.
Scenario

Question: Is a $30,000 expense to implement the countermeasures justified?

Answer: Taking the loss encountered of $10,000 per month, the annual loss expectancy is $120,000.
Thus, the mitigation would pay for itself after three months ($30,000) and would provide a $10,000
loss prevention for each month after. Therefore, this is a sound investment.
Risk Response

Risk treatment can be done in the following four ways:


• Transfer: Transfer the risk
• Avoid: Terminate the risk activity
• Reduce: Take measures
• Accept: Accept the loss

Residual Risk:
It is the risk remaining after a risk treatment.
If the residual risk is unacceptable, the risk treatment process should be iterated.
Risk Monitoring

Risk monitoring is the process of keeping track of identified risks.


Purpose of risk monitoring is:
• Determine the ongoing effectiveness of risk responses
• Identify risk-impacting changes to organizational information systems and the environments in which the
systems operate
• Monitor changing regulatory requirements and whether the current risk evaluations and mitigations still
fulfill their expectations
Collection and Preservation of Digital Evidence
Collection and Preservation of Digital Evidence

Digital forensics is the application of science to the identification, collection, examination, and analysis of data
while preserving the integrity of the information and maintaining a strict chain of custody for the data.

Chain of custody should clearly depict how the evidence was collected, analyzed, and preserved to be
presented as admissible evidence in court.
In a cloud, it is not obvious where a VM is physically located.
The investigator’s location and a VM’s physical location can be in different time zones.
Hence, maintaining a proper chain of custody is much more challenging in the cloud.
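
One common integrity safeguard in this process is fingerprinting the evidence at collection time so that later tampering is detectable; a minimal sketch in Python, where the image path and custodian name are hypothetical:

    import datetime
    import hashlib

    def evidence_fingerprint(path: str) -> str:
        # Hash the file in chunks so large disk images do not exhaust memory
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    digest = evidence_fingerprint("vm_disk_image.raw")
    # Record the hash, timestamp, and custodian in the chain-of-custody log
    print(f"{datetime.datetime.utcnow().isoformat()}Z sha256={digest} custodian=jdoe")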
Cloud Forensics Challenges

Control over data: Cloud users have the highest level of control in IaaS and the least level of control in SaaS
Multitenancy: In the cloud, the forensics investigator may need to preserve the privacy of other tenants
Data volatility: Data residing in a VM is volatile because once the VM is powered off, all the data is lost
Evidence acquisition: Investigators are completely dependent on CSPs for acquiring cloud evidence
Accessing Information in Service Models

INFORMATION      SaaS   PaaS   IaaS   LOCAL

Networking                            √
Storage                               √
Servers                               √
Virtualization                        √
OS                             √      √
Middleware                     √      √
Runtime                        √      √
Data                    √      √      √
Application             √      √      √
Access Control   √      √      √      √
Process Flow of Digital Forensics

Forensic evidence can be collected from the host or guest OS

Identification Collection Examination Analysis Reporting


Challenges in Collecting Evidence

Challenges

• The seizure of servers containing files from many users creates privacy issues among the multitenants
homed within the servers.
• The trustworthiness of evidence is based on the CSP, with no ability to validate or guarantee on behalf of the
CCSP.
• Investigators are dependent on CSPs to acquire evidence.
• Technicians collecting data may not be qualified for forensic acquisition.
• Unknown location of the physical data can hinder investigations.
Network Forensics

Network forensics is defined as the capture, storage, and analysis of network events.
The idea is to:
• Capture: Capture every packet of network traffic and make it available in a single searchable database so
that the traffic can be examined and analyzed in detail
• Trace: Entire contents of emails, IM conversations, web-surfing activities, and file transfers can be recovered
and reconstructed to reveal the original transaction

Use cases:
• Uncover proof of an attack
• Troubleshoot performance issues
• Monitor activity for compliance with policies
• Source data leaks
• Create audit trails for business transactions
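
An illustrative capture sketch, assuming the third-party scapy library (pip install scapy) and privileges sufficient to sniff traffic:

    from scapy.all import sniff

    # Capture ten packets from the default interface and print one-line
    # summaries; a real deployment would write full packets to storage
    # for later examination and reconstruction.
    packets = sniff(count=10)
    for pkt in packets:
        print(pkt.summary())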
Communication with Relevant Parties
Communication with Relevant Parties

Communication between the provider, its customers and suppliers is critical for any environment

Vendors

Communication with vendors will be driven almost exclusively through contract and SLA requirements.

Customers (Internal and External)

Communications about availability, changing policies, and system upgrades and changes.

Regulators

Many regulatory requirements are based on geography and jurisdiction.


Early communication is essential with regulators when developing a cloud environment.
SLA

Service Level Agreement (SLA) is a form of communication that clarifies responsibilities.


Some metrics that SLAs may specify include:
• Percentage of the time services are available
• Number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance is periodically compared
• Schedule for notification in advance of network changes that may affect users
• Help/service desk response time for various classes of problems
• Remote access availability
• Usage statistics that are provided
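
A worked example for the first metric: converting an SLA availability percentage into allowable downtime per year (365-day year assumed):

    # "Three nines" availability and its annual downtime budget
    availability = 0.999
    hours_per_year = 365 * 24                        # 8,760 hours
    allowed_downtime = hours_per_year * (1 - availability)
    print(f"{allowed_downtime:.2f} hours of downtime per year")   # ~8.76 hours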
Real-World Scenario: DTS Data Breach

In early 2012, a large data breach took place on the servers of Utah’s Department of Technology Services (DTS). A
malicious hacker group from Eastern Europe succeeded in accessing the servers of DTS, compromising 780,000
Medicaid recipients and the Social Security Numbers (SSNs) of 280,096 individual clients.

The Utah DTS had proper access controls, policies, and procedures in place to secure sensitive data. However, in
this particular case, a configuration error occurred while entering the password into the system. The malicious
hacker accessed the password of the system administrator and gained the personal information of thousands of
users.
Real-World Scenario: DTS Data Breach

The biggest lesson from this incident is that even if the data is encrypted, a flaw in the authentication
system can render a system vulnerable.
The state spent about $9 million in total on remediation, including security audits, upgrades, and credit
monitoring for victims, in addition to $770/20 hours in resolution for each of the 122,000 victims.
Total fraud could amount to $406 million (Javelin Strategy and Research).

Question: Under which US regulatory law does DTS fall?


Answer: HIPAA
Security Operations Center (SOC)

A security operations center (SOC) is a team of expert professionals dedicated to preventing cybersecurity threats.

The goal of a SOC is to monitor, detect, investigate, and respond to all types of cyber threats around the clock.

A SOC is an essential part of a protection plan and data protection system that reduces the level of exposure of information systems to both external and internal risks.
Security Operations Center (SOC)

Security team members use a combination of technologies and processes, including:

• Security Information and Event Management (SIEM) systems
• Firewalls
• Intrusion Detection and Prevention Systems (IDPS)
• Penetration testing tools
• Unified Threat Management (UTM)
• Governance, Risk, and Compliance (GRC) systems
• Application and database scanners
Security Operations Center (SOC)

SOC types:

• Virtual SOC: No dedicated facility, geographically distributed team members, often delegated to a managed service provider
• Combined SOC/NOC: One team and facility dedicated to shared network and security monitoring
• Dedicated SOC: An in-house, dedicated facility
• Global or Command SOC: Monitors a wide area that encompasses many other regional SOCs
Learning Objectives

You are now able to:


Identify the requirements to build and implement the physical cloud infrastructure
Define the process for running and managing the physical infrastructure based on access, security controls, and availability configurations, analysis, and maintenance
Identify the requirements to build and implement the logical cloud infrastructure
Define the process for managing the logical infrastructure based on access, monitoring, security controls, availability configurations, analysis, and maintenance
Identify the necessary regulations and controls to ensure compliance for the operation and management of the cloud infrastructure
Describe the process of conducting a risk assessment of the physical and logical infrastructure
Describe the process for the collection, acquisition, and preservation of digital evidence
Certified Cloud Security Professional (CCSP®)

Certified Cloud Security Professional is a registered trademark of (ISC) 2


Legal, Risk, and Compliance

Certified Cloud Security Professional is a registered trademark of (ISC) 2


Learning Objectives
By the end of this domain, you will be able to:

Identify legal requirements and risks associated with the cloud environment
Describe the potential personal and data privacy issues
Define the process, methods, and adaptations necessary for an audit
Describe the different types of cloud-based audit reports
Identify the impact of diverse geographical locations and legal jurisdictions
Understand the implications of cloud-to-enterprise risk management
Explain the importance of cloud contract design and management for outsourcing a cloud environment
Identify appropriate supply-chain management processes
Legislative Concepts
Case Study
Real World Scenario
In 2009, Los Angeles attempted a total migration of its workforce from a traditional on-premises
email system to a cloud service. The contract was given to Google Apps for 30,000 users.

The city benefitted from Google’s world-class security, reliability, and availability.

But the Los Angeles Police Department (LAPD) reported that Google Apps could not meet the FBI's strict security and privacy requirements for connecting to the bureau's national criminal history database.

LAPD's concern was that Google could not, or did not want to, subject overseas staff to the FBI's background checks.

So, Los Angeles pulled the plug on the LAPD portion of its Gmail deployment and demanded that
Google pay the cost of maintaining the GroupWise email servers at the LAPD for the duration of
its contract.
Legislative Concepts

Types of law covered: International Law, Federal Laws, State Law, Common Law, Criminal Law, Tort Law, Administrative Law, Privacy Law, and the Restatement (Second) of Conflict of Laws.

International Law

These are the rules that govern relations between states or countries.

Components:
• International conventions: Establish rules expressly recognized by contesting states
• International customs: Accept general practice as law
• General principles of law: Recognized by civilized nations
• Judicial decisions and the teachings of qualified publicists: Determine rules of law
Federal Laws

• Federal laws govern the entire country. For example, laws against kidnapping and bank robbery.
• If a person robs a bank, they commit a federal crime and are therefore subject to federal prosecution and punishment.
• Often, states will handle these types of cases, as they have prescribed laws for them.
• Generally, the issues of jurisdiction and subsequent prosecution are worked out in advance between law enforcement and court jurisdictional bodies.
State Law

• State law refers to the law of each state in the U.S.
• Examples of state law: speed limits, state tax laws, criminal codes, etc.
• Federal laws are usually more comprehensive and may often supersede state laws.
Common Law

The legal system in countries such as the United States, Canada, and the United Kingdom emphasizes judicial precedent as a determinant of law.

It consists of three branches of law:
• Criminal law
• Civil law or tort law
• Administrative or regulatory law
Criminal Law

• Addresses behavior that is harmful to society
• Includes punishments, such as monetary fines, imprisonment, and death
• It is the prosecution's responsibility to prove guilt beyond a reasonable doubt
Tort Law

• A body of rights, obligations, and remedies that sets out reliefs for persons suffering harm due to the wrongful acts of others
• Tort actions are not dependent on an agreement between the parties to a lawsuit

Tort law serves four objectives:
• Compensates victims for injuries suffered by the culpable action or inaction of others
• Shifts the cost of injuries to the person or persons responsible for inflicting them
• Discourages injurious, careless, and risky behavior in the future
• Vindicates legal rights and interests that are compromised, diminished, or emasculated
Administrative Law

Laws and legal principles that address a number of areas, including international trade, manufacturing, the environment, and immigration.
Privacy Law

• Privacy is the right of an individual to determine when, how, and to what extent one releases personal information.
• Privacy law includes language indicating that personal information must be destroyed when its retention is no longer required.
Restatement (Second) of Conflict of Laws

• The restatement of conflict of laws is the basis for deciding which laws are most appropriate when there are conflicting laws in different states.
• The conflicting legal rules may come from US federal law, the laws of the US states, or the laws of other countries.
Intellectual Property Laws
Intellectual Property Laws

Intellectual property laws are designed to protect tangible and intangible items and property. The main goal of these laws is to protect property from being copied or used without due compensation to the inventor or creator.
Intellectual Property Laws

Patent Trademark Copyright Fair Use Trade Secrets


Intellectual Property Laws

Patent Trademark Copyright Fair Use Trade Secrets

A patent provides a monopoly to the patent holder on the right to use, make, or sell an invention for a period of time in exchange for making the invention public.

• The invention must be novel, useful, and not obvious.
• The patent term is 20 years from the initial filing date.
• The invention is publicly available for production upon expiration of the patent.
• The patent is the strongest form of intellectual property protection.

Patent troll: a person or company that obtains patents in order to aggressively and opportunistically go after another entity that tries to create something based upon them.
Intellectual Property Laws

Patent Trademark Copyright Fair Use Trade Secrets

Registered trademarks are associated with marketing. The purpose is to create a brand identity that distinguishes the source of products or services.

The trademark protects a word, name, symbol, sound, shape, or color.

• An R with a circle around it (®) is used for registered trademarks
• The superscript TM symbol indicates an unregistered mark
• The superscript SM symbol is used to brand a service offering

Note: Companies cannot trademark a number or common words.
Intellectual Property Laws

Patent Trademark Copyright Fair Use Trade Secrets

• Protects the right of the creator of an original work to control the public distribution, reproduction, display,
and adaptation of that original work

• Covers many categories of work: pictorial, graphics, musical, dramatic, literary, pantomime, motion picture,
sculptural, sound recording, and architectural

• Allows the creator to hold the copyright when the work is created

• Ensures that the copyright lasts for either 70 years after the author’s death, or 120 years after the first
publication of a work for hire created under contract
Intellectual Property Laws

Patent Trademark Copyright Fair Use Trade Secrets

Academic Fair Use: Limited copies or presentations of copyrighted works for educational purposes

Critique: The work may be reviewed or discussed for assessing its merit and for critical reviews

News Reporting: Some intellectual property protection is waived for news reporting

Scholarly Research: Similar to academic fair use, but is applicable to researchers


Intellectual Property Laws

Patent Trademark Copyright Fair Use Trade Secrets

Satire: A mocking sendup of the work created using a significant portion of the original work

Library Preservation: Libraries and archives make limited copies of original work to preserve it

Personal Backup: Single backup copy of legally purchased work for use if the original fails

Versions for People with Physical Disabilities: Specialized copies of licensed works for someone with disability
Intellectual Property Laws

Patent Trademark Copyright Fair Use Trade Secrets

• A trade secret is something that is proprietary to a company and important for its survival and profitability.

• Examples: Formulae used for a soft drink, such as Coke or Pepsi, a new form of mathematics, the source
code of a program, a method of making the perfect jelly bean, or ingredients for a special secret sauce.

• The organization must exercise due care and due diligence in the protection of their trade secrets.

• The most common protection methods are non-compete and non-disclosure agreements (NDA).

• A trade secret has no expiration date unless the information is no longer secret or no longer provides
economic benefit to the company.
Case Study

Spice Mobiles Ltd. and Samsung India Electronics Pvt. Ltd. vs. Somasundaram Ramkumar
Mr. Ramkumar holds a patent for the plurality of SIM cards in a single mobile, plurality of
Bluetooth devices in headphone and earphone jacks.

Mr. Ramkumar and his company filed a petition in the Madras high court claiming that the
companies importing dual SIM phones in India were using his patent.

In 2009, the Madras High Court restrained mobile phone manufacturers and retailers from manufacturing and selling multiple SIM-holding mobile phones.

Aggrieved Spice Mobiles Limited and Samsung India Electronics Pvt. Ltd. filed two applications in
the IPAB for revocation of the patent granted to Mr. Ramkumar.

Applicants challenged the validity of the patent on the ground of lack of novelty.
Business Scenario
Real World Scenario
Provisions in Section 146 in the Patents Act 1970, require a patentee to furnish proof to ensure
that the patented invention has been commercially worked in India. If the patent is not worked or
used in the territory of India, compulsory licensing may be invoked. The act also requires the
mandatory filing for a statement of working patent at the end of each financial year.

Patent holders who fail to file such a statement may be liable for a fine or imprisonment. This is a
huge deterrent for individuals or organizations who acquire patents with no intention to
manufacture or market that patent invention. The sole purpose is to make some quick money
through cease and desist orders and patent infringement litigations.
Business Scenario
Real World Scenario

Question: Those who abuse patent rights for the sake of licensing revenues and to
engage manufacturers in infringement suits to mostly seek damages are known as:

Answer: Patent trolls


Acts and Agreements
Contracts

A contract is an agreement between parties to engage in some specified activity, usually for mutual benefit. A breach occurs when a party fails to perform according to the activity specified in the contract.

Customer <-- SLA --> Service Provider <-- OLA --> Internal Organization


US Laws

• Gramm-Leach-Bliley Act (GLBA)
• Sarbanes-Oxley Act (SOX)
• Health Insurance Portability and Accountability Act (HIPAA)
• Health Information Technology for Economic and Clinical Health (HITECH) Act
• The Digital Millennium Copyright Act (DMCA)
Gramm-Leach-Bliley Act (GLBA):

1. Allows banks to merge with and own insurance companies
2. Warrants customer account information security and privacy
3. Allows customers to opt out of information-sharing arrangements
4. Holds the board of directors responsible for the security issues within a financial institution
5. Emphasizes risk management implementation, employee training on information security, and testing of security measures
6. States that financial institutions must have a written security policy in place
Sarbanes-Oxley Act (SOX):

1. Increases financial transparency of publicly traded corporations
2. Includes provisions for securing data and names the traits of confidentiality, integrity, and availability
US Laws: Health Insurance Portability and Accountability Act (HIPAA)

1. Protects patient records and data known as electronic Protected Health Information (ePHI)
2. Mandates steep federal penalties for noncompliance
US Laws: Health Information Technology for Economic and Clinical Health (HITECH) Act

1. Promotes adoption and meaningful use of Electronic Health Records (EHR) and supporting technology
2. Stipulates that technologies and standards do not compromise HIPAA privacy and security laws
3. Requires practices to notify patients of any unsecured data breaches related to Protected Health Information (PHI)
4. Includes mandatory penalties for “willful neglect”
US Laws: Digital Millennium Copyright Act (DMCA)

1. Provides provisions to protect owned data in an Internet-enabled world
2. Makes cracking of access controls on copyrighted media a crime
3. Enables copyright holders to ensure the removal of content that may belong to them from any site on the Internet
Case Study
Real World Scenario

Timeline of the information breach at Advocate Health Care Network:


• July 2013: Personal health information was stolen from an administrative office and from a
business associate's network
• November 2013: An unencrypted laptop containing the personal information of more than 2,200
individuals was stolen
• 2016: Advocate paid a $5.5 million fine for endangering the health records of more than 4 million
patients

HHS' Office for Civil Rights investigated the breaches and found that the company had failed to
assess data risks and to limit access to its information systems.
NERC
NERC Cyber Security Standard

The North American Electric Reliability Corporation (NERC) is a not-for-profit corporation designed to improve the
reliability and security of the bulk electric system in the United States, Canada, and Northern Mexico. NERC
develops and enforces mandatory standards that define requirements for reliable planning and operation of the
bulk electric system.

NERC Critical Infrastructure Protection (NERC-CIP) is a set of standards that specifies the minimum security
requirements for bulk electric systems. NERC-CIP consists of 9 standards and 45 requirements, covering areas
that range from perimeter protection, cyber asset control, and end-to-end accountability and reliability to
training, security management, and disaster recovery.
NERC Cyber Security Standard

In the United States, NERC and its regional entities routinely monitor compliance. A number of methods,
including regular and scheduled compliance audits, random spot checks, and any additional specific
investigations as warranted, are used to identify where the standard may have been violated.

Non-compliance with NERC-CIP can result in fines of up to $1 million per day, per incident, until a
state of compliance is ultimately achieved.
Privacy Shield and Generally Accepted Privacy Principles (GAPP)
Privacy Shield

• The EU's Privacy Regulation (GDPR) supersedes the Data Protection Directive; in parallel, the
Privacy Shield framework replaced the earlier Safe Harbor program for EU-US data transfers.

• Privacy Shield is more stringent than Safe Harbor and includes new provisions such as annual
joint reviews involving intelligence and law enforcement officials of the United States and the EU.
Generally Accepted Privacy Principles (GAPP)

GAPP is an AICPA standard describing 74 detailed privacy criteria, grouped under 10 main principles.

The 10 main privacy principles of GAPP are:

1. Management
2. Notice
3. Choice and consent
4. Collection
5. Use, retention, and disposal
6. Access
7. Third party disclosure
8. Security for privacy
9. Quality
10. Monitoring and enforcement
Jurisdictional Difference in Data Privacy
Jurisdictional Differences in Data Privacy

Determining and understanding jurisdictional differences is very important in data privacy because of the
widespread flow of data across borders. Jurisdictional law is complex in the absence of a single global
agreement on data protection.

The question of determining jurisdiction has been a source of debate and law reform.

• The US Child Online Privacy Protection Act (COPPA) extends to foreign service providers that direct their
activities to US children or knowingly collect information from US children.
• In Japan, the Act on the Protection of Personal Information (APPI) (2017) states that if a data
controller outside Japan collects, or has collected, personal information relating to Japanese
citizens, that foreign data controller is required to comply with key sections of the Japanese act.
• The EU General Data Protection Regulation (GDPR) (2018) mandates that companies collecting
data on citizens in the European Union (EU) comply with strict new rules around protecting
customer data.
Terminologies and e-Discovery
Laws, Regulations, and Standards

Terminologies:

Laws: The legal rules created by government entities

Regulations: The rules created by departments of the government, or by external entities
empowered by the government

Standards: The frameworks and guidelines created by nongovernmental organizations for
businesses to follow

E-Discovery: It refers to the process of identifying and obtaining electronic evidence for either prosecutorial
or litigation purposes.
Forensic Requirements and PII
Forensic Requirements

The ISO standards for digital forensics are:

ISO/IEC 27037:2012 Electronic evidence collection, identification, and preservation

ISO/IEC 27041:2015 Incident investigations

ISO/IEC 27042:2015 Digital evidence analysis

ISO/IEC 27043:2015 Incident investigation principles and processes

ISO/IEC 27050-1:2016 Overview and principles for e-Discovery

Note: ISO/IEC 27050-1:2016 is the de facto worldwide standard for e-Discovery.


Contractual and Regulated PII

Contractual PII
• PII protected as part of a contractual obligation
• Compliance occurs without government regulation or enforcement
• If privacy is not protected, it is a contractual matter to be settled between the parties

Regulated PII
• PII governed by statutory law or administrative regulation
• The location of PII storage, processing, or transmission determines the applicable rules and
regulations, based on jurisdiction
• Violation can lead to fines or criminal charges in some jurisdictions
Gap Analysis, SOC Reports, and Chain of Custody
Gap Analysis

• Gap analysis: Benchmarks and identifies relevant gaps against specified frameworks or
standards

• The objective is to detect and report gaps or risks that affect the confidentiality, integrity, and
availability (CIA) of information assets.

Note: Audit findings from an unbiased, external, and independent person or agency should be
considered valid and trustworthy.
SOC Reports

SOC 1
• What: Controls relevant to financial reporting
• Why: Audits of financial statements
• Audience: Management, regulators, and others

SOC 2
• What: Controls regarding security, availability, processing integrity, confidentiality, and privacy
• Why: Customer comfort, GRC programs, oversight, and due diligence
• Audience: Management, regulators, and others

SOC 3
• What: A seal and report on controls
• Why: Marketing purposes; details are not required
• Audience: Any user with a need for confidence in the service organization's controls
Chain of Custody

Chain of custody refers to the chronological documentation, or paper trail, showing the custody,
control, transfer, analysis, and disposition of physical or electronic evidence.

Being able to demonstrate a strong chain of custody is very important when making an argument
in court.
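To show how the integrity side of a chain of custody can be supported technically, here is a minimal Python sketch. It is an illustration only, not a forensic tool: it hashes an evidence file and appends each hand-off to a custody log, so that any later alteration of the evidence is detectable. The file path, party names, and log fields are hypothetical, and the sketch assumes the evidence file exists at the given path.

import datetime
import hashlib
import json

def sha256_of_file(path):
    # Hash the file in chunks so large evidence images fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

custody_log = []

def record_transfer(evidence_path, from_party, to_party, purpose):
    # Each entry records who held the evidence, when, why, and the
    # evidence hash at hand-off, forming the chronological paper trail.
    custody_log.append({
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "evidence_sha256": sha256_of_file(evidence_path),
        "from": from_party,
        "to": to_party,
        "purpose": purpose,
    })

record_transfer("disk.img", "first responder", "forensic analyst", "analysis")
print(json.dumps(custody_log, indent=2))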
Vendor Management
CSA Security, Trust, and Assurance Registry (STAR)

The Cloud Security Alliance (CSA) Security, Trust, and Assurance Registry (STAR) program was created as a first
step toward transparency and assurance for cloud-based environments.

Levels of the Open Certification Framework:

1. Self-Assessment (CSA CCSM)
2. Third-Party Assessment-Based Certification (ISO/IEC 27001:2013 or AICPA SOC 2)
3. Continuous Monitoring-Based Certification

Reference: https://cloudsecurityalliance.org/star
Cloud Computing Policies and Risk Attitude
Cloud Computing Policies

Policy Examples

• Password policies: If the organization’s policy requires an eight-digit password, is this true for the CSP?
• Remote access: If two-factor authentication is used to access network resources, is this true for the CSP?
• Encryption: If minimum encryption strength and relevant algorithms are required, is it met by the CSP or a
potential solution?
• Third-party access: If a third party accesses cloud-based services or resources, is the access logged and traced?
Cloud Computing Policies

Policy Examples

• Segregation of duties: Are controls required for the segregation of key roles and functions? Can these be
enforced and maintained on the cloud?
• Incident management: Are the required communication actions and steps fulfilled when cloud-based
services are in scope?
• Data backup: Is data backup in line with backup requirements listed in relevant policies?
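One way to operationalize questions like these is a simple gap check that compares the organization's policy baseline with what a CSP reports supporting. The Python sketch below is purely illustrative: every requirement name and value is a made-up assumption, not any real CSP setting or API.

# Organization's baseline policy requirements (hypothetical values).
org_policy = {
    "min_password_length": 8,
    "two_factor_auth": True,
    "min_encryption_bits": 256,
    "third_party_access_logged": True,
}

# What the candidate CSP reports supporting (hypothetical values).
csp_capabilities = {
    "min_password_length": 6,
    "two_factor_auth": True,
    "min_encryption_bits": 256,
    "third_party_access_logged": False,
}

def find_policy_gaps(policy, capabilities):
    # Return the requirements the CSP fails to meet.
    gaps = {}
    for requirement, required in policy.items():
        offered = capabilities.get(requirement)
        if isinstance(required, bool):
            met = offered is True or required is False
        else:
            met = offered is not None and offered >= required
        if not met:
            gaps[requirement] = (required, offered)
    return gaps

for name, (required, offered) in find_policy_gaps(org_policy, csp_capabilities).items():
    print(f"GAP: {name}: policy requires {required}, CSP offers {offered}")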
Risk Attitude

Risk Appetite The degree of uncertainty an entity is willing to take in anticipation of a reward

Risk Tolerance The degree, amount, or volume of risk that an organization or individual will
withstand

Risk Threshold The level of impact at which a stakeholder may have a specific interest
SLA
SLA Topics

An SLA is similar to a contract signed between a customer and a CSP, and it forms a most crucial and
fundamental component of how security and operations will be undertaken.

Topics and content covered by an SLA include:

• Availability
• Performance
• Security and privacy of the data
• Logging and reporting
• DR expectations
• Location of the data
• Data format and structure
• Portability of the data
• Identification and problem resolution
• Change-management process
• Dispute-mediation process
• Exit strategy
SLA Components

The components of an SLA are:

• Uptime guarantees
• SLA penalties
• SLA penalty exclusions
• Suspension of service
• Data protection requirements
• Security recommendations
• Provider liability
• Disaster recovery
Key SLA Elements

• Risk profile: What are the risks and their potential effects?
• Responsibilities: Who will do what?
• Risk mitigation: Which mitigation techniques and controls reduce risks?
• Assessment of risk environment: What types of risks does the organization face?
• Risk appetite: What is the acceptable level of risk?
• Regulatory requirements: Will these be met under the SLA?
• Risk frameworks: Which frameworks are used to assess effectiveness?
Ensuring Quality of Service

The key components for metrics and monitoring requirements are:

• Availability (%): Measures the uptime of the relevant services over a period
• Outage duration (h and min): Captures and measures the loss-of-service time for each instance
of an outage
• Mean Time Between Failures (MTBF): Captures the indicative or expected time between
consecutive or recurring service failures
• Mean Time to Switchover (min): Provides the expected time to switch over from a service failure
to a replicated failover instance
• Response time (s): Reports the time required to perform the requested operation or task
• Completion time (ms): Provides the time required to complete the initiated or requested task
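To make the first three metrics concrete, the following Python sketch derives availability, total outage duration, and MTBF from a list of outage records. The monitoring window and the outage figures are invented purely for illustration.

# Hypothetical 30-day monitoring window, measured in hours.
window_hours = 30 * 24

# Duration of each service outage during the window, in hours (made-up data).
outages = [0.5, 2.0, 0.25]

downtime = sum(outages)                      # total loss-of-service time
uptime = window_hours - downtime
availability = uptime / window_hours * 100   # Availability (%)

# MTBF: expected operating time between consecutive service failures.
mtbf = uptime / len(outages)

print(f"Availability: {availability:.3f}%")        # ~99.618%
print(f"Total outage duration: {downtime:.2f} h")  # 2.75 h
print(f"MTBF: {mtbf:.1f} h")                       # ~239.1 h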
Risk Mitigation
Risk Frameworks

Three main risk frameworks:

• ISO/IEC 31000:2009
• European Network and Information Security Agency (ENISA)
• National Institute of Standards and Technology (NIST)
Risk Frameworks: ISO/IEC 31000:2009

• Implementing ISO 31000:2009 does not address specific or legal requirements related to risk
assessments, risk reviews, and overall risk management.

• ISO 31000:2009 sets out terms and definitions, principles, a framework, and a process for
managing risk.
ISO/IEC 31000:2009

The ISO/IEC 31000:2009 standard puts forth 11 principles of risk management:

• Risk management creates and protects value.
• Risk management is an integral part of the organizational procedure.
• Risk management is part of decision-making.
• Risk management explicitly addresses uncertainty.
• Risk management is systematic, structured, and timely.
• Risk management is based on the best available information.
• Risk management is tailored.
• Risk management takes human and cultural factors into account.
• Risk management is transparent and inclusive.
• Risk management is dynamic, iterative, and responsive to change.
• Risk management facilitates continual improvement and enhancement of the organization.
European Network and Information Security Agency (ENISA)

• Published Cloud Computing: Benefits, Risks, and Recommendations for Information Security in
2012 for risk management
• Outlined a Top 8 list of risks based on their probability of occurrence and potential impact on an
organization
National Institute of Standards and Technology (NIST)

• Published Cloud Computing Synopsis and Recommendations in 2012
• Focused on risks in a cloud environment and recommendations for their analysis

Note: The document is the U.S. version of the ENISA document and pertains to U.S. federal
government computing resources.
Risk Management Metrics

Five-level scale of risk:


• Minimal
• Low
• Moderate
• High
• Maximum or Critical
The ISO 28000:2007 Supply Chain Standard

• Stipulates security management system requirements


• Applies to all organizations that want to:
a. Establish, implement, maintain, and improve a security management system
b. Assure conformance with stated security management policy
c. Demonstrate such conformance to others
d. Seek certification/registration of its security management system by an Accredited Third-Party
Certification Body
e. Make a self-determination and self-declaration of conformance with ISO 28000:2007
Real-World Scenario: Cost-effectiveness

The contract of a public cloud provider specified maximum monthly upload and download
parameters.

The terms for leaving at the end of any contract period included a timeframe for the
customer to migrate the data from the provider's data center.

Assuming the customer uploads x GB of data each month, there would be 12x GB of data at the
end of a 12-month contract. Leaving at the end of the contract is therefore costly for the
customer: it requires migrating 12x GB of data in the final month, far more than the monthly
download parameter allows.

Question: How can the customer avoid the high cost of migrating the data to a new data
center at the end of the contract?

Answer: The contract should state that the monthly upload and download limits do not apply during the transition period.
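A quick back-of-the-envelope calculation, sketched in Python below, shows why the final-month migration is the sticking point. The monthly volume and the download cap are invented for illustration.

import math

# Hypothetical contract figures.
monthly_upload_gb = 100        # x GB uploaded each month
contract_months = 12
monthly_download_cap_gb = 200  # assumed contractual download limit

total_data_gb = monthly_upload_gb * contract_months  # 12x GB at contract end

# Months needed to migrate everything out if the cap still applies.
months_to_exit = math.ceil(total_data_gb / monthly_download_cap_gb)

print(f"Data accumulated: {total_data_gb} GB")
print(f"Months needed to migrate at the capped rate: {months_to_exit}")

With a waiver of the limits during the transition period, the same migration could complete within the agreed exit timeframe instead of stretching across several capped months.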
Key Takeaways

You are now able to:

• Identify legal requirements and risks associated with the cloud environment
• Describe the potential personal and data privacy issues
• Define the process, methods, and adaptations necessary for an audit
• Describe the different types of cloud-based audit reports
• Identify the impact of diverse geographical locations and legal jurisdictions
• Understand the implications of cloud-to-enterprise risk management
• Explain the importance of cloud contract design and management for outsourcing a cloud environment
• Identify appropriate supply-chain management processes
