OSSTMM 3.0.

The Open Source Security Testing Methodology Manual


Release Candidate 7.3 – Pending the inclusion of Channels, Peer
Review and Final Review

NOT FOR PUBLIC RELEASE


Created by Pete Herzog

Any information contained within this document may not be modified or sold without the express consent of ISECOM.
The OSSTMM is provided for free dissemination under the Open Methodology License (OML) and the
© Creative Commons 2.5 Attribution-NonCommercial-NoDerivs license.

Version Information
The current version is OSSTMM 3.0. This version of the OSSTMM ends the 2.x series. All OSSTMM
versions prior to 3.0 are obsolete.

This version is not the official OSSTMM as it is missing Perfect Security review, Mapping guidelines,
half of Data Networks Security, the sections of Wireless Communications Security, Physical Security,
and Telecommunications Security, attributions for contributors, as well as pending peer and final
reviews. The official OSSTMM will not be labeled as a release candidate.

The original version was published on Monday, December 18, 2000. This current version is published
on Friday, August 5, 2005.

Restrictions
The methodology is free to read, apply and distribute under the Creative Commons 2.5 Attribution-
NonCommercial-NoDerivs license. This document may not be altered or sold either by itself or as
part of a collection. Application of this manual for personal, private, or commercial (for profit) use
is addressed in the Open Methodology License at the end of this manual or at
http://www.isecom.org/oml/.

Summary
The Institute for Security and Open Methodologies, a non-profit organization registered in New York,
United States of America, and in Catalunya, Spain, is releasing the third full version of the
Open Source Security Testing Methodology Manual. This manual has been developed for free use
and dissemination under the auspices of the international, open source community. This manual is
designed to meet international regulations regarding security controls as well as those from many
participating nations. Financing for this manual and all ISECOM projects has been provided
independent of commercial and governmental influence through ISECOM training partnerships,
subscriptions, certifications, and case-study-based research.

The specific guidelines in this manual provide the tools for a formal scientific method to security
testing, the metrics to quantify security within any infrastructure, the rules of engagement for
security testers to assure unbiased and logical analysis, and a standard for providing certified
security test reports.

Furthermore, your evaluation of this document, suggestions for improvements, and the results of its
application are needed for its further development. Contact us at osstmm@isecom.org or visit us at
www.isecom.org.

© Creative Commons 2.5 Attribution-NonCommercial-NoDerivs 2005, The Institute for Security and Open Methodologies

Acknowledgments
Those who have contributed to this manual in consistent, valuable ways are listed here, although
many more people deserve our thanks. Each person is recognized for the type of contribution
made, though not for what specifically was contributed. The use of contribution obscurity
in this document is for the prevention of biases and to promote fresh ideas. If you are interested in
contributing, please see the ISECOM website for more information.

OSSTMM 3.0 Contributors


The following people have contributed to the current version of the OSSTMM.

This list is pending final peer review.

Past Version Contributors


The following people contributed to the earlier versions of the OSSTMM upon which this one is
built.

This list is pending final peer review.


Table of Contents
The Open Source Security Testing Methodology Manual
Version Information
Restrictions
Summary
Acknowledgments
    OSSTMM 3.0 Contributors
    Past Version Contributors
Foreword
Introduction
    Purpose
    Accreditation
    Certification
        Professional Certifications
        Certifications of Compliance
    Document Scope
    Related Terms And Definitions
    Intended Audience
    Analysis
    Reporting
    End Result
Compliance
    Regulations
Rules Of Engagement
    Sales and Marketing
    Assessment / Estimate Delivery
    Contracts and Negotiations
    Scope Definition
    Test Plan
    Test Process
    Reporting
The Methodology
    The Security Presence
    The Channel Map
    The Security Testing Process
    Applying the OSSTMM
    Modules and Tasks
    Defining OSSTMM Security Metrics
        Using Risk Assessment Values
        Operational Security
        Loss Controls
        Security Limitations
        Actual Security
    Rules of Thumb
Open Methodology License (OML)


Foreword
After three and a half years in the making, the OSSTMM 3.0 should express its subject matter more
satisfactorily than previous versions. It's not that the OSSTMM 3.0 promotes revolutionary ideas but
that it applies so many pragmatic concepts that the package itself hopefully will change an
industry.

Why change? We know that security is improving. A greater focus on security over the last 5 years
has led to many new security solutions, some of which were creatively brilliant.
Unfortunately, much was not. Many of the new solutions were just castles in the swamp.
Furthermore, the same overall problems in security continue to apply as we talk about the
weakness of the human element and the dangers of tattle-tale emissions. Technologies change
the way we live, and one of those technologies is security. Do we feel safer? Are we safer?

The idea of developing a full-spectrum security testing methodology that embraces all that we
need to protect meant first understanding how and when “safety” became “security”.
Furthermore, at what point in the corporate oligarchy most of us live in did “integrity” become the
answer to security and “privacy” become the trade? Analysis of the defining points in business
integrity includes the big picture, where humans have devolved into popular opinion resources
and “safety” has been replaced with the feel-good, often knee-jerk response called “security”.
This reveals that we have mostly a problem of vision. And by “we” I mean all of us, from the
legislators to the public, as the overwhelming majority guilty of perpetuating false security
solutions to meet the public outcry for security they can touch and feel. “Are we safer” has
become the case study on solutions that make us feel safer. Shouldn't it be the other way around?
Public opinion may reflect the need to address the current levels of safety, privacy, and integrity,
but it would not improve security if the public really knew and understood the high personal costs
involved. Perhaps it is because the public has less access to the facts, which makes it difficult to
address security. Perhaps it is because people make selfish decisions which affect other people
around them “for their own good”. Perhaps the public doesn't understand the encroaching loss of
personal liberties and privacy. Perhaps....

While public opinion influence has proved effective for improving safety and privacy, it has not
been shown to work for integrity or security. At least not yet. Finally, the question of whether a
security decision meets public opinion's requirement for feeling good is itself not enough. The
safety, security, privacy, and integrity of the events in our lives and others' must be decided by
how they affect the life, culture, and personal progress of everyone. I don't think this will make
some decisions less evil or gruesome for some people (the trade-off), but at least the true
intentions are revealed. And it is for this reason that the OSSTMM has been re-written.

The OSSTMM is about getting facts through properly controlled and executed tests. The
OSSTMM explains how these tests become factual intelligence. I developed the OPST and OPSA
specifically to fill the gap: to get people to understand how to gather actual security data and
create factual intelligence from that data -- a lesson unique in a sea of classes about hacking
tools, year-old exploits, and memorized security vocabulary. Once you know how to get factual
intelligence, though, it is simply another step to make representative metrics. And while many
struggle with defining security metrics based on what you have, the OSSTMM's RAVs focus on
measuring how well it works. As people learn to use RAVs as “nutrition labels” on new security
solutions, explaining what a product affects and by how much, they may yet change the security
industry. Hopefully they will change how we choose the trade-offs of inconveniences for the sake
of security and perhaps improve our lives. For a chance of that happening, I want to thank all the
contributors to the OSSTMM, the ISECOM team, all OPST and OPSA students who care about the
right way to do security testing, all those teaching Hacker Highschool, and all supporters of the
ISECOM projects, especially the ISECOM Partners and Affiliates. Thanks for all your help.

Pete Herzog, Managing Director, ISECOM


Introduction
This Open Source Security Testing Methodology Manual provides a methodology for a thorough
security test. A security test is an accurate measurement of security at an operational level, void of
assumptions and anecdotal evidence. A proper methodology makes for a valid security
measurement which is consistent and repeatable. An open methodology means that it is free from
political and corporate agendas. An open source methodology allows for free dissemination of
information and intellectual property. This is the OSSTMM. It is the collective development of a true
security test and the extraction of factual security metrics.

Started at the end of 2000, the OSSTMM quickly grew over the following years to encompass all
security channels with the applied experience of thousands of reviewers. The OSSTMM had been
housed under the domain ideahamster.org, where it received a steady amount of traffic from
contributors dubbed “ideahamsters”, a nickname for people who spin out new ideas like a
hamster on a wheel. However, as the OSSTMM grew in popularity, the organization and its name
were pressured to grow up as well. In November of 2002, ideahamster announced its name
change to ISECOM, which stands for the Institute for Security and Open Methodologies. By
January 2003, ISECOM had been registered as a non-profit organization in Spain and in the United
States of America, officially serving the public good. By 2005, the OSSTMM no longer stood for a
way of ethical hacking; it had become a way to verify that security was being done right at the
operational level. As audits became mainstream, the application of a security test meant truth-
finding. Auditors reviewing operations found that “best practice” by definition was no longer best
for everyone, no matter how it seemed on paper.

As the OSSTMM continues to grow, it has never lost its vendor-free, politically neutral values. The
methodology continues to provide straight, factual tests for factual answers. It includes
information for project planning, quantifying results, and the rules of engagement for those who
will perform the security tests. As a methodology, it cannot teach you how or why something
should be tested; however, you can incorporate it into your testing needs, harmonize it with
existing laws and policies, and use it as the framework it is to assure a thorough security test
through all channels to information or physical property.

It's recommended you read through this manual once completely before putting it into practice. It
aims to be a straight-forward tool for the implementation and documentation of a security test.
Further assistance is available for those who need help in understanding and implementing this
methodology at the ISECOM website.

Purpose
The primary purpose of this manual is to provide a scientific methodology for the accurate
characterization of security through examination and correlation in a consistent and reliable way.
Great effort has been put into this manual to assure reliable cross-reference to current security
management methodologies, tools, and resources. This manual is adaptable to penetration tests,
ethical hacking, security assessments, vulnerability assessments, red-teaming, blue-teaming, posture
assessments, and security audits.

The secondary purpose is to provide guidelines which, when followed, will allow the auditor to
perform a certified OSSTMM audit. These guidelines exist to assure the following:

1. The test has been conducted thoroughly.
2. The test includes all necessary channels.
3. The posture for the test includes compliance to basic human rights.
4. The results are measurable in a quantifiable way.


5. The results received are consistent and repeatable.
6. The results contain only facts as derived from the tests themselves.
7. The results report the precise security limitation, including all components within that
security limitation, the security management process governing the security limitation,
whether the security limitation has been verified, and the security metric for the scope.

Accreditation
An OSSTMM Audit Report is required to be signed by the tester(s) or
analyst(s) and must accompany all final reports for a verifiable,
OSSTMM-certified test. This audit report shows which Channel and its
associated Modules and Tasks were tested to completion, which were
not tested to completion, and which tests were not applicable and why.
The report must be signed and provided with the final test report to
the client. A report which documents that only specific parts of the
Channel test have been completed due to time constraints, project
problems, or customer refusal may still be recognized as an official
OSSTMM audit.
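
The completion-status bookkeeping described above can be sketched as a small data structure. This is a hypothetical illustration only: the channel, module, and task names are placeholders, not the manual's actual task list, and the manual does not prescribe any particular report format.

```python
from dataclasses import dataclass, field
from enum import Enum

class TaskStatus(Enum):
    COMPLETED = "tested to completion"
    NOT_COMPLETED = "not tested to completion"
    NOT_APPLICABLE = "not applicable"

@dataclass
class TaskRecord:
    module: str
    task: str
    status: TaskStatus
    reason: str = ""  # explanation required whenever a task was skipped

@dataclass
class ChannelAudit:
    channel: str
    tasks: list = field(default_factory=list)

    def add(self, module, task, status, reason=""):
        # The audit report must say *why* a task was not completed.
        if status is not TaskStatus.COMPLETED and not reason:
            raise ValueError("incomplete or skipped tasks need a stated reason")
        self.tasks.append(TaskRecord(module, task, status, reason))

    def summary(self):
        # Count tasks per status for the report's overview section.
        counts = {s: 0 for s in TaskStatus}
        for t in self.tasks:
            counts[t.status] += 1
        return counts

# Illustrative entries for a single channel:
audit = ChannelAudit("Data Networks")
audit.add("Network Surveying", "Identify live hosts", TaskStatus.COMPLETED)
audit.add("Denial of Service", "Flood testing", TaskStatus.NOT_APPLICABLE,
          reason="excluded by client in the rules of engagement")
print(audit.summary())
```

The design choice here mirrors the paragraph above: every non-completed task carries an explicit reason, so a partially completed Channel test can still be documented honestly.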

An OSSTMM Audit report provides the following benefits:

(Figure 1: OSSTMM Audit Seal)

• Serves as proof of thorough OSSTMM testing.
• Makes auditors responsible for the test.
• Makes a clear statement to the client.
• Provides a convenient overview more so than an executive summary.
• Provides a valid checklist for the tester.
• Provides clearly calculated metrics.

Certification
No specific certification is required to use this manual, produce an OSSTMM Audit, or complete an
OSSTMM Audit Report. Certification is provided as a means of independent, non-political, third-
party validation and not as a requirement for applying this manual.

Anyone using this manual, with or without the associated OSSTMM Audit Report, is said to be
performing an OSSTMM audit (or “audit”) and is referred to as an OSSTMM Auditor (or “auditor”),
which requires testing and analysis based on this manual. An OSSTMM Expert is one who has full
knowledge of the reasons for the applications of this manual and can apply it in harmonization
with security management methodologies beyond its scope.

Professional Certifications
OPST / OPSA / OPSE
Individual certification is available through ISECOM for the applied skills and
knowledge recognized for professional security testing and analysis, and for
meeting the formal process and high ethical standards outlined in the
OSSTMM Rules of Engagement. The OPST (OSSTMM Professional Security Tester),
the OPSA (OSSTMM Professional Security Analyst), and the OPSE (OSSTMM
Professional Security Expert) are the official certifications for OSSTMM Auditors. More information
is available on the ISECOM website.


Certifications of Compliance
Organizations
OSSTMM certification is available for organizations, or parts of organizations, which maintain a
quarterly RAV level of 95% and validate a yearly OSSTMM security test from a third-party auditor.
Validation of security tests or quarterly metrics is subject to the ISECOM validation requirements to
assure consistency and integrity.
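
The quarterly requirement above can be sketched as a simple check. This assumes RAV levels are reported as percentages, one per quarter; the RAV calculation itself is defined later in the manual and is not reproduced here.

```python
def meets_compliance(quarterly_ravs, threshold=95.0):
    """Check whether every reported quarterly RAV level meets the
    certification threshold. Expects four values, one per quarter."""
    if len(quarterly_ravs) != 4:
        raise ValueError("expected one RAV level per quarter")
    return all(rav >= threshold for rav in quarterly_ravs)

print(meets_compliance([96.2, 97.0, 95.1, 98.4]))  # True
print(meets_compliance([96.2, 93.8, 95.1, 98.4]))  # False: one quarter below 95%
```

Note that the criterion is per quarter, not a yearly average: a single quarter below the threshold fails the check.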

Products and Services

OSSTMM evaluation seals are available for products, services, and business processes. This seal
defines the operational state of security, privacy, and legislative governance. It serves primarily as
a clear overview of functions, security limitations, and the delta, also known as the “catalytic
effects” incited on the scope. The products, services, and processes then carry this visible
certification seal.

Document Scope
The “scope” of this document is all components of a complete and thorough security test, the
calculated metrics from the tests, project planning assistance for these tests through the Rules of
Thumb, and the Rules of Engagement on how to market, perform, and deliver this test properly
and legally. The tests include all 5 Channels of the security presence to be tested: physical,
wireless, telecommunications, data networks, and personnel. This manual makes no assumptions
based on scope size or location. Tests can be performed from the same location as the scope or
from a remote location. Tests can be performed with any level of scope knowledge (i.e. black
box, gray box, white box); however, each module in the methodology must be addressed and no
test may be assumed.

Related Terms And Definitions
Throughout this manual we refer to words and terms that may be construed with other intents or
meanings, especially in international translations. This manual attempts to use standard terms and
definitions as found in international testing vocabulary and based on NCSC-TG-004 (Teal Green
Book) from the U.S. Department of Defense. For specific tasks where no definition exists, this
manual will be as clear as possible to avoid misleading information.

Intended Audience
This manual is written for factual security verification on a professional level. No specific type of
audience is strictly required. However, those who lack applied knowledge and experience in the
subject of security may want to first review technical details regarding security with the Hacker
Highschool lessons available through ISECOM at www.hackerhighschool.org.

Analysis
This document does not include direct analysis information regarding the data collected when
using this manual. All analysis is the result of understanding the posture of the scope (the
appropriate laws, industry regulations, and business needs appropriate to the particular client and
the regulations for security and privacy for the client’s regions of operation).


Reporting
Analysis is often accompanied by solutions or consulting, neither of which is required in an OSSTMM
Audit. Solutions are traditionally provided by default as a means of expressing the vulnerability or
weakness and as a value-add to a security test. While it is considered default to provide solutions
within a security report, it should never be considered mandatory, as oftentimes there are no
proper solutions given the limited view an auditor has of the client's business justifications.

It is not in every case that the auditor will have to deal with juxtapositions between an existing or
necessary security mechanism or process and the intended or unintended act which will defeat it.
However, at every point within the engagement, the tester must report the factual current state of
security, with or without the means of defeating it. This means that the auditor must, at minimum,
identify the current state of security, report identified issues, and report the processes which
presided over each security limitation. That alone will allow for an accurate measurement of the
current state of security.

The OSSTMM Audit Reports allow for an actual measurement of security and loss controls. This is
calculated through RAVs based on the thoroughness of the tests, whose results are categorized as
“identified” or “verified”. Misrepresentation of results in reporting will lead to fraudulent
verifications and, in effect, a different actual security level. For this, the auditor must accept
responsibility and hold the liability for accuracy in result reporting.
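
One way to picture the dependence of the metric on test thoroughness is the ratio of verified to merely identified findings. This is a hypothetical illustration only, not the RAV formula itself (which is defined in the metrics section of the manual); the finding names are invented.

```python
def verification_ratio(results):
    """results maps each finding to 'identified' or 'verified'.
    Returns the fraction of findings that were actually verified by
    testing, rather than merely identified -- a proxy for thoroughness."""
    if not results:
        return 0.0
    verified = sum(1 for state in results.values() if state == "verified")
    return verified / len(results)

findings = {
    "open service on tcp/8080": "verified",
    "default credentials on admin console": "verified",
    "possible SQL injection in login form": "identified",  # not confirmed by a test
}
print(round(verification_ratio(findings), 2))  # 0.67
```

Reporting an identified finding as verified would inflate this ratio, which is exactly the misrepresentation the paragraph above warns against.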

End Result
The ultimate goal is to set a standard in security testing methodology which when used results in
meeting practical and operational security requirements. The indirect result is creating a discipline
that can act as a central point in all security tests regardless of the size of the organization,
technology, or defenses.


Compliance
This manual was developed to satisfy the verification requirements in major bodies of legislation. The
verification tests performed provide the necessary information to analyze for security and privacy
concerns as per the following legislation, regulations, and practices.

Regulations
Australia
• Privacy Act Amendments of Australia-- Act No. 119 of 1988 as amended, prepared on 2
August 2001 incorporating amendments up to Act No. 55 of 2001. The Privacy Act 1988 (the
Privacy Act) seeks to balance individual privacy with the public interest in law enforcement
and regulatory objectives of government.
• National Privacy Principle (NPP) 6 provides an individual with a right of access to
information held about them by an organization.
• National Privacy Principle (NPP) 4.1 provides that an organization must take reasonable
steps to protect the personal information it holds from misuse and loss and from
unauthorized access, modification or disclosure.
• Commonwealth Privacy Act

Austria
• Austrian Data Protection Act 2000 (Bundesgesetz über den Schutz personenbezogener
Daten (Datenschutzgesetz 2000 - DSG 2000)) specifically requirements of §14

Canada
• Corporate Governance requirements
• Provincial Law of Quebec, Canada Act Respecting the Protection of Personal Information in
the Private Sector (1993).

Germany
• Deutsche Bundesdatenschutzgesetz (BDSG)-- Artikel 1 des Gesetzes zur Fortentwicklung der
Datenverarbeitung und des Datenschutzes from 20. December 1990, BGBl. I S. 2954, 2955,
zuletzt geändert durch das Gesetz zur Neuordnung des Postwesens und der
Telekommunikation vom 14. September 1994, BGBl. I S. 2325
• IT Baseline Protection Manual (IT Grundschutzhandbuch) Issued by Bundesamt für Sicherheit
in der Informationstechnik (Federal Office for Information Security (BSI)) available at
http://www.bsi.de/gshb/english/menue.htm
• German IT systems: safeguards S6.68 (Testing the effectiveness of the management system
for the handling of security incidents) and S6.67 (Use of detection measures for security
incidents)

India
• The Information Technology Act, 2000.

Italy
• D.Lgs. n. 196/2003 - Codice in materia di protezione dei dati personali

Malaysia
• Computer Fraud and Abuse Act
• The Computer Crimes Act

Mexico
• Ley Federal de Transparencia y Acceso a la Información Pública Gubernamental


• Ley de Propiedad Industrial (LPI)
• Ley Federal de Derechos de Autor (LFDA) and its regulations (RLFDA)
• Código Penal Federal y Código Federal de Procedimientos Penales

Singapore
• Computer Misuse Act
• E-Commerce Code for Protection of Personal Information and Communications of
Consumers of Internet Commerce

Spain
• Spanish LOPD Ley Organica de Protección de Datos de Carácter Personal
• LSSICE 31/2002 (Ley de Servicios de la Sociedad de la Información y el Correo Electronico),
July 11, 2002
• RD 14/1999 (Real Decreto de Regulación de la Firma Electrónica), September 17, 1999
• RD 994/1999 (Real Decreto sobre el Reglamento de Seguridad de los Ficheros con
Información de Carácter Personal), June 11, 1999

Switzerland
• Bundesverfassung (BV) vom 18. Dezember 1998, Artikel 7 und 13
• Obligationenrecht (OR) 2002 (Stand am 1. Oktober 2002), Artikel 707, 716, 716b, 717, 727ff
und 321a
• Datenschutzgesetz (DSG) vom 19. Juni 1992 (Stand am 3. Oktober 2000)

Thailand
• Computer Crime Law
• Privacy Data Protection Law

United Kingdom
• UK Data Protection Act 1998
• Corporate Governance requirements
• IT Information Library available at http://www.ogc.gov.uk/index.asp?id=2261 issued by the
British Office for Government Commerce (OGC)
• BSI ISO 17799-2000 (BS 7799) - this manual fully complies with all of the remote auditing and
testing requirements of BS7799 (and its International equivalent ISO 17799) for information
security auditing.
• UK CESG CHECK - this manual is compliant to meeting the requirements of the CESG IT
Health CHECK service.

United States of America


• U.S. Gramm-Leach-Bliley Act (GLBA)
• Clinger-Cohen Act
• Government Performance and Results Act
• FTC Act, 15 U.S.C. 45(a), Section 5(a)
• Children’s Online Privacy Protection Act (COPPA)
• Anticybersquatting Consumer Protection Act (ACPA)
• Federal Information Security Management Act (FISMA)
• U.S. Sarbanes-Oxley Act (SOX)
• California Individual Privacy Senate Bill - SB1386
• USA Government Information Security Reform Act of 2000 section 3534(a)(1)(A)
• MITRE Common Vulnerabilities and Exposures - the RAV Security Limitations described within
this manual comply with the CVE descriptions for more efficient categorization
(http://cve.mitre.org/about/terminology.html).
• DoD FM 31-21, Guerrilla Warfare and Special Forces Operations.
• Health Insurance Portability and Accountability Act of 1996 (HIPAA).

• OCR HIPAA Privacy TA 164.502E.001, Business Associates [45 CFR §§ 160.103, 164.502(e),
164.514(e)]
• OCR HIPAA Privacy TA 164.514E.001, Health-Related Communications and Marketing [45
CFR §§ 164.501, 164.514(e)]
• OCR HIPAA Privacy TA 164.502B.001, Minimum Necessary [45 CFR §§ 164.502(b), 164.514(d)]
• OCR HIPAA Privacy TA 164.501.002, Payment [45 CFR 164.501]
• HIPAA Standards for Privacy of Individually Identifiable Health Information (45 CFR parts 160
and 164)
• FDA: Computerized Systems Used in Clinical Trials; Electronic Records; Electronic Signatures
[21 CFR Part 11]

NIST Publications
This manual complies, through its methodology for remote security testing and auditing, with
the following National Institute of Standards and Technology (NIST) publications:
• An Introduction to Computer Security: The NIST Handbook, 800-12
• Guidelines on Firewalls and Firewall Policy, 800-41
• Information Technology Security Training Requirements: A Role- and Performance-Based
Model, 800-16
• Guideline on Network Security Testing, 800-42
• Security Self-Assessment Guide for Information Technology Systems, 800-26
• PBX Vulnerability Analysis: Finding Holes in Your PBX Before Someone Else Does, 800-24
• Risk Management Guide for Information Technology Systems, 800-30
• Intrusion Detection Systems, 800-31
• Building an Information Technology Security Awareness and Training Program, 800-50
• DRAFT NIST Special Publication 800-53, Recommended Security Controls for Federal
Information Systems
• Security Metrics Guide for Information Technology Systems, 800-55
• Guide for the Security Certification and Accreditation of Federal Information Systems, 800-37
• DRAFT: An Introductory Resource Guide for Implementing the Health Insurance Portability
and Accountability Act (HIPAA) Security Rule, 800-66

Federal Financial Institutions Examination Council (FFIEC):


• Electronic Operations, 12 CFR Part 555
• Interagency Guidelines Establishing Standards for Safeguarding Customer Information, 12
CFR 570 Appendix B
• Interagency Guidelines Establishing Standards for Safety and Soundness, 12 CFR 570,
Appendix A
• Privacy of Consumer Financial Information, 12 CFR 573
• Procedures for Monitoring Bank Secrecy Act Compliance, 12 CFR 563.177
• Security Procedures Under the Bank Protection Act, 12 CFR 568
• Suspicious Activity Reports and Other Reports and Statements, 12 CFR 563.180

General
• Payment Card Industry (PCI) Data Security Requirements Version 1.0 December 15, 2004
• AICPA SAS 70 - verification of process control activities as applicable to the Service
Auditor's Report in Statement on Auditing Standards (SAS) No. 70 from the American
Institute of Certified Public Accountants guidance for internal auditors.
• SAC - this manual is compliant in design with The Institute of Internal Auditors (IIA) Systems
Assurance and Control (SAC) model.
• ITIL - this manual is applicable to the review of operational security controls and process
inter-relations according to the IT Infrastructure Library (ITIL).

Rules Of Engagement
These rules define the operational guidelines for acceptable and ethical practice in the marketing
and sale of security testing, the performance of testing work, and the handling of the results of
testing engagements. While those who are affiliates, partners, team members, or associates of
ISECOM are contractually required to uphold the following rules of engagement, proper use of the
OSSTMM also requires adherence to these rules. Failure of any person or organization to meet
these rules can be reported directly to ISECOM (osstmm<at>isecom.org).

Sales and Marketing


1.1 Fear, uncertainty, doubt, and deception may not be used in sales or marketing
presentations, websites, supporting materials, reports, or discussions of security testing for the
purpose of selling or providing security tests. This includes but is not limited to personally
unverified crimes, facts, criminal or hacker profiles, and statistics used to assist sales.
1.2 The offering of free services for failure to penetrate or provide trophies from the target is
forbidden.
1.3 Public cracking, hacking, and trespass contests to promote security assurance for sales or
marketing of security testing or security products are forbidden.
1.4 Performing security tests against any scope without explicit written permission from the
appropriate authority is strictly forbidden.
1.5 The use of names of past clients for whom you have provided security testing is forbidden
even upon consent of said client. This does not include consented referrals between divisions,
branches, offices, or institutions of the same organization or government.
1.6 It is required to advise clients truthfully and factually with regard to their security and security
measures. Ignorance is not an excuse for dishonest consultancy.

Assessment / Estimate Delivery


2.1 Verifying security limitations without explicit written permission is forbidden.
2.2 The security testing of obviously highly insecure and unstable systems, locations, and processes
is forbidden until the proper security infrastructure has been put in place.

Contracts and Negotiations


3.1 With or without a Non-Disclosure Agreement contract, the security tester is required to provide
confidentiality and non-disclosure of customer information and test results.
3.2 The tester must always assume a limited amount of liability commensurate with responsibility.
Acceptable limited liability must be at least equal to the cost of the service. This includes both
malicious and non-malicious errors and project mismanagement.
3.3 Contracts must clearly explain the limits and dangers of the security test as part of the
statement of work.
3.4 In the case of remote testing, the contract must include the origin of the testers by address,
telephone number and/or IP address.
3.5 Contracts must contain emergency contact persons and phone numbers.
3.6 The contract must include clear, specific permissions for tests involving survivability failures,
denial of service, process testing, and social engineering.
3.7 Contracts must contain the process for future contract and statement of work (SOW) changes.
3.8 Contracts must contain verified conflicts of interest for a factual security test and report.

Scope Definition
4.1 The scope must be clearly defined contractually before verifying vulnerable services.
4.2 The scope must clearly explain the limits of the security test.

Test Plan
5.1 The test plan must include both calendar time and man hours.
5.2 The test plan must include hours of testing.
5.3 The test plan may not contain plans, processes, techniques, or procedures which are outside
the area of expertise or competence level gained by training and education of the tester(s).

Test Process
6.1 The tester must respect and maintain the safety, health, welfare, and privacy of the public both
within and outside the test scope.
6.2 The tester must always operate within the law of the physical location(s) of the scope.
6.3 The client must provide a signed statement which provides testing permission, exempting the
auditors from trespass within the scope and limiting damages liability to a minimum of the cost
of the audit service.
6.4 The client may not make unusual or major scope changes during testing.
6.5 To prevent temporary increases in security only for the duration of the test, only notify key
people about the testing. It is the client’s judgment which discerns who the key people are;
however, it is assumed that they will be information and policy gatekeepers and managers of
security processes, incident response, and security operations.
6.6 If necessary for privileged testing, the client must provide two separate access tokens,
whether they be logins and passwords, certificates, secure ID numbers, badges, etc., and they
should be typical of the users of the privileges being tested (no especially empty or secure
accesses).
6.7 When testing includes known privileges, the tester must first test without privileges in a black box
environment prior to testing again with privileges.
6.8 The testers are required to know their tools, where the tools came from, how the tools work, and
have them tested in a restricted test area before using the tools on the client organization.
6.9 Tests which explicitly test the denial of a service or process and/or survivability may only be
performed with explicit permission and only where no damage will be done outside of the
scope or to the community in which the scope resides.
6.10 Tests involving people may only be performed on those identified in the scope and may not
include private persons, customers, partners, associates, or other external entities without
written permission from those entities.
6.11 High risk vulnerabilities such as discovered breaches, vulnerabilities with known, high
exploitation rates, vulnerabilities which are exploitable for full, unmonitored or untraceable
access, or which may put immediate lives at risk, discovered during testing must be reported to
the customer with a practical solution as soon as they are found.
6.12 Any form of flood testing where a scope is overwhelmed from a larger and stronger source is
forbidden over non-privately owned channels.
6.13 The tester may not leave the scope in a position of less actual security than it was in when
provided for testing.

Reporting
7.1 The tester must respect the privacy of all individuals and maintain their privacy for all results.
7.2 Results involving people untrained in security or non-security personnel may only be reported in
non-identifying or statistical means.
7.3 The tester and analyst may not sign test results and audit reports in which they were not
directly involved.
7.4 Reports must remain objective and without untruths or any personally directed malice.
7.5 Client notifications are required whenever the tester changes the testing plan, changes the
source test venue, or has high risk findings; prior to running new, high risk, or high traffic tests; if
any testing problems have occurred; and with regular progress updates.
7.6 Reports must include valid and practical solutions for discovered security limitations.
7.7 Reports must clearly mark all unknowns and anomalies.

7.8 Reports must clearly state both discovered successful and failed security measures and loss
controls.
7.9 Reports must use only quantitative metrics for measuring security. These metrics must be based
on facts and void of subjective interpretations.
7.10 The client must be notified when the report is being sent, so as to expect its arrival and to
confirm receipt of delivery.
7.11 All communication channels for delivery of report must be end to end confidential.
7.12 Results and reports may never be used for commercial gain.

The Methodology
This methodology is designed on the principle of verifying the security of operations. While it does
not test policy and process directly, a successful test of operations will need to analyze the gaps
between operations and processes and between operations and policy. More simply put, “How
do current operations work, how does management think they work, and how do they need to
work?” (Illustration 1)

Illustration 1: OSSTMM Test Map

The Security Presence


The security presence is the total possible security environment of any physical or information
property. It comprises five channels, which also divide into the sections of this manual (Table 1).
Channels are the interactive space that extends from that which can be touched and seen to
that which is intangible and imperceptible to humans without mechanical assistance. The
channels each overlap, and a test target may contain elements of all sections. Sections are the 5
segments of the OSSTMM methodology which map tests for each of the 5 channels.

The sections in this manual are categorized under:

1. HUMSEC – Personnel Security
2. PHYSSEC/OCOKA - Physical Security
3. COMSEC - Telecommunications Security
4. COMSEC - Data Networks Security
5. SIGSEC/ELSEC - Wireless Communications Security

The Channel Map


The channel map (Figure 2) is a conceptualization of the interactive space to physical or
intellectual property.

While the 5 channels may be represented in any way, within this manual they are organized as
recognizable means of communication and interaction. The organization into 5 channels is
designed to facilitate the test process while minimizing the inefficient overhead that is often
associated with strict methodologies.

Figure 2, The Map of the Security Presence.

Table 1, The Security Presence Channel Descriptions


Channel Description

Personnel  Comprises the human element of interaction, where people are the gatekeepers
of information and physical property.

Physical  Comprises the tangible element of security, where interaction requires physical
effort to manipulate.

Telecommunications  Comprises the telecommunication networks, digital or analog, where interaction
takes place over established network lines without human assistance.

Wireless Communications  Comprises all non-human interaction which takes place over the known
communication spectrum.

Data Networks  Comprises all data networks where interaction takes place over established
network lines without human assistance.

The Security Testing Process


“Security Testing” is an umbrella term encompassing all forms and styles of security tests, from the
intrusion attempt to the hands-on audit. Such a test is often referred to as an OSSTMM audit;
applying the OSSTMM methodology does not detract from the style of the test. Furthermore, the
application of the OSSTMM should assure that every test, regardless of style, reports that which has
not been tested under any statement of work which explicitly states this methodology.

Additionally, as a standard, the OSSTMM is not to be followed “off-the-shelf”. Practical
implementation of the OSSTMM requires defining individual testing practices to meet the
requirements defined within this manual. This means that the security testing you practice will
be your own technique and the OSSTMM combined, where your technique reflects the type of test
you have chosen to do. Test types may include, but aren't limited to, the following common types:

• blind/blackbox - the tester knows nothing about the target but the client is fully aware of
what the tester will be attempting and when.

• double blind/blackbox - the tester knows nothing about the target and the client knows
nothing of the techniques of the tester.
• graybox - the tester knows about the security operations of the target and the client is fully
aware of what the tester will be attempting and when.
• double graybox - the tester knows about the security operations of the target and the client
knows about the techniques of the tester but not how or when they will be applied.
• tandem/whitebox - the tester has full knowledge of the target, its processes, and security
and the client is also fully aware of what the tester will be attempting and when.
• reversal - the tester has full knowledge of the target, its processes, and security but the
client is unaware of the tester's techniques.
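As a memory aid only, the six test types above reduce to two variables: the tester's knowledge of the target and the client's awareness of the test. The following Python sketch is purely illustrative; the axis labels are my own shorthand, not OSSTMM terminology.

```python
# Illustrative mapping of the common test types to their two defining axes.
# The axis values ("none", "operations", "full", "partial") are assumptions
# used as shorthand here, not terms defined by the OSSTMM.

TEST_TYPES = {
    # name: (tester knowledge of target, client awareness of the test)
    "blind/blackbox":        ("none", "full"),
    "double blind/blackbox": ("none", "none"),
    "graybox":               ("operations", "full"),
    "double graybox":        ("operations", "partial"),  # knows techniques, not timing
    "tandem/whitebox":       ("full", "full"),
    "reversal":              ("full", "none"),
}

def describe(test_type: str) -> str:
    """Summarize a test type by its two knowledge axes."""
    tester, client = TEST_TYPES[test_type]
    return f"{test_type}: tester knowledge={tester}, client awareness={client}"

if __name__ == "__main__":
    for name in TEST_TYPES:
        print(describe(name))
```

Reading the table this way makes clear that, for example, reversal is tandem/whitebox with the client's awareness removed.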

Applying the OSSTMM


The OSSTMM methodology has a solid base
which may seem quite involved but it is
actually simple in practice. As you can see in
Figure 3, it is like a flowchart, however, unlike a
true flowchart, the flow, represented by the
arrows, may go back as well as forward. In this
way, the flow is more integrated and while the
beginning and the end are clear, the tester
defines the path based on the scope and
inevitably the time allotted and resources
applied to the test. This is because no
methodology can accurately assume the
business justification for channels which have
been provided. More directly, the OSSTMM
doesn’t assume best practice. Best practice,
or similar, is only best for some, generally the
originator. Business dictates how services
should be offered and those services dictate
the requirements for operational security.
Therefore a methodology that is invoked
differently for each test and by each tester
can still have the same end result if the tester
completes the methodology. This is the
purpose of a methodology. This is also why
one of the foundations of this methodology is
to record precisely what was not tested, for any of various reasons. By comparing what
was tested, and the depth of the testing, with other tests, it is possible to measure operational
security (OPSEC) based on the test results.

Figure 3: OSSTMM 3.0 Methodology

The methodology flow of OSSTMM 3.0 does not allow for a separation between data collection and
verification testing of and on that collected data. Nor does it differentiate between active and
passive testing. Active testing is the invasive or aggressive attempt to create an interaction with
the target and passive testing is the monitoring of non-instigated emanations from the target into
the tester's environment. A thorough security test requires both active and passive tests;
furthermore, it is often difficult to be sure that information gathered from a passive test is not
actually delayed or unrelated activity from a previous active test. It is the nature of testing that the
introduction of any outside event, including passively receiving information from the target, has the
potential to change the nature of the target, lowering the quality of a true test of operational
security. This will certainly be more prevalent in testing HUMSEC, PHYSSEC, and SIGSEC.

Realistically, in a security test there will be a point where all information is the result of data
collection. This is known as the simplified test sequence (Figure 4). The simplified test sequence
shows that the process of a security test is the verification of information to obtain a result. From the
entry point in a test, the tester needs information about the target. As the test continues, less
information is gathered as more verification tests are performed on the already collected data. At
some point, there is no more pertinent data to collect and all data collected has been verified.
This is then the result of the test.
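The simplified test sequence can be sketched as a loop over two hypothetical helpers: collect(), which gathers data about the target, and verify(), which confirms it. The helper names and structure are assumptions for illustration only, not part of the OSSTMM.

```python
def run_test(collect, verify):
    """Alternate data collection and verification until no pertinent data remains."""
    seen = set(collect(known=set()))   # entry point: initial information gathering
    unverified = set(seen)
    results = set()
    while unverified:
        datum = unverified.pop()
        results.add(verify(datum))     # each datum is verified before becoming a result
        # verification may surface new leads; collection tapers off as the test matures
        new = set(collect(known=results)) - seen
        seen |= new
        unverified |= new
    return results                     # all collected data has been verified

# toy usage: one lead, verified once, producing the final result
assume_collect = lambda known: set() if known else {"port-80"}
assume_verify = lambda d: d + ":confirmed"
print(run_test(assume_collect, assume_verify))   # {'port-80:confirmed'}
```

The loop mirrors the figure: as results accumulate, collect() finds less that is new, and the test terminates when everything gathered has been verified.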

In defining the methodology of testing, it is important not to constrict the creativity of the tester by
introducing standards so formal and unrelenting that the quality of the test suffers. Additionally, it is
important to leave tasks open to some interpretation where exact definition will cause the
methodology to suffer when new technology is introduced.

Figure 4: Simplified Test Sequence

The OSSTMM begins with a review of the scope's posture
and ends with verification and result comparisons to any alarms, alerts, or access logs. This is a full-
circle concept where the first step is to be aware of the legalities and operational requirements of
those or that which operate and interact with the target. The methodology then ends with
reviewing the records our tests have left behind. In simplest terms: you know what you need to do,
you do it, and then you check what you have done. The doing part itself, however, is what
appears so involved. In Table 2, the “doing” is described in the module description for each
separate security channel test. Some sections require more detail because the tests themselves
may apply to many different kinds of technologies, some of which may straddle the border
between two or more channels. For example, commonly found wireless LANs must be tested under
both data networks and wireless communications. This is why a properly defined testing scope is so
important. Channel hybridization is a constant and should not be overlooked. The OSSTMM is fully
capable of a “sidewalk to kernel” security test and therefore is completely capable of applying
these tests to a target whether the channels are clearly distinct and separated or void of any
boundaries. Therefore, for all scopes, the auditor should always anticipate the need to define the
scope to include all channels. Only under investigation will it become evident whether the scope
contains targets in each channel; anticipating all channels avoids the prospect of missing active
ones.

Table 2: Standard Methodology Module Definitions

OSSTMM Modules Descriptions

Posture Review A thorough review of the regulations, legalities and operational environment of
processes interacting with the scope.

Logistics Reviewing controls such as distance, speed, and fallibility (yours and theirs) to
determine margins of accuracy in the results.

Active Detection Verification Verifying the practice and breadth of intrusion detection and response, including
response predictability.

Visibility Audit  Determining the applicable gateways to be tested within the scope. “Visible” targets
provide opportunity. In this regard, visibility is regarded as “presence” and not limited
to human sight.

Controls Verification Measuring the use and effectiveness of non-fallible loss controls on determined targets.

Access Verification  Measuring the breadth and depth of interactive access points within the scope.
Trust Verification Determining the trust relationships from and between gateways within the scope. A
trust relationship exists wherever the gateway is freely interactive with another gateway
inside or outside the scope which is governed by a different security policy,
administration, or process.

Process Verification Determining the existence of security processes and measuring these processes for
effectiveness. A security process is the maintenance of existing actual security levels
and/or due diligence required by policy, law, or regulation.

Configuration Verification Determining the proper configuration of access controls, loss controls, and
applications.

Property Validation Measuring the breadth and depth of use of illegal and unlicensed intellectual property
or applications within the scope.
Segregation Review A gap analysis between privacy requirements by law, by right, and by regulations
within operations.

Exposure Verification Uncovering information which provides for or leads to authenticated access or allows
for access to multiple locations with the same authentication.

Competitive Intelligence Scouting  The discovery of publicly available information, directly or indirectly,
which could harm or adversely affect the scope through external, competitive means.

Containment Measures Testing  Determining and measuring the effective use of quarantine for all access
to and within the scope.

Privileges Audit Mapping and measuring the impact of misuse of privileges or unauthorized privilege
escalation.

Survivability Validation Determining and measuring the resistance of the gateways within the scope to
excessive or adverse changes.

Alert and Log Review A gap analysis between activities performed with the test and the true depth of those
activities as recorded or from third-party perceptions.

While the blurring of channels in a scope does not require a new methodology, the fact remains
that legislative or industry posture may define new or specific security requirements that hold
requirements higher than this methodology. For this, the OSSTMM can be harmonized with these
new doctrines to fulfill these new requirements. As an OSSTMM audit will produce the necessary
facts about the operational security state as well as specific, repeatable measurements of that
state, any additional reporting required can be based upon the OSSTMM results.

Modules and Tasks


The sections are broken down into modules and tasks. Where the sections are specific points in the
security map that overlap with each other, forming a whole that is much greater than the sum of its
parts, those parts are the modules. Furthermore, the parts which make up a module are specific
tests for that module, called tasks.

Each module has an input and an output. The input is the information used in performing each
task. The output is the result of completed tasks. Output may or may not be analyzed data (also
known as intelligence) to serve as an input for another module. It will even be likely that the same
output serves as the input for more than one module or section.

Some tasks yield no output; this means that modules will exist for which there is no input. Modules
which have no input can be ignored during testing but must be later documented with
explanation for not having been performed. Ignored modules do not necessarily indicate an
inferior test; rather they may indicate superior security.

Modules that have no output as the result can mean one of five things:

• The channel had been obstructed in some way during the performing of the tasks.
• The tasks were not properly performed.
• The tasks were not applicable.
• The task result data has been improperly analyzed.
• The tasks revealed superior security.
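The input/output rule above can be sketched in code. This is an illustration only, with hypothetical names not defined by the manual; it shows a module skipping on missing input while still documenting the omission, as the methodology requires.

```python
def run_module(name, tasks, module_input, report):
    """Run a module's tasks on its input; document modules that cannot run.

    `tasks` is a list of callables taking the input and returning an output
    or None. Returns the module output (a list) or None if there is none.
    """
    if module_input is None:
        # A module with no input is ignored, but must be documented with an
        # explanation for not having been performed.
        report.append(f"{name}: not performed (no input available)")
        return None
    outputs = [out for task in tasks if (out := task(module_input)) is not None]
    report.append(f"{name}: {len(outputs)} task output(s)")
    # An empty output is itself a finding: obstruction, improper performance,
    # inapplicability, analysis error, or superior security.
    return outputs or None

# toy usage: the output of one module serves as the input of another
log = []
visibility = run_module("Visibility Audit", [lambda x: x + ["host-a"]], ["scope"], log)
run_module("Access Verification", [lambda x: len(x)], visibility and visibility[0], log)
run_module("Privileges Audit", [], None, log)   # no input: skipped but documented
```

The key property is that skipping is never silent: every module either produces output for later modules or leaves an explanatory record in the report.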

It is important that impartiality and open-mindedness exist in performing the tasks of each module.
Searching for something you expect to find may lead to your finding only what you are searching
for. In this methodology, each module begins as an input and ends as an output exactly for the
reason of keeping bias low. Each module gives a direction of what should be revealed to move to
another point within the methodology.

Testing time with the modules is relative to the scope. The scope is not the predefined range of
targets but just the range of targets for the chosen test perspective within the security presence.
The perspective is the direction of the test in relation to the security of the operation being tested.
For example, if you test the physical security of a door, then your test would most likely have two
perspectives: the door's functional security from the outside of the house to the inside, and then
from the inside of the house to the outside. Determining the proper scope based on perspective is
important because there may still be targets outside of the perspective yet within the security
presence which will not make up the current testing scope. Overall, larger scopes and multiple
perspectives mean more time spent on each module and its tasks. The amount of time allowed
before returning with output data is not determined by this methodology and depends on the
tester, the test environment, and the scope of the testing.

A previous risk assessment may be incorporated to determine scope; however, this should not
affect the modules, as they are not independent tests. Modules are parts of a whole, and the
assumption that any particular module can be omitted for any reason is false and will lead to an
improper test. However, modules and tasks may be omitted without degrading the quality of the
test if there is no input required for that particular module. The difference is that in the first case the
module or task is ignored based on an assumption, and in the second the test itself dictated that
the module or task cannot be performed.

With the provision of testing as a service, it is highly important to communicate to the
commissioning party exactly which parts of the scope and the security presence have not been or
will not be tested, thereby managing expectations and potentially inappropriate faith in the
security of a system.

Defining OSSTMM Security Metrics


The completion of a thorough security test has the advantage of providing accurate metrics on the
state of security. The less thorough the test, the less accurate the overall metric. Likewise, less
skilled testers and less experienced analysts will adversely affect the quality of the metric.
Therefore, a successful metric of security requires a test which measures from all the required
perspectives, accounts for inaccuracies and misrepresentations in the test data, and is performed
by appropriately skilled and experienced security professionals. Faults in these requirements will
result in lower security determinations and lower quality measurements.

OSSTMM Security metrics are called Risk Assessment Values (RAVs). While not a risk assessment in
itself, an OSSTMM test and RAVs will provide the factual basis for a more accurate and more
complete risk assessment.

Using Risk Assessment Values


The OSSTMM defines and quantifies three areas within the chosen scope (see Table 3), and all
three of these areas together create the big picture defined as Actual Security. To assure proper
measurements when using the RAVs, you must maintain the quantification of your scope and the
perspective used to test in a consistent format. RAVs are scalable; therefore, calculations are
based on the scope. This allows for comparable values between two or more scopes regardless of
the size of the scope or how the scope was indexed. To index a scope is to define how individual
targets are counted. This means a scope of 1 can be realistically compared with a scope of
10,000.

While perspective will assist in keeping a scope consistent, it is still necessary to consistently index
the scope, most often by the number of gateways or gatekeepers. Gateways are access points to
physical and information property. A gatekeeper is any human acting as a gateway. A single
gateway point may lead to multiple gateway points, like a door which leads to a hall of elevators or
a single network port leading to multiple services, such as from a reverse proxy. For this reason, a
proper scope index consists of the smallest number of gateways which can be identified with
unique access points from the appointed perspective. Not all gateways may be interactive; some
may only be determined as part of the security presence by secondary or indirect evidence.
Additionally, it may be necessary to narrow the scope to a given segment, region, or location for
logistics reasons. For example, a network scope may be indexed by public IP addresses for the
internet-to-DMZ perspective, by private IP addresses for the internal-to-DMZ perspective, and by
MAC addresses for the internal-to-internal perspective. Each scope would be calculated
separately. However, due to the flexibility of the RAVs, it is possible to calculate all three
perspectives together for the big picture: a single RAV for the whole network. This is as simple as
adding up the scopes and the test values for each category and calculating the total.
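Combining perspectives this way is plain addition of the per-perspective counts. The following is a hypothetical illustration: the perspective names and all counts are invented, and the category names anticipate the OPSEC categories defined later.

```python
# Hypothetical per-perspective counts for one network; all numbers are invented.
perspectives = {
    "internet-to-DMZ":      {"scope": 12,  "visibility": 10,  "trust": 2,  "access": 30},
    "internal-to-DMZ":      {"scope": 12,  "visibility": 12,  "trust": 5,  "access": 44},
    "internal-to-internal": {"scope": 180, "visibility": 150, "trust": 20, "access": 310},
}

# The "big picture" input is the sum of each category over all perspectives;
# a single RAV is then calculated once on these totals.
combined = {}
for counts in perspectives.values():
    for category, count in counts.items():
        combined[category] = combined.get(category, 0) + count

print(combined)  # {'scope': 204, 'visibility': 172, 'trust': 27, 'access': 384}
```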

Table 3: RAV Categories


Value Types Descriptions

Operational Security The lack of security one must have to be interactive, useful, public, open, or available.
For example, limiting how a person buys goods or services from a store over a particular
channel is a method of security within the store's operations.

Loss Controls The assurance that the physical and information assets as well as the access gateways
and gatekeepers themselves are protected from various types of invalid interactions as
defined by the channel. For example, insuring the store in the case of fire is a loss
control that does not prevent the inventory from getting damaged or stolen but will
pay out equivalent value for the loss.

Security Limitations This is the current state of perceived and known limits for security and loss controls, both
verifiable and theoretical. For example, an old, rusted, crumbling lock used to secure
the gates of the store at closing time has an imposed security limitation: it provides only
a fraction of its original protection strength and is easily broken. Determining that it is
old and weak through visual verification is an identified security limitation. Determining
it is old and weak by breaking it with 100 kg of force and comparing it with a new lock
that requires 1000 kg of force shows a verified security limitation.


Operational Security
To measure the security of operations (OPSEC) requires the analysis and calculation of visibility,
trust, and access within the scope. Visibility is the number of gateways in the scope that can be
determined to exist by direct interaction, indirect information, or passive emanations. Trust is any
non-authenticated interaction between any of the gateways within the scope or outside of it.
Access is the number of points and primary types of interaction for each gateway. The calculation
of these three categories provides the OPSEC total, the OPSEC delta, and the OPSEC percentage
of the scope.

Table 4: Calculating OPSEC

OPSEC Categories Descriptions

Visibility Count each gateway only once, regardless of how many different ways we can know
it is there and regardless of whether it interacts. For example, the department to be
tested employs 50 people; however, only 38 of them are accessible from the test
perspective, and another 3 do not react to any contact attempts. This would make the
scope 38 and the visibility 35. It is generally unrealistic to have more targets visible than
there are targets in the scope; however, it may be possible.

Trust Count each gateway or gatekeeper allowing unauthenticated interaction within the
context of the scope. For example, a personnel test may reveal that the help desk
employees grant password resets for all calls coming from internal phones without
requesting identifying or authorizing information. Within this context, each help desk
employee who does this is counted as a trust for this scope. However, the same
cannot be held true for external calls; therefore, in that scope, the one with the
external-to-internal perspective, these same help desk employees are not counted as
trusts. Trusts are also counted per channel: these same help desk employees who
allow unauthenticated interaction with internal callers also respond the same way to
internally arriving e-mails. This is a trust on two separate channels and therefore 2 trusts
per help desk person who responds in this way.

Access This is different from visibility, where one is determining the number of existing gateways.
Here you must count each interaction point of the same type only once if the response
is the same. A computer that responds with a SYN/ACK to 5 of the TCP ports scanned
and with a RST to the rest is not said to have an access count of 65536 (including port
0). Since 65531 of the ports respond the same way to the same probe, we count an
access of 2: one for each type of response. However, if the 5 ports also respond with
services, and not just a SYN/ACK, then we count each individually for a total access
count of 6: one for the RSTs and 1 for each service response. If we change the
protocol, then we can add more access counts for each type of response (a
non-response cannot be assumed to be open and therefore cannot be counted as an
access). More importantly, with a protocol such as ICMP, changing the Type and
Code may change the service; therefore, you can have multiple counts under ICMP
as long as the Type and/or Code change is the catalyst for the response.

With personnel testing, this is much simpler. A person who responds to a query counts
as one access for all types of queries (all the different questions you may ask or
statements you may make count as the same type of response on the same channel).
Only a person who completely ignores the request by not acknowledging the channel
is not counted.

OPSEC Total This is calculated as the sum of visibility, trust, and access as a positive integer.

OPSEC Delta This is the total taken as a negative integer (* -1) to show the value detracted from the
scope.

OPSEC % of Scope This is the total divided by the scope and subtracted from 100 to show the percentage
of operations in this scope that is protected by security measures.
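The OPSEC arithmetic in Table 4 can be sketched as follows. This is only an interpretation: the "% of Scope" line is implemented literally as the total divided by the scope and subtracted from 100, and the example counts for trust and access are invented (the scope and visibility follow the table's example).

```python
def opsec_metrics(scope, visibility, trust, access):
    """Return the OPSEC Total, Delta, and % of Scope per Table 4."""
    total = visibility + trust + access   # OPSEC Total: sum as a positive integer
    delta = total * -1                    # OPSEC Delta: value detracted from the scope
    pct_of_scope = 100 - (total / scope)  # OPSEC % of Scope, read literally
    return total, delta, pct_of_scope

# Scope 38 and visibility 35 from the table's example; trust and access invented.
total, delta, pct = opsec_metrics(scope=38, visibility=35, trust=2, access=6)
print(total, delta, round(pct, 2))  # 43 -43 98.87
```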


Loss Controls
Loss controls are the 10 categories of authentication, non-repudiation, confidentiality, privacy,
indemnification, integrity, safety, usability, continuity, and alarm, which influence who, what, when,
where, why, and how assets can be accessed or utilized, as opposed to blocking or filtering
interactivity. Authentication is having both authorization and credentials (identification is required
for credentials) for access to the gateway. Non-repudiation prevents the source from denying its
role in any interactivity, regardless of whether access was obtained. Confidentiality is the assurance
that information or physical property displayed or exchanged between parties can be known only
to those parties. Privacy is the assurance that the method by which parties display or exchange
information or physical property is known only to those parties. Indemnification is the protection of
information and physical property, including gateways and gatekeepers, as assets by public law or
privately by insurance, to recoup the real and current value of the loss. Integrity is the protection of
information and physical property from undisclosed changes. Safety is the function whereby the
security mechanisms protect the interaction with and access to information, physical property,
gateways, or gatekeepers even in the event of failure. Usability is where security mechanisms for
access or interaction require no user-side interaction, option, or control. Continuity is where failure
cannot halt interaction with or through the gateway or gatekeepers or deny access to information
or physical property. Alarm is the notification that operational security measures or loss controls
have failed or have been circumvented.

Table 5: Calculating Loss Controls

Loss Controls Categories Descriptions

Authentication Count each type of authorization required to gain access. For example, if both a
special ID card and a thumb print scan are required to gain access, then you count
both. However, if access requires just one or the other, then you may only count
one.

Non-Repudiation Count each gateway or gatekeeper which only provides access with non-repudiation
mechanisms for all interactions, and count each system or process of non-repudiation
required separately. Take for example a building entrance with a camera where, to
enter, a guard requires those entering to sign a log for an assigned key card, and a
record of each gate the key card is used to open is logged. The camera on the front
of the building, the log, and the assigned key card are all providing non-repudiation on
access for that door. However, even if the door is sealed, the camera on the front of
the building is still providing non-repudiation for those approaching the building.
However, if none of those mechanisms are stored in a retrievable manner, then they
are not providing non-repudiation.

Confidentiality Count each method used by a gateway or gatekeeper to keep information or
property secret from undisclosed or unauthorized third parties. A typical method of
confidentiality is the non-disclosure process of applying encryption. However,
obfuscation is also a type of confidentiality. Do not confuse the technique with a flaw
in the technique.

Privacy Count each method used by a gateway or gatekeeper to keep secret the interaction
with or access to information or property from undisclosed or unauthorized third parties.
While “being private” is a common expression, the phrase is a bad example of privacy
as a loss control because it includes elements of confidentiality. When something is
done “in private,” only the doing is private, but the property itself may not be. A good
example of privacy is seen in movies about drug crimes. In the movies, the police may
have the suspect and may confirm, by the number of people who have bought, that
a drug is being sold, but they need to see the drugs and money change hands before
they have enough to lock away the dealer for life. While not realistic, the main point is
how the drug dealer uses the loss control of privacy to keep the deal secret.


Indemnification Count each method used to exact liability and insure compensation for all property
accessible through the gateways or gatekeepers. A basic example is a warning sign
threatening to prosecute trespassers. A more common type is property insurance. In a
scope of 200 computers, a blanket insurance policy against theft applies to all 200 and
therefore is a count of 200. Do not confuse the method with a flaw in the method. A
threat to prosecute without the ability or will to prosecute is still an indemnification
method, albeit one with a weakness.

Integrity Count each method used for each access point which can assure that the interaction
process has finality and cannot be changed, continued, redirected, or reversed
without it being known to the systems or parties involved. For example, a test of a file
hash will tell if that file has been changed. Man in the middle attacks are attacks
against integrity and protocols such as HTTPS will notify when the integrity does not
meet expected results and ask the user to confirm.

Safety Count each functional mechanism assuring that interaction or access does not lose
filtering or authentication under situations of total failure. For example, if a guard
controlling a door is knocked unconscious, the door must remain locked. The same
applies to a web page requiring a login and password: if the web server or the
database controlling logins is knocked off-line, then all authentication should deny
access rather than permit it.

Usability Count each gateway, gatekeeper, or access point which strictly does not allow
client-side or user discretion to disregard the implemented filters, restrictions, or loss
controls. Different from a weakness in the mechanism, this applies to a weakness
in the design or implementation. If a login can be made over HTTP as well as HTTPS but
requires the user to make that distinction, then it does not count toward usability.
However, if the implementation requires the secured mode by default, such as a PKI-
based internal messaging system, then it does count toward usability for that access
point.

Continuity Count each gateway, access point, or gatekeeper which assures that no interruption in
interaction or access is caused even under situations of total failure. For example, if the
entryway to a store becomes blocked, an alternate entryway is made accessible
before customers choose to turn around and leave. The same applies to a web server
that is knocked off-line: an alternate web server provides redundancy.

Alarm Count each gateway, access point, or gatekeeper which will document or notify when
the implemented filters, restrictions, or loss controls are broken or have had attempts
made to circumvent them. For example, count each server and service which a
network-based intrusion detection system monitors. Do not factor time into this count.
Access logs may count even if they are not used to send a notification alert
immediately when the failure or circumvention is attempted. However, logs which are
not designed to be used for such notifications, such as a counter of packets sent and
received, do not qualify as an alarm, as too little data is stored for such use.

Loss Controls Total This is calculated as the sum of all loss controls.

Loss Controls Delta This is calculated as the sum of all loss controls multiplied by 0.1 (each individual loss
control is worth one tenth of the whole, so that the complete set of loss controls
totals 1).

Loss Controls % of Op Sec This is the Loss Controls Delta divided by the Op Sec Total and multiplied by 100 to
make the percentage.

Loss Controls % of Scope This is the Loss Controls Delta divided by the Scope total and multiplied by 100 to make
the percentage.
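As a sketch, the four loss-control figures above can be computed from the ten category counts like this. The category counts, Op Sec Total, and scope in the example are invented for illustration.

```python
def loss_control_metrics(counts, opsec_total, scope):
    """Return Loss Controls Total, Delta, % of Op Sec, and % of Scope per Table 5."""
    lc_total = sum(counts.values())              # Loss Controls Total: sum of all loss controls
    lc_delta = lc_total * 0.1                    # each loss control is worth 1/10 of the whole
    pct_of_opsec = lc_delta / opsec_total * 100  # Loss Controls % of Op Sec
    pct_of_scope = lc_delta / scope * 100        # Loss Controls % of Scope
    return lc_total, lc_delta, pct_of_opsec, pct_of_scope

# Invented counts for some of the ten categories; absent categories count 0.
counts = {"authentication": 4, "non-repudiation": 1, "indemnification": 200,
          "integrity": 2, "alarm": 3}
lc_total, lc_delta, pct_opsec, pct_scope = loss_control_metrics(
    counts, opsec_total=43, scope=38)
print(lc_total, lc_delta)  # 210 21.0
```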

Security Limitations
To measure the current state of security with regard to known flaws within the security and loss
controls for the access gateways, the classifications of vulnerability, weakness, concern, exposure,
and anomaly are calculated as security limitations. A vulnerability is defined as a perceived flaw
within a mechanism that allows for privileged access to assets, where privileged means the access
can be either authenticated or unauthenticated and the authorization for the privilege has been

granted without identification or upon the acceptance of fraudulent identification. A weakness is
any misconfiguration, survivability fault, or usability fault that allows for privileged access or access
failure, or where the mechanism fails to meet stated security requirements, whether law or policy.
A concern is any visible gateway or interactive access point that is absent from policy and has no
clear business justification, regardless of whether it provides privileged or unprivileged access. An
exposure is a perceived flaw within a process or mechanism that allows for unprivileged access to
information concerning data, business processes, people, or infrastructure, generally used to gain
privileged access or knowledge of operational security implementations. Exposures are often
called information leaks. Finally, an anomaly is any unidentifiable or unknown element that is a
response to the tester's stimulus but has no known impact on security. This often refers to gathered
data which, as far as the tester can tell, makes no sense or serves no purpose, and which is
reported solely because it is a response that can be triggered and may be a sign of deeper
problems inaccessible to the tester.

Furthermore, these classifications of security limitations are divided into verified and identified
security limitations. A verified security limitation is one that the tester has proved to exist by
exploiting the limitation. However, not all security limitations can be exploited safely or in a timely
manner, and therefore they may only be referred to as identified security limitations. The difference
between identified and verified is more than just the removal of false positives and false negatives;
it also impacts the quality of the test by eliminating bias.

Table 6: Calculating Security Limitations

Security Limitations Categories Descriptions

Verified A security limitation is verified when the limitation itself has been exploited and is known
to be true. Failure to exploit, however, means only that the exploit did not work in this
test, and the reason for the failure must be analyzed with the rest of the data.

Identified A security limitation is identified when the limitation itself has been interpolated or
deduced by matching known security limitations with their targets. Identified limitations
cannot be said to be true.

Vulnerability Count separately each identified and each verified flaw which allows for privileged
access. For example, an RFID door-access mechanism that also unlocks the door
when a particular frequency is generated in rapid bursts near it is a vulnerability.

Weakness Count separately each identified and each verified misconfiguration, survivability fault,
or usability fault that allows for privileged access or access failure, or where the
mechanism fails to meet stated security requirements, whether law or policy. For
example, the website login page is accessible over HTTPS using SSLv2 when policy
states that only TLS 1.0 is allowed. A more common example is the discovery of a single
point of failure for a network.

Concern Count separately each identified and each verified visible gateway or interactive
access point that is absent from policy and has no clear business justification,
regardless of whether it provides privileged or unprivileged access. For example, a
door at the back of a building held open by a rock, through which employees who
step outside for a break can easily get back inside, is a Concern if it is absent from
policy and serves no clear business justification.

Exposure Count separately each identified and each verified perceived flaw within a process or
mechanism that allows for unprivileged access to information concerning data,
business processes, people, or infrastructure, generally used to gain privileged access
or gain knowledge on operational security implementations. For example, an exposure
could be something as simple as a list of manager names and phone numbers
readable on the security desk in the lobby. It could also be a server name, internal IP
address, or precise server type and version.


Anomaly Count separately each identified and each verified interaction that was not expected
or cannot be accounted for within the scope of the test and the controls of the test
network. For example, if repeated attempts to open a locked door cause a small but
noticeable electric shock from the lock, this could be classified as a verified anomaly.

Security Limitations Delta This shows the negative impact security limitations cause on the scope.

Security Limitations Total This is the sum of the security limitations divided by the scope to provide a value based
on a single item within the scope.

The calculation of the security limitations is independent by classification and verification. To
avoid using arbitrarily weighted values based on classification, the total value comes from the
Scope and the Op Sec Total divided by the Scope and the Loss Controls Total. For verified
Weaknesses, divide the value of verified Vulnerabilities by the Op Sec Percentage of Scope
and subtract it from the verified Vulnerabilities value. Repeat this for each successive verified
classification down to verified Anomaly.

Handling the identified security limitations requires the same calculation for Identified Vulnerability
as was done for Verified Weakness, continuing the trend downward to Identified Anomaly.

While the values are indeed weighted to provide a consistent metric, the weight is not dependent
upon outside values and allows for flexible calculations regardless of the size of the scope.

Actual Security
To measure the current state of operations with applied loss controls and discovered limitations, the
final calculation is required to define Actual Security.

Actual Security Categories Descriptions

Actual Delta The Actual Security delta is the sum of the Op Sec Delta and the Loss Controls Delta
minus the Security Limitations Delta. The Actual Delta is useful for comparing products
and solutions by estimating in advance the change (delta) the product or solution
would make in the scope.

Actual Security (Total) Actual Security is the true (actual) state of security, calculated as the sum of the Op
Sec Total divided by 100, the Loss Controls Delta divided by 100, and the Security
Limitations Total divided by 100, all subtracted from 100. Actual Security is the final total
as a percentage of the scope's security. This percentage is final and can be compared
with the totals of other scopes, industries, etc.
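A sketch of the final calculation, implemented literally as the two descriptions above read. The input values in the example are invented and carry no particular audit's meaning.

```python
def actual_security(opsec_total, opsec_delta, loss_controls_delta,
                    limitations_total, limitations_delta):
    """Return the Actual Delta and Actual Security (Total) as described above."""
    # Actual Delta: Op Sec Delta plus Loss Controls Delta, minus Limitations Delta.
    actual_delta = (opsec_delta + loss_controls_delta) - limitations_delta
    # Actual Security (Total): each component divided by 100, all subtracted from 100.
    actual_total = 100 - (opsec_total / 100
                          + loss_controls_delta / 100
                          + limitations_total / 100)
    return actual_delta, actual_total

delta, total = actual_security(opsec_total=43, opsec_delta=-43,
                               loss_controls_delta=21.0,
                               limitations_total=1.5, limitations_delta=4.0)
print(round(delta, 2), round(total, 3))  # -26.0 99.345
```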


Rules of Thumb
These are the rules of thumb for security testing and estimating testing time per channel. These rules
can be used to facilitate project management and to assist in estimating person hours as well as
the overall project time.

• In planning a security test, be sure to reserve approximately 4 person hours per person per
calendar week for research, development, and general maintenance.
• Total testing time should never exceed 3 months for a single test.
• Half the time spent testing will be needed for reporting.
• The report should be delivered a minimum of 3 days before the results are explained.
• The participants from the auditing organization should not outnumber the invited client
attendees at the workshop, except when there is only 1 attendee, in which case there may
be two representatives from the auditing organization.
• Of the participants from the auditing organization at a workshop, one should always be the
actual tester.

Table 7: Rules Of Thumb Calculations

Personnel
  Rules of Discovery:     HOURS = EMPLOYEES * CHANNELS * 0.15
  Rules of OSSTMM Audits: PERSON HOURS = VISIBILITY * ACCESS * 8

Physical
  Rules of Discovery:     HOURS = Total m2 / 4000
  Rules of OSSTMM Audits: PERSON HOURS = VISIBILITY m2 / 500

Telecommunications
  Rules of Discovery:     HOURS = SCOPE / 5 / 60 (local)
                          HOURS = SCOPE / 4 / 60 (VoIP)
                          HOURS = SCOPE / 2.5 / 60 (regional)
                          HOURS = SCOPE / 1.875 / 60 (international)
  Rules of OSSTMM Audits: PERSON HOURS = VISIBILITY * 20 / 60

Wireless Communications
  Rules of Discovery:     HOURS = Total m2 / 1000
  Rules of OSSTMM Audits: PERSON HOURS = VISIBILITY m2 / 250

Data Networks
  Rules of Discovery:     HOURS = (((20 + HOPS) * ADDRESSES) / (BANDWIDTH / 64 / 0.2)) / 60
  Rules of OSSTMM Audits: PERSON HOURS = (((150 + (HOPS * 15)) * VISIBILITY) / (BANDWIDTH / 64 / 0.65)) / 60

  • Testing time does not improve if BANDWIDTH is increased above 1 Mb in an OSSTMM Audit.
  • Discovery time must be added to this if VISIBILITY is an estimated number.
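The Data Networks row can be sketched as a pair of functions. The variable names are assumptions (HOPS, ADDRESSES, and VISIBILITY as counts; BANDWIDTH taken here in kb/s), and the 1 Mb audit cap follows the first note above:

```python
def data_network_discovery_hours(hops, addresses, bandwidth_kbps):
    """Discovery HOURS per Table 7: (((20+HOPS)*ADDRESSES)/(BANDWIDTH/64/0.2))/60."""
    return (((20 + hops) * addresses) / (bandwidth_kbps / 64 / 0.2)) / 60

def data_network_audit_person_hours(hops, visibility, bandwidth_kbps):
    """Audit PERSON HOURS per Table 7; bandwidth above 1 Mb does not help."""
    bandwidth_kbps = min(bandwidth_kbps, 1000)  # cap per the note above
    return (((150 + hops * 15) * visibility) / (bandwidth_kbps / 64 / 0.65)) / 60

# Invented example: 10 hops, 256 addresses, 50 visible targets, a 1 Mb/s link.
print(round(data_network_discovery_hours(10, 256, 1000), 2))    # 1.64
print(round(data_network_audit_person_hours(10, 50, 1000), 1))  # 10.4
```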


....
Developer edits stopped here.


Open Methodology License (OML)


© Creative Commons 2.5 Attribution-NonCommercial-NoDerivs 2005,
Institute for Security and Open Methodologies (ISECOM)

PREAMBLE

A methodology is a tool that details WHO, WHAT, WHICH, and WHEN. A methodology is intellectual
capital that is often protected strongly by commercial institutions. Open methodologies are
community activities which bring all ideas into one documented piece of intellectual property
which is freely available to everyone.

With respect to the GNU General Public License (GPL), this license is similar, with the exception of
the right for software developers to include the open methodologies under this license in
commercial software. This makes this license incompatible with the GPL.

The main concern this license covers for open methodology developers is that they receive proper
credit for contribution and development, and that they reserve the right to allow only free
publication and distribution, where the open methodology is not used in any commercially printed
material from which any monies are derived, whether in publication or distribution.

Special consideration is given to the Free Software Foundation and the GNU General Public
License for legal concepts and wording.

TERMS AND CONDITIONS

1. This license applies to any methodology or other intellectual tool (i.e., matrix, checklist, etc.)
which contains a notice placed by the copyright holder saying it is protected under the terms of this
Open Methodology License.

2. The Methodology refers to any such methodology or intellectual tool, or any such work based on
the Methodology. A "work based on the Methodology" means either the Methodology or any
derivative work under copyright law which contains the Methodology or a portion of it, either
verbatim or with modifications and/or translated into another language.

3. All persons may copy and distribute verbatim copies of the Methodology as received, in any
medium, provided that they conspicuously and appropriately publish on each copy an appropriate
copyright notice and the creator or creators of the Methodology; keep intact all the notices that
refer to this License and to the absence of any warranty; give any other recipients of the
Methodology a copy of this License along with the Methodology; and provide the location where
they can receive an original copy of the Methodology from the copyright holder.

4. No persons may sell this Methodology, charge for the distribution of this Methodology, or charge
for any medium of which this Methodology is a part, without explicit consent from the copyright
holder.

5. All persons may include this Methodology in part or in whole in commercial service offerings,
private or internal (non-commercial) use, or educational purposes without explicit consent from
the copyright holder, provided the service offerings or the private or internal use comply with
points 3 and 4 of this License.

6. No persons may modify or change this Methodology for republication without explicit consent
from the copyright holder.

7. All persons may utilize the Methodology or any portion of it to create or enhance commercial or
free software, and copy and distribute such software under any terms, provided that they also


meet all of these conditions:

a) Points 3, 4, 5, and 6 of this License are strictly adhered to.

b) Any reduction to or incomplete usage of the Methodology in the software must strictly and
explicitly state what parts of the Methodology were utilized in the software and which parts were
not.

c) All software using the Methodology must either cause the software, when started running, to
print or display an announcement of use of the Methodology, including an appropriate copyright
notice and a notice of how to view a copy of this License, or make clear provisions in another form,
such as in documentation or delivered open source code.

8. If, as a consequence of a court judgment or allegation of patent infringement or for any other
reason (not limited to patent issues), conditions are imposed on any person (whether by court
order, agreement or otherwise) that contradict the conditions of this License, they do not excuse
you from the conditions of this License. If said person cannot satisfy simultaneously his obligations
under this License and any other pertinent obligations, then as a consequence said person may not
use, copy, modify, or distribute the Methodology at all. If any portion of this section is held invalid or
unenforceable under any particular circumstance, the balance of the section is intended to apply
and the section as a whole is intended to apply in other circumstances.

9. If the distribution and/or use of the Methodology is restricted in certain countries either by patents
or by copyrighted interfaces, the original copyright holder who places the Program under this
License may add an explicit geographical distribution limitation excluding those countries, so that
distribution is permitted only in or among countries not thus excluded. In such case, this License
incorporates the limitation as if written in the body of this License.

10. The Institute for Security and Open Methodologies may publish revised and/or new versions of
the Open Methodology License. Such new versions will be similar in spirit to the present version, but
may differ in detail to address new problems or concerns.

NO WARRANTY

11. BECAUSE THE METHODOLOGY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE
METHODOLOGY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN
WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE METHODOLOGY "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE
RISK AS TO THE QUALITY AND PERFORMANCE IN USE OF THE METHODOLOGY IS WITH YOU. SHOULD
THE METHODOLOGY PROVE INCOMPLETE OR INCOMPATIBLE YOU ASSUME THE COST OF ALL
NECESSARY SERVICING, REPAIR OR CORRECTION.

12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY
COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY USE AND/OR REDISTRIBUTE THE
METHODOLOGY UNMODIFIED AS PERMITTED HEREIN, BE LIABLE TO ANY PERSONS FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF
THE USE OR INABILITY TO USE THE METHODOLOGY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR
DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY ANY PERSONS OR THIRD PARTIES OR A
FAILURE OF THE METHODOLOGY TO OPERATE WITH ANY OTHER METHODOLOGIES), EVEN IF SUCH
HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

