Seeing the Elephant can mean several things; the reasons I picked it:
Not seeing the need for better tracking via metrics in the security room
In ancient times, if an unprepared army saw an elephant, it ran! The same goes for many in
operations when it comes to metric collection
Story of the blind men describing an elephant: each one has a different story (or metrics)
Slide 2
Slide 3
A bit of background
Artis-Secure
Cigital
I started as a C/C++ developer in the 90s. This then morphed into a senior
engineer/architect role in the 2000s. I joined the Microsoft Consulting org in 2002 and
started making the move to secure development work by 2003. In 2005 I learned about the
MS-SDL and started using it in my projects. I had to change quite a few activities to enable
the SDL to work in a heterogeneous Agile work environment. By 2009 I was doing
governance on around 100 worldwide projects and had integrated the core SDL into the
Services Application Lifecycle Management process.
I joined Cigital in 2011 and was put in charge of creating an SSI at two major clients. I will
refer to these two in my examples later on.
My overall goal since 2009 has been to build metrics into the process of building up an SSI.
I've developed a methodology based on my experiences at clients and on my use of
several frameworks/standards/processes (incl. MSFT SDL, BSIMM, CLASP, the ISO 27001 series,
SAMM, CMM(I)).
Slide 4
touchpoints
Current Software Security Initiatives
SSIs fall into two categories: technical lifecycle frameworks and software maturity models.
I do not count ISO/IEC 27001:2005 (extension of a quality system), COBIT Security (regulatory
compliance), CRAMM (risk analysis), SABSA (business operational risk based architectures), or
CMMI (Capability Maturity Model Integration).
I have used the following SSIs:
CLASP, Touchpoints, MS-SDL: several specific SSIs for secure development
Advantage: very technical, very descriptive
Disadvantage: hardly any business-side guidance (governance, risk, compliance,
operations, strategic planning, etc.)
BSIMM, SAMM: maturity models for measuring implementation of the Build Security In
framework (BSIMM was jointly created by Cigital and Fortify)
Advantage: large pool of participants, descriptive of activities *within* the pool,
includes some reference to business practices
Disadvantage: does not record activities falling outside the BSI framework, owned by
one organization, very simple metrics, levels are not a tech tree with prerequisites, light on business practices
Slide 5
The technical models fall short on explaining how to measure the progress of the initiatives,
while the maturity models are made for simple measurement.
MS-SDL, Touchpoints, CLASP: several specific SSIs for secure development
Bug bars
MSFT could use common MS taxonomies, but (all projects) it was hard to nail down
which bugs were meaningful for the project; some security bugs were questioned by dev leads.
threat model measurements
Early MSFT - each TM was different, with no common components to measure against.
Later MSFT - consistent, but STRIDE and DREAD are not good for measuring
Cigital - non-measurable modelling processes
How do we measure a complete TM?
We may have a complete TM, but how do we measure inclusion of proper
threats and correct mitigations?
number of code review bugs
Competing taxonomies to use for metrics; which is correct?
Categorize by criticality, BUT this varies between organizations!
Criticality not a great apples/apples measure; different bug types bundled
together.
number of security testing issues
Same problems as the code review bugs above
SAMM: declarative Software Assurance Maturity Model, owned by OWASP
Simple yes/no measurements of whether an activity is being done, as per the
reviewer's estimate
Some of the BSIMM weaknesses exist here; the assessment may show out-of-band
activities which skew the metrics
BSIMM: maturity model to measure implementation of the Build Security In
framework, jointly created by Cigital and Fortify
Simple yes/no measurements of whether an activity is spotted by the reviewer (see the scoring sketch after this list)
Some activities get spotted out of band, which makes for strange, incomplete
maturity tree measurements
Metrics become weaker evidence because the model only shows the presence of an
activity, not proficiency
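Both SAMM and BSIMM reduce, at assessment time, to boolean observations per activity. Here is a minimal Python sketch of that kind of scoring; the practice and activity names below are illustrative only, not taken from either model, and the scoring rule is just "fraction of activities observed":

```python
from dataclasses import dataclass

@dataclass
class Activity:
    practice: str      # practice the activity belongs to
    name: str          # activity the assessor looks for
    observed: bool     # reviewer's yes/no judgement
    out_of_band: bool  # spotted outside the framework's expected place

# Hypothetical observations from a single assessment (names are illustrative).
observations = [
    Activity("Strategy & Metrics", "Publish security roadmap", True, False),
    Activity("Strategy & Metrics", "Track SSI spend", False, False),
    Activity("Code Review", "Run static analysis on all releases", True, True),
    Activity("Security Testing", "Fuzz external interfaces", False, False),
]

def practice_scores(obs):
    """Fraction of activities observed per practice (the 'simple yes/no' metric)."""
    totals, hits = {}, {}
    for a in obs:
        totals[a.practice] = totals.get(a.practice, 0) + 1
        hits[a.practice] = hits.get(a.practice, 0) + int(a.observed)
    return {p: hits[p] / totals[p] for p in totals}

print(practice_scores(observations))
# An out-of-band observation still counts as a bare "yes"; nothing in the score
# records where or how well the activity is done, which is the weakness noted above.
```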
Slide 6
CWE and CAPEC: globally accepted taxonomies that tie to each other and to CVE (see the tagging sketch after this list)
OWASP Top 10
Potentially changes each year; not stable enough.
DREAD
Everyone fought over what numbers to put up on a scale of 1-10
STRIDE
Good start but there is overlap
SECURITY FRAME
This breaks down top developer issues better
CWE
Globally used and frequently maintained
CAPEC
Attack taxonomy that ties directly to CWE and CVE
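As an illustration of how the taxonomies tie together, here is a minimal sketch of tagging one finding with both a CWE weakness and a CAPEC attack pattern. The finding record and counting helper are hypothetical; CWE-89 and CAPEC-66 are the well-known SQL injection entries, and the CVE field is left empty because findings only get one after public disclosure:

```python
# A hypothetical finding record tagged with cross-referencing taxonomy IDs.
finding = {
    "title": "Unparameterised SQL in login handler",
    "cwe": "CWE-89",      # weakness taxonomy (SQL Injection)
    "capec": "CAPEC-66",  # attack-pattern taxonomy, linked to CWE-89
    "cve": None,          # filled in only if the issue is published
    "security_frame": "Data Validation",
}

def count_by(findings, key):
    """Roll findings up by any taxonomy field - the apples/apples metric."""
    counts = {}
    for f in findings:
        counts[f[key]] = counts.get(f[key], 0) + 1
    return counts

print(count_by([finding], "cwe"))
```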
Slide 7
Lessons
It's very hard to measure anything in the real world when one cannot do an apples-to-apples
comparison.
Many of the processes lacked a common set of categories
It gets even harder when one mixes the technical models in with the maturity models.
Inexperience causes failures in nearly 60% of the cases
How does one measure competency with a given task?
There may be cultural differences
Unanticipated process slowdowns
Slide 8
Conclusions
Ensured that the technical models are part of the maturity models
Technical model is sub-part of SSI buckets/scorecard
Made a common set of SSI buckets to enable measurement
SOMC buckets
Balanced Scorecard
Measured cost efficiency
Measuring each step by 3s
Black & Scholes modified for costs
Slide 9
[Slide diagram: the OpenSAMM and BSIMM structures (CONSTRUCTION, VERIFICATION, DEPLOYMENT, INTELLIGENCE, SSDL TOUCHPOINTS) overlaid with additional business-oriented buckets: Training, Auditing, Operations Management, Strategic Planning, Financial Planning, Strategic Contacts, Ability to Project SSO Vision, Business Marketing, Security Anecdotes, Performance Incentives, SSG activity growth, Supplier Management. SSG = Software Security Group.]
The two maturity models I worked on have the same origins, but both are lacking in the more
business-oriented security processes. I overlaid a number of metrics for tracking the growth
of the security organization.
--- Progress in growing strategic contacts needed to be captured.
--- Internal marketing needed to be captured with extra processes (Projection and Business
marketing).
--- A maturing security group has the ability to positively influence other groups to follow its
guidance. This is captured with Performance Incentives metrics.
Under SSG activity growth, I also oriented several other security domains towards their
more common non-security processes.
--- Governance, Risk & Compliance are normally grouped
--- Auditing is normally separate
--- Operations management normally incorporates deployment activities
--- Strategic planning requires its own breakdown
--- Financial planning requires its own breakdown for any security group within an
organization
I added Supplier Management as a necessary domain for Security group activity.
Slide 10
Balanced Scorecard
[Slide diagram: the SSI buckets (Strategic Contacts, Business Marketing, Security Anecdotes, Performance Incentives, Auditing, Operations Management, Supplier Management, Strategic Planning, Financial Planning) mapped onto the Balanced Scorecard quadrants FINANCIAL, CUSTOMERS and INTERNAL BUSINESS, with the quadrant goals: (F) reduce security costs, SSG finance; (C) customer confidence, SSG organization strategy; (IL) security awareness, SSG data collection.]
The Balanced Scorecard can be used to help the security group provide key metrics to
management in a form that they understand. The standard Balanced Scorecard is broken
into four blocks: Financial, Internal Business, Learning and Customer (oriented metrics).
The SSI bucket metrics (previous page) are fed into the Balanced Scorecard as follows:
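A minimal Python sketch of that feed. The quadrant names come from the standard Balanced Scorecard and the quadrant goals from the slide; the particular bucket-to-quadrant assignments and the example metric values below are illustrative assumptions, since the exact mapping lives in the slide diagram rather than in these notes:

```python
# Balanced Scorecard quadrants with the goals noted on the slide
# ("(IL)" read here as Internal Business + Learning).
scorecard = {
    "Financial": {"goal": "Reduce security costs / SSG finance", "metrics": []},
    "Customer": {"goal": "Customer confidence / SSG organization strategy", "metrics": []},
    "Internal Business": {"goal": "Security awareness / SSG data collection", "metrics": []},
    "Learning": {"goal": "Security awareness / SSG data collection", "metrics": []},
}

# Illustrative assignments of SSI buckets to quadrants (an assumption,
# not the authoritative mapping from the slide).
bucket_to_quadrants = {
    "Strategic Contacts": ["Financial", "Customer"],
    "Business Marketing": ["Customer"],
    "Performance Incentives": ["Customer", "Financial"],
    "Auditing": ["Internal Business"],
    "Operations Management": ["Internal Business"],
    "Supplier Management": ["Customer", "Internal Business"],
    "Strategic Planning": ["Financial"],
    "Financial Planning": ["Financial"],
}

def feed(bucket, value):
    """Push one bucket metric into every quadrant it reports to."""
    for q in bucket_to_quadrants.get(bucket, []):
        scorecard[q]["metrics"].append((bucket, value))

feed("Auditing", 0.8)            # e.g. fraction of planned audits completed
feed("Financial Planning", 0.6)  # e.g. budget forecast accuracy
print(scorecard["Internal Business"]["metrics"])
```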
Slide 11
Data Validation
Authentication
Authorisation
Configuration
Sensitive Data
Session
Cryptography
Exception
Logging
The Security Frame lists the nine most common areas in which developers make mistakes. It was
created by the Microsoft patterns & practices team. It provides a way to tag security work
throughout the secure engineering lifecycle (a small tagging sketch follows the definitions below).
Data Validation: vetting data before it gets consumed
Authentication: Who are you?
Authorisation: Are you allowed access to this particular area?
Configuration: What are the system dependencies?
Sensitive Data: PII? PCI data? Secrets?
Session: How are related two-party communications managed?
Cryptography: key generation, key management
Exception: How are unexpected errors handled?
Logging: Who did what and when?
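A minimal Python sketch of that tagging; the work-item records and counting helper are hypothetical, and only the nine category names come from the slide:

```python
from collections import Counter

# The nine Security Frame categories from the slide.
SECURITY_FRAME = [
    "Data Validation", "Authentication", "Authorisation", "Configuration",
    "Sensitive Data", "Session", "Cryptography", "Exception", "Logging",
]

# Hypothetical security work items tagged as they move through the lifecycle.
work_items = [
    {"id": 101, "phase": "threat model",  "frame": "Authentication"},
    {"id": 102, "phase": "code review",   "frame": "Data Validation"},
    {"id": 103, "phase": "code review",   "frame": "Data Validation"},
    {"id": 104, "phase": "security test", "frame": "Cryptography"},
]

def frame_counts(items):
    """Count tagged work per Security Frame category, so the same category can
    be compared across threat modelling, code review and security testing."""
    counts = Counter(i["frame"] for i in items if i["frame"] in SECURITY_FRAME)
    return {cat: counts.get(cat, 0) for cat in SECURITY_FRAME}

print(frame_counts(work_items))
```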
Slide 12
Data Validation
Category - Injection (Injecting Control Plane content through the Data Plane) - (152)
Category - Abuse of Functionality - (210)
Authentication
Authorisation
Configuration
Sensitive Data
Session
Cryptography
Exception
Logging
Slide 13
[Slide annotations around the modified Black & Scholes equation:
e = 2.71828... (Euler's number)
Normal_Distribution = standard normal distribution
Consultancy_hr = consultancy cost per hour for experts brought in to fix issues
Project_Wage = cost per hour given the project budget
Project_Security_Risk = 0 to 1 measurement of technical SSI competence
The cost of options goes up with time and volatility; likewise, the fix cost of a security issue is higher when the code is older and the risk from immature secure engineering increases.]
Here's a brief overview of the equation:
Volatility_Over_Time = Project_Security_Risk * SQRT(Project_Days_Used / 365)
Project_Days_Used = Project_Current_Date - Project_Incept_Date
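A minimal Python sketch of those two definitions, with a standard Black-Scholes call-price helper added to show where the volatility term would plug in. Note that Volatility_Over_Time corresponds to sigma * sqrt(T) in the standard formula, so the annualised sigma is Project_Security_Risk itself; mapping Project_Wage and Consultancy_hr onto the option's spot and strike is my assumption, not something these notes specify, and the example numbers are illustrative only:

```python
from datetime import date
from math import exp, log, sqrt
from statistics import NormalDist

def project_days_used(incept: date, current: date) -> int:
    # Project_Days_Used = Project_Current_Date - Project_Incept_Date
    return (current - incept).days

def volatility_over_time(project_security_risk: float, days_used: int) -> float:
    # Volatility_Over_Time = Project_Security_Risk * SQRT(Project_Days_Used / 365)
    # (this is sigma * sqrt(T) in Black-Scholes terms, with sigma = the risk score)
    return project_security_risk * sqrt(days_used / 365)

def black_scholes_call(spot: float, strike: float, sigma: float, years: float,
                       rate: float = 0.0) -> float:
    """Standard Black-Scholes call price. Treating Project_Wage as 'spot' and
    Consultancy_hr as 'strike' is an assumption, not stated in the notes."""
    n = NormalDist()
    d1 = (log(spot / strike) + (rate + sigma ** 2 / 2) * years) / (sigma * sqrt(years))
    d2 = d1 - sigma * sqrt(years)
    return spot * n.cdf(d1) - strike * exp(-rate * years) * n.cdf(d2)

# Illustrative numbers only, not taken from the slides:
days = project_days_used(date(2009, 10, 26), date(2011, 3, 1))
risk = 0.3  # Project_Security_Risk, 0..1
print(round(volatility_over_time(risk, days), 3))
print(round(black_scholes_call(spot=200.0, strike=200.0, sigma=risk, years=days / 365), 2))
```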
Slide 14
Project | Date Received
Ministerstvo zdravotnictva SR | 26/10/2009
GIE - SHIFT-10-01 | 08/10/2009
PetroC OMS Phase I Extend | 10/05/2010
PetroC OMS Maintenance Project | 07/11/2010
Internet Banking Client | 27/09/2010
DWH Migration (Monitoring) | 11/08/2010
TIB-Corporate Internet Banking | 22/02/2010
RWE Smart Home | 27/10/2009
LMI - Domain Awareness System | 31/07/2009
Vita Phase II | 18/10/2009
Smart Home - 2.PQR | 06/12/2010
CBS-BOC | 08/02/2010
SmartHome V2 | 27/10/2010
Meo at PC -- Passagem a Producao | 08/12/2009
SHaS BPM Platform Establishment Project | 28/08/2009
(Example computed values visible on the slide: 0.085 and $18.73.)
In this example you can see how a higher project security risk yields a higher cost to fix an
issue, and a lower project security risk yields a much lower cost to fix. The assumption is
that the hourly cost per team member is USD 200.
The Dollar_Cost_of_Risk represents the cost per hour of finding and fixing a security issue,
given the state of the project's secure engineering. To put it another way, if
The security requirements were missing or incomplete, it would be difficult to
articulate the security controls needed
The threat model was missing or poor, it would miss threats and mitigations, which would
make it more difficult to focus on dangerous code
The code review was incomplete or non-existent, it would yield potentially dangerous
code
The security testing was missing, verification coverage wouldn't exist
In short, the result would be dangerous code to which it is difficult to introduce fundamental changes.
The cost of fixing such code would be high.
This model assumes all bugs are the same, but it can be modified for the criticality of the
bug. This can be done by changing the Consultancy_hr cost to reflect the cost of bringing an
expert in to fix the issue (a small sketch follows).
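A minimal sketch of that criticality adjustment, continuing the Python sketch from Slide 13; the multipliers are purely illustrative assumptions and do not come from the slides:

```python
# Illustrative criticality multipliers applied to the expert fix rate (Consultancy_hr).
CRITICALITY_MULTIPLIER = {"low": 0.5, "medium": 1.0, "high": 2.0, "critical": 4.0}

def consultancy_rate(base_rate_per_hour: float, criticality: str) -> float:
    """Scale the expert hourly rate by bug criticality before it feeds
    into the cost-of-risk calculation."""
    return base_rate_per_hour * CRITICALITY_MULTIPLIER[criticality]

print(consultancy_rate(200.0, "high"))  # 400.0
```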
Slide 15
Thank you!
Geoffrey Hill
Artis Secure Ltd.
geoff-h@artis-secure.com
Feb 28, 2014
Seeing the Elephant can mean several things; the reasons I picked it:
Seeing the elephant of poor data collection in the security room
In ancient times, if an unprepared army saw an elephant, it ran! The same goes for many in ops and
metric collection
Story of the blind men describing an elephant: each one has a different story (or metrics)