
Slide 1

Seeing the Elephant


Using collected data points to design and roll out software security initiatives
Geoffrey Hill
Artis Secure Ltd.
Feb 28, 2014

Seeing the Elephant can mean several things; the reasons I picked it:
Not seeing the need for better tracking via metrics in the security room
In ancient times, if an unprepared army saw an elephant, it ran! The same happens with many people when faced with operations and metric collection
The story of the blind people describing an elephant: each has a different story (or set of metrics)

Slide 2

How I plan to run my talk


First I'm going to tell you who I am
Then I'm going to give a quick background on the Software
Security Initiative (SSI) frameworks I used
Then I'm going to talk about what I tried
. . . what failed (and how I had to fix it)
. . . and what worked
And I'll finish with a few questions

Slide 3

A bit of a background

Artis-Secure
Cigital
I started as a C/C++ developer in the 90s. This then morphed into a senior
engineer/architect role in the 2000s. I joined the Microsoft Consulting organization in 2002 and
started making the move to secure development work by 2003. In 2005 I learned about the
MS-SDL and started using it in my projects. I had to change quite a few activities to enable
the SDL to work in a heterogeneous Agile work environment. By 2009 I was doing
governance on around 100 worldwide projects and had integrated the core SDL into the
Services Application Lifecycle Management process.
I joined Cigital in 2011 and was put in charge of creating an SSI at two major clients. I will
refer to these two in my examples later on.
My overall goal since 2009 has been to build metrics into the process of building up an SSI.
I've developed a methodology based on my experiences at clients and on my use of
several frameworks/standards/processes (incl. the MSFT SDL, BSIMM, CLASP, the ISO 27001 series,
SAMM, CMM(I)).

Slide 4

touchpoints
Current Software Security Initiatives

SSIs fall into 2 categories: technical lifecycle frameworks and software maturity models.
I do not count ISO/IEC 27001:2005 (an extension of a quality system), COBIT Security (regulatory
compliance), CRAMM (risk analysis), SABSA (business operational risk-based architectures),
or CMMI (Capability Maturity Model Integration).
I have used the following SSIs:
CLASP, Touchpoints, MS-SDL: specific SSIs for secure development
Advantage: very technical, very descriptive
Disadvantage: hardly any business-side guidance (governance, risk, compliance,
operations, strategic planning, etc.)
BSIMM, SAMM: maturity models for measuring the implementation of Build Security In
style frameworks (BSIMM was jointly created by Cigital and Fortify)
Advantage: large pool of participants, descriptive of activities *within* the pool,
includes some reference to business practices
Disadvantage: does not record activities falling outside the BSI framework, owned by
one organization, very simple metrics, levels are not a tech tree with prerequisites, light on business practices

Slide 5

Bug bars: but *which* bugs were meaningful to teams?


Measuring Threat Models: CIA? STRIDE/DREAD? What else?
Non-measurable models.
What IS a complete Threat Model?

Code review and testing: many competing taxonomies cause confusion


Varying criticality measures by group (security, dev, ops)
Criticality is not a great apples-to-apples measure

Maturity Model issues: interviewer bias


Measured only presence of activity, not proficiency
Out-of-band activities: what to do?

How did I measure these Initiatives?

The technical models fall short on explaining how to measure the progress of the initiatives,
while the maturity models are made for simple measurement.
MS-SDL, Touchpoints, CLASP: specific SSIs for secure development
Bug bars
MSFT could use common MS taxonomies, but
(all projects) it was hard to nail down which bugs were meaningful for the
project. Some security bugs were questioned by dev leads.
Threat model measurements
Early MSFT: each TM was different and there were no common components to
measure against.
Later MSFT: consistent, but STRIDE and DREAD are not good for measuring.
Cigital: non-measurable modelling processes.
How do we measure a complete TM?
We may have a complete TM, but how do we measure inclusion of the proper
threats and correct mitigations?
Number of code review bugs
Competing taxonomies to use for metrics; which is correct?
Categorize by criticality, BUT this varies between organizations!
Criticality is not a great apples-to-apples measure; different bug types get bundled
together.
Number of security testing issues
Same problems as the code review issues
SAMM: declarative Software Assurance Maturity Model owned by OWASP
Simple yes/no measurements of whether an activity is being done, as per the
reviewer's estimate

Some of the BSIMM weaknesses exist here; the assessment may show out-of-band
activities which skew the metrics
BSIMM: maturity model to measure implementation of the Build Security In
framework, jointly created by Cigital and Fortify
Simple yes/no measurements of whether an activity is spotted by the reviewer
Some activities get spotted out of band, which makes for strange, incomplete
maturity tree measurements
Metrics become weaker evidence because the model only shows presence of an
activity, not proficiency

Slide 6

OWASP Top 10: not stable, could change year on year


DREAD: everyone fought over what rank to give each item
STRIDE: overlap between some of the elements, confusing measurements
Patterns & Practices Security Frame: 9 common areas of developer security confusion
Used across the SDLC to measure apples to apples
Easy to teach developers
A simple metric to classify security requirements, architecture and bugs by

CWE and CAPEC: globally accepted; the taxonomies tie to each other and to CVE

What about security taxonomies for measuring?

OWASP Top 10
Potentially changes each year; not stable enough.
DREAD
Everyone fought over what numbers to put up, on a scale of 1-10
STRIDE
A good start, but there is overlap
SECURITY FRAME
This breaks down top developer issues better
CWE
Globally used and frequently maintained
CAPEC
Attack taxonomy that ties directly to CWE and CVE

Slide 7

Lessons I learned (the hard way: pain, pain, pain)

Lessons
It's very hard to measure anything in the real world when one cannot do an apples-to-apples
comparison.
Many of the processes lacked a common category.
It gets even harder when one mixes the technical models in with the maturity models.
Inexperience causes failures in nearly 60% of the cases.
How does one measure competency with a given task?!?
There may be cultural differences
Unanticipated process slowdowns

Slide 8

Ensured the technical models fed into the maturity models


Made a set of buckets common to all SSIs
Mapped both maturity models to common IT business activities
Balanced Scorecard

Measured efficiency of work


Security Frame across the SDLC
CAPEC for Threat Models
Modified Black & Scholes options pricing model for the cost of fixing bugs

and the Conclusions that I came to

Conclusions
Ensured that the technical models are part of the maturity models
The technical model is a sub-part of the SSI buckets/scorecard
Made a common set of SSI buckets to enable measurement
SOMC buckets
Balanced Scorecard
Measured cost efficiency
Measuring each step by 3s
Black & Scholes modified for costs

Slide 9

Mapping of SSI buckets to OpenSAMM and BSIMM practices (SSG = Software Security Group):

Bucket | OpenSAMM | BSIMM
Education & Guidance | Education & Guidance | Training
Governance, Risk & Compliance (GRC) | Policy & Compliance | Policy & Compliance
Auditing | Policy & Compliance | Policy & Compliance
SSG quality gates (touchpoints) | VERIFICATION | SSDL TOUCHPOINTS
Operations Management | DEPLOYMENT | DEPLOYMENT
SSO Core Competencies | CONSTRUCTION | INTELLIGENCE
Strategic Planning | Strategy & Metrics | Strategy & Metrics
Financial Planning | Strategy & Metrics | Strategy & Metrics

SSG organizational growth buckets (no direct OpenSAMM/BSIMM counterpart): Strategic Contacts, Ability to Project SSO Vision, Business Marketing, Security Anecdotes, Performance Incentives
SSG activity growth addition: Supplier Management

Common SSI buckets

The two maturity models I worked on have the same origins but both are lacking in the more
business-oriented security processes. I overlaid a number of metrics for checking the
security organizational growth.
--- Progress in growing strategic contacts needed to be captured.
--- Internal marketing needed to be captured with extra processes (Projection and Business
marketing).
--- A maturing security group has the ability to positively influence other groups to follow its
guidance. This is captured with Performance Incentives metrics.
Under SSG activity growth, I also oriented several other security domains towards their
more common non-security processes.
--- Governance, Risk & Compliance are normally grouped
--- Auditing is normally separate
--- Operations management normally incorporates deployment activities
--- Strategic planning requires its own breakdown
--- Financial planning requires its own breakdown for any security group within an
organization
I added Supplier Management as a necessary domain for Security group activity.

Slide 10

SSG organizational growth buckets mapped to the Balanced Scorecard:
Strategic Contacts: FINANCIAL
Ability to Project SSO Vision: CUSTOMERS
Business Marketing: CUSTOMERS
Security Anecdotes: INNOVATION & LEARNING
Performance Incentives: CUSTOMERS

SSG activity growth buckets mapped to the Balanced Scorecard:
Governance, Risk & Compliance (GRC): FINANCIAL
Auditing: INTERNAL BUSINESS
SSO quality gates (touchpoints): INTERNAL BUSINESS
Operations Management: INTERNAL BUSINESS
Supplier Management: CUSTOMERS
SSO Core Competencies: INTERNAL BUSINESS
Strategic Planning: FINANCIAL
Financial Planning: FINANCIAL

The four Balanced Scorecard blocks (with Strategy at the centre):
(F) Financial: reduce security costs, SSG finance
(C) Customers: customer confidence, SSG organization
(IB) Internal Business: tech & MM metrics
(IL) Innovation & Learning: security awareness, SSG data collection

Balanced Scorecard

The Balanced Scorecard can be used to help the security group provide key metrics to
management in a form that they understand. The standard Balanced Scorecard is broken
into 4 blocks: Financial, Internal Business, Innovation & Learning, and Customer-oriented metrics.
The SSI bucket metrics (previous slide) are fed into the Balanced Scorecard as shown in the mapping above.
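
As an illustration only (not material from the talk), here is a minimal Python sketch of how the bucket-to-scorecard mapping above could be held as data and used to group per-bucket metric scores under the four blocks; the scoring function, the example scores and the 0-3 scale are assumptions.

# Minimal sketch: group SSI bucket metrics under Balanced Scorecard blocks.
# The mapping follows the slide above; the example scores are made up.
from collections import defaultdict

BUCKET_TO_SCORECARD = {
    "Strategic Contacts": "FINANCIAL",
    "Ability to Project SSO Vision": "CUSTOMERS",
    "Business Marketing": "CUSTOMERS",
    "Security Anecdotes": "INNOVATION & LEARNING",
    "Performance Incentives": "CUSTOMERS",
    "Governance, Risk & Compliance (GRC)": "FINANCIAL",
    "Auditing": "INTERNAL BUSINESS",
    "SSO quality gates (touchpoints)": "INTERNAL BUSINESS",
    "Operations Management": "INTERNAL BUSINESS",
    "Supplier Management": "CUSTOMERS",
    "SSO Core Competencies": "INTERNAL BUSINESS",
    "Strategic Planning": "FINANCIAL",
    "Financial Planning": "FINANCIAL",
}


def scorecard_view(bucket_scores: dict[str, float]) -> dict[str, list[tuple[str, float]]]:
    """Group per-bucket metric scores under their Balanced Scorecard block."""
    view: dict[str, list[tuple[str, float]]] = defaultdict(list)
    for bucket, score in bucket_scores.items():
        view[BUCKET_TO_SCORECARD.get(bucket, "UNMAPPED")].append((bucket, score))
    return dict(view)


# Example with made-up maturity scores (the 0-3 scale is an assumption):
print(scorecard_view({"Auditing": 1.5, "Strategic Planning": 2.0}))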

Slide 11

Data Validation
Authentication

Authorisation
Configuration
Sensitive Data
Session
Cryptography
Exception
Logging

Security Frame for continuity

The Security Frame, created by the Patterns & Practices team at Microsoft, lists the 9 most
common areas in which developers make security mistakes. It provides a way to tag security work
throughout the secure engineering lifecycle (a small tagging sketch follows the list below).
Data Validation: vetting data before it gets consumed
Authentication: who are you?
Authorisation: are you allowed access to this particular area?
Configuration: what are the system dependencies?
Sensitive Data: PII? PCI data? Secrets?
Session: how are related two-party communications managed?
Cryptography: key generation, key management
Exception: how are unexpected errors handled?
Logging: who did what and when?
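
To make the tagging idea concrete, here is a minimal Python sketch (my own illustration, not material from the talk) of labelling findings from different SDLC phases with Security Frame categories and counting them for an apples-to-apples view; the Finding structure and the example findings are hypothetical.

# Minimal sketch: tag security work items with Security Frame categories so
# requirements, threats, code-review findings and test issues can be counted
# on an apples-to-apples basis across the SDLC.
from collections import Counter
from dataclasses import dataclass

SECURITY_FRAME = [
    "Data Validation", "Authentication", "Authorisation", "Configuration",
    "Sensitive Data", "Session", "Cryptography", "Exception", "Logging",
]


@dataclass
class Finding:
    sdlc_phase: str      # e.g. "requirements", "threat model", "code review", "testing"
    frame_category: str  # one of SECURITY_FRAME
    description: str


def frame_counts(findings: list[Finding]) -> Counter:
    """Count findings per Security Frame category, regardless of SDLC phase."""
    return Counter(f.frame_category for f in findings
                   if f.frame_category in SECURITY_FRAME)


# Hypothetical findings used purely for illustration:
findings = [
    Finding("code review", "Data Validation", "unvalidated user input in search form"),
    Finding("threat model", "Authentication", "no lockout after repeated failed logins"),
    Finding("testing", "Data Validation", "SQL injection in report export"),
]
print(frame_counts(findings))  # Counter({'Data Validation': 2, 'Authentication': 1})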

Slide 12

Data Validation
  Category - Injection (Injecting Control Plane content through the Data Plane) - (152)
  Category - Abuse of Functionality - (210)
  Category - Data Structure Attacks - (255)
  Category - Time and State Attacks - (172)
  Category - Probabilistic Techniques - (223)
Authentication
  Category - Exploitation of Authentication - (225)
  Category - Spoofing - (156)
Authorisation
  Category - Exploitation of Privilege/Trust - (232)
Configuration
  Category - Physical Security Attacks - (436)
  Category - Resource Manipulation - (262)
  Attack Pattern - Supply Chain Attacks - (437)
Sensitive Data, Session
  Category - Data Leakage Attacks - (118)
Cryptography
  [Category - Exploitation of Authentication - (225)]
Exception
  Category - Resource Depletion - (119)
  [Category - Probabilistic Techniques - (223)]
Logging
  Attack Pattern - Network Reconnaissance - (286)

Mapping to CAPEC to allow for global use and flexibility

1000 - Mechanism of Attack


Category - Data Leakage Attacks - (118)
Category - Resource Depletion - (119)
Category - Injection (Injecting Control Plane content through the Data Plane) - (152)
Category - Spoofing - (156)
Category - Time and State Attacks - (172)
Category - Abuse of Functionality - (210)
Category - Probabilistic Techniques - (223)
Category - Exploitation of Authentication - (225)
Category - Exploitation of Privilege/Trust - (232)
Category - Data Structure Attacks - (255)
Category - Resource Manipulation - (262)
Category - Physical Security Attacks - (436)
Attack Pattern - Network Reconnaissance - (286)
Attack Pattern - Social Engineering Attacks - (403)
Attack Pattern - Supply Chain Attacks - (437)
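
A small Python sketch of how the Security Frame to CAPEC mapping above might be held as data for reporting. The dictionary follows the slide's groupings (the slide lists Data Leakage Attacks (118) under Sensitive Data and Session together, so it appears under both keys here); the helper function is an assumption of mine.

# Minimal sketch: encode the slide's Security Frame -> CAPEC mapping as data, so
# findings tagged with a frame category can also be reported against the globally
# used CAPEC "Mechanisms of Attack" (view 1000) categories. IDs come from the slide.
FRAME_TO_CAPEC = {
    "Data Validation": [152, 210, 255, 172, 223],
    "Authentication": [225, 156],
    "Authorisation": [232],
    "Configuration": [436, 262, 437],
    "Sensitive Data": [118],
    "Session": [118],
    "Cryptography": [225],
    "Exception": [119, 223],
    "Logging": [286],
}


def capec_ids_for(frame_category: str) -> list[int]:
    """Return the CAPEC category/attack-pattern IDs associated with a frame category."""
    return FRAME_TO_CAPEC.get(frame_category, [])


print(capec_ids_for("Configuration"))  # [436, 262, 437]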

Slide 13

Modified Black & Scholes options model

Dollar_Cost_of_Risk = Consultancy_hr * delta - Consultancy_hr * e^(-0.01 * Project_Days_Used/365) * Normal_Distribution(d1 - Volatility_Over_Time)

Options are financial rights, not obligations; security issues are similar (there is no obligation to fix them).
The cost of an option goes up with time and volatility; the cost of fixing a security issue is higher when the code is older and the risk from immature secure engineering increases.

Here is a brief overview of the terms in the equation:
e = 2.71828...
Normal_Distribution = standard normal distribution
Consultancy_hr = consultancy costs per hour (potential costs for experts to fix)
Project_Wage = costs per hour given the project budget
Project_Security_Risk = 0-to-1 measurement of technical SSI competence
Volatility_Over_Time = Project_Security_Risk * SQRT(Project_Days_Used/365)
Project_Days_Used = Project_Current_Date - Project_Incept_Date
d1 = (LN(Consultancy_hr/Project_Wage) + ((0.01 + Project_Security_Risk^2) / 2) * (Project_Security_Risk/365)) / Volatility_Over_Time
delta = Normal_Distribution(d1)

Measuring the potential cost of fixing issues

Modified Black & Scholes


The financial model measures the value of options under incomplete information. It states
that riskier/more volatile markets yield higher option values.
Security issues are similar:
There is an option to fix them, not an obligation
The cost of fixing them (i.e. buying the option) is higher when the technical risk from the SDLC
is higher and the code phase is more advanced

Dollar_Cost_of_Risk = Consultancy_hr * delta - Consultancy_hr * e^(-0.01 * Project_Days_Used/365) * Normal_Distribution(d1 - Volatility_Over_Time)

e = 2.71828...
Normal_Distribution = standard normal distribution
Consultancy_hr = consultancy costs per hour
Project_Wage = costs per hour given the project budget
Project_Security_Risk = 0-to-1 measurement of technical SSI competence
Volatility_Over_Time = Project_Security_Risk * SQRT(Project_Days_Used/365)
Project_Days_Used = Project_Current_Date - Project_Incept_Date
d1 = (LN(Consultancy_hr/Project_Wage) + ((0.01 + Project_Security_Risk^2) / 2) * (Project_Security_Risk/365)) / Volatility_Over_Time
delta = Normal_Distribution(d1)
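
For illustration, a minimal Python sketch of the Dollar_Cost_of_Risk calculation, following the definitions above as written; the function names and the example consultancy and wage figures are assumptions, and the output is not intended to reproduce the table on the next slide.

# Minimal sketch of the modified Black & Scholes cost-of-risk formula above.
# The example parameter values (consultancy_hr, project_wage, dates) are assumed.
import math
from datetime import date


def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def dollar_cost_of_risk(consultancy_hr: float,
                        project_wage: float,
                        project_security_risk: float,
                        project_incept_date: date,
                        project_current_date: date) -> float:
    """Per-hour dollar cost of finding and fixing a security issue,
    using the modified Black & Scholes terms defined on the slide."""
    project_days_used = (project_current_date - project_incept_date).days
    t = project_days_used / 365.0
    volatility_over_time = project_security_risk * math.sqrt(t)
    d1 = (math.log(consultancy_hr / project_wage)
          + ((0.01 + project_security_risk ** 2) / 2.0)
          * (project_security_risk / 365.0)) / volatility_over_time
    delta = normal_cdf(d1)
    return (consultancy_hr * delta
            - consultancy_hr * math.exp(-0.01 * t)
            * normal_cdf(d1 - volatility_over_time))


# Example (assumed figures): USD 200/hr consultancy, USD 80/hr project wage,
# a risky project (0.85) vs. a more mature one (0.10), both one year old.
if __name__ == "__main__":
    for risk in (0.85, 0.10):
        cost = dollar_cost_of_risk(200.0, 80.0, risk,
                                   date(2013, 2, 28), date(2014, 2, 28))
        print(f"Project_Security_Risk={risk:.2f} -> Dollar_Cost_of_Risk=${cost:.2f}")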

Slide 14

Project | Date Received | Project Security Risk | Dollar Cost of Risk
Ministerstvo zdravotnictva SR | 26/10/2009 | 0.854 | $126.89
GIE - SHIFT-10-01 | 08/10/2009 | 0.842 | $126.11
PetroC OMS Phase I Extend | 10/05/2010 | 0.894 | $124.89
PetroC OMS Maintenance Project | 07/11/2010 | 0.894 | $118.23
Internet Banking Client | 27/09/2010 | 0.842 | $114.26
DWH Migration (Monitoring) | 11/08/2010 | 0.781 | $109.25
TIB-Corporate Internet Banking | 22/02/2010 | 0.644 | $98.41
RWE Smart Home | 27/10/2009 | 0.527 | $85.92
LMI - Domain Awareness System | 31/07/2009 | 0.469 | $79.75
Vita Phase II | 18/10/2009 | 0.469 | $77.96
Smart Home - 2.PQR | 06/12/2010 | 0.527 | $74.91
CBS-BOC | 08/02/2010 | 0.439 | $70.96
SmartHome V2 | 27/10/2010 | 0.436 | $64.25
Meo at PC -- Passagem a Producao | 08/12/2009 | 0.356 | $60.09
SHaS BPM Platform Establishment Project | 28/08/2009 | 0.085 | $18.73

Here are some tracking examples from my previous work

In this example you can see how a higher project security risk yields a higher cost to fix an
issue, and a lower project security risk yields a much lower cost to fix. The assumption is
that the hourly cost per team member is USD 200.
The Dollar_Cost_of_Risk represents the cost per hour of finding and fixing a security issue,
given the state of the project's secure engineering. To put it another way, if:
The security requirements were missing or incomplete, it would be difficult to
articulate the security controls needed
The threat model was missing or poor, it would miss threats and mitigations, which would
make it more difficult to focus on dangerous code
The code review was incomplete or non-existent, it would yield potentially dangerous
code
The security testing was missing, verification coverage wouldn't exist
In short, it would be dangerous code that was difficult to introduce fundamental changes to.
The cost of fixing such code would be high.
This model assumes all bugs are the same, but it can be modified for the criticality of a
bug. This can be done by changing the Consultancy_hr cost to reflect the cost of bringing an
expert in to fix the issue (a short sketch of this adjustment follows).
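
A short illustrative sketch of that criticality adjustment, assuming the dollar_cost_of_risk function from the Slide 13 sketch is defined in the same module; the hourly rates per criticality level, the project wage and the example dates are assumptions.

# Minimal sketch: vary Consultancy_hr by bug criticality, reusing dollar_cost_of_risk
# (and its imports) from the earlier Slide 13 sketch in the same module.
from datetime import date

CRITICALITY_RATES = {"low": 150.0, "medium": 200.0, "high": 350.0}  # USD/hr, assumed

for severity, rate in CRITICALITY_RATES.items():
    cost = dollar_cost_of_risk(rate, 80.0, 0.644,
                               date(2010, 2, 22), date(2011, 2, 22))
    print(f"{severity}: ${cost:.2f} per hour of find-and-fix effort")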

Slide 15

Thank you!
Geoffrey Hill
Artis Secure Ltd.

geoff-h@artis-secure.com
Feb 28, 2014

Seeing the Elephant can mean several things; the reasons I picked it:
Seeing the elephant of poor data collection in the security room
In ancient times, if an unprepared army saw an elephant, it ran! The same happens with many people when faced with operations and
metric collection
The story of the blind people describing an elephant: each has a different story (or set of metrics)
