
2017 Application Security Research Update
By HPE Software Security Research Team

Table of contents

Section I: Executive summary
Section II: The life of a vulnerability
    Finding AppSec weaknesses
    Managing risks from vulnerabilities
    Vulnerability reporting timeline
    Effect on AppSec programs
    SDLC models impact on vulnerabilities
Section III: New vulnerability trends from 2016
    Java deserialization
    Template injection (server and client side)
    LDAP entry poisoning
    JNDI injection
Section IV: How soon is too soon to worry about a new weakness?
Section V: Software analysis
    Why and how we do this analysis
    Application results
    Mobile results
    Top vulnerabilities in applications
    Top mobile vulnerabilities
Section VI: Risk analysis of external components
    Reliance on open source components
Section VII: Remediation
    Number of vulnerabilities fixed
    Remediation process in light of different development models
    Scan results
Section VIII: Conclusions
Authors
Contributors

Section I: Executive summary


Key risks

A single weak point in a line of code can create an open door for attackers. Our researchers have identified the top 5 AppSec risks that threaten your business.

1. Open source dependency
Enterprises continue to increase their dependence upon open source. While open source can reduce costs, effort must still be exerted to ensure that it has been tested to be free of serious vulnerabilities.

2. Lengthy exposure to zero-days
There can be significant gaps between when a vulnerability is discovered and when a patch becomes available. Even a prompt patch management program can leave enterprises vulnerable for months. During the time period when a researcher collaborates with a company to help their developers address a disclosed vulnerability, others may independently find, and potentially maliciously use, the same vulnerability.

3. Costly remediation
Removing security flaws gets more time consuming and expensive the longer they live in code. Removing vulnerabilities and preventing new ones from being introduced during development keep developers focused on delivering innovation.

4. Developers repeat mistakes
Whether discussing mobile, web, or desktop applications, data analysis indicates that developers continue to inject the same critical bugs into their code.

5. Remediation efforts and impact of DevOps need to be considered when scheduling security assurance
As companies move to integrate and automate security scans as part of CI/CD, shorter remediation times increase the confidence that projects can be delivered on time while addressing security concerns.

Number of applications that contained at least one critical or high security vulnerability: 80% of web and desktop apps; 68% of mobile apps; 91% of open source components.

Percentage of projects utilizing open source components, from past reports: 65% in 2015, 79% in 2016, 83% in 2017. Enterprises are increasingly betting their businesses on open source components. Each year, roughly half of the projects we scanned were composed of more open source than custom code.

Section II: The life of a vulnerability


Distinctions can be made in order to have more precise conversations around software security
assurance. Specifically, differentiating between weaknesses, vulnerabilities, and attacks is
beneficial. Software weaknesses are errors in architecture, design, source code, or configuration,
which may lead to vulnerabilities. Software vulnerabilities are instances of software weaknesses, found in software or hardware (firmware), which when exploited lead to a negative impact on confidentiality, integrity, or availability. Attacks are attempts to exploit a given vulnerability, at runtime, with the intent to compromise confidentiality, integrity, or availability.

The lifecycle of a vulnerability can be broken down as having a birth, a life, and a death.
Software vulnerabilities are born and start their lives as weaknesses, which are introduced
during design, implementation, or configuration of a software program. The real “life” of a
vulnerability is then realized the moment that the software is built, configured, and released into
a staging or production environment. If the vulnerability is “living” in the staging or production
environment, it may be attacked and exploited resulting in a compromise to confidentiality,
integrity, or availability. A vulnerability may be mitigated, essentially rendering it not exploitable,
while it is still “living“ in order to help ensure the safety of the program. A vulnerability, however,
does not “die” until the underlying weakness in the design, code, or configuration is fixed in a
future version of the software product.

Figure 1: The life of a vulnerability and process of vulnerability reporting

[Workflow diagram spanning the phases of security vulnerability research, static analysis and code review, vulnerability disclosure management, dynamic analysis, and IT operations and patch management (runtime monitoring). Recoverable steps: 1.a new vulnerability and/or new weakness category identified; 2.a tools/techniques able to detect the weakness; 2.b scanning/reviewing software for the weakness; 2.c audit: weakness is a vulnerability; 3.a zero-day identified; 3.b zero-day responsibly disclosed; 3.c vulnerability acknowledged; 3.d patch created (patch available?); 3.e CVE disclosed; 4.a tools able to detect the vulnerability; 4.b scanning software for the vulnerability; 4.c audit; 5.a tools able to protect against attack; 5.b runtime monitoring / WAF / IDS / IPS; 5.c apply patch.]

Legend: roles
• Owner of application (2.b, 2.c, 3.a, [3.c], 3.d, 3.e, 4.b, 4.c, 5.b, 5.c)
• Owner of dependency (2.b, 2.c, 3.a, [3.c], 3.d, 3.e)
• Consumer of application or dependency (4.b, 5.b, 5.c)
• Independent researcher (1.a, 2.b, 2.c, 3.a, 3.b, 4.b, 4.c)
• SAST vendor (1.a, 2.a, 2.b, 2.c)
• DAST vendor (1.a, 4.a, 4.b, 4.c)
• IAST vendor (1.a, 5.a, 5.b)
• RASP vendor (1.a, 5.a, 5.b, 5.c)

Finding AppSec weaknesses


As depicted in Figure 1, static analysis and code review are commonly used to identify potential security
weaknesses in source code when they are “born.” Static code analysis creates a model from
source code, which is then analyzed to determine if rules, intended to identify weaknesses,
are violated (Figure 1: 2b). This approach allows large amounts of software to be analyzed in a time-efficient manner, greatly reducing the time that manual code review would otherwise require to find the same weaknesses. The results of static analysis are then
reviewed by auditors and developers to identify which potential weaknesses need to be
fixed. Code review is also used to analyze source code and it is important to remember that
automated code assessment is not a complete replacement for code review. Code review is still
essential for certain portions of code, which may suffer from design flaws. Within the context of
security, any source code that implements security features should be reviewed for design flaws
related to confidentiality, integrity, and availability.
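
For a concrete illustration (a hypothetical snippet, not drawn from any scanned application), the fragment below shows the kind of source-to-sink data flow that such rules are designed to model and flag:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.servlet.http.HttpServletRequest;

public class TaintFlowExample {

    // A typical tainted data flow that static analysis rules describe:
    // request.getParameter() is a taint source, Statement.executeQuery() is a
    // SQL sink, and the unbroken flow between them violates an injection rule.
    static ResultSet findUser(Connection conn, HttpServletRequest request) throws SQLException {
        String name = request.getParameter("name");                          // source: untrusted input
        String query = "SELECT * FROM users WHERE name = '" + name + "'";    // taint propagates
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(query);                                     // sink: flagged as SQL injection
    }
}

A dynamic (black-box) scanner, by contrast, would have to discover the same flaw by sending crafted inputs to the running application and observing its responses.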

Ultimately, software that has undergone static code assessment and code review is built, tested for quality, and eventually released into production. One of
the limitations of static code assessment, during development and testing, is that it does not
consider the context of the resulting program when it is executing. On the other hand, dynamic
analysis includes black-box testing approaches, which attempt to find vulnerabilities while they
“live” in an application as it executes from within a staged testing environment or production
environment (Figure 1: 4b). Once vulnerabilities are detected by dynamic analysis, human
auditors review the results to determine which potential vulnerabilities must be addressed.
Because dynamic analysis evaluates a program during execution, it is capable of detecting
vulnerabilities that are caused by configuration and environmental flaws that are not present
during static analysis and code review. However, dynamic analysis is also not able to detect
certain vulnerabilities in a program during runtime because it does not have access to the
underlying code. Remember, this is a black-box approach, which is restricted to manipulating
inputs of the program and analyzing the resulting output.

Security vulnerability researchers employ many approaches to uncover vulnerabilities, including those depicted in Figure 1. Interestingly, identified vulnerabilities can fall into two primary
buckets. The first bucket includes those vulnerabilities that are instantiations of weaknesses of
known types (e.g., Cross-Site Scripting (XSS), Buffer Overflow, Command Injection, and Insecure
Transport). There are literally hundreds of different known weakness types1. In the second
bucket, we have the rarer species of vulnerabilities that are instances of a previously unknown
type of weakness. Each year, a certain number of new weakness types are identified in the
industry, which must be treated differently than the vulnerabilities in the first bucket.

1. https://vulncat.hpefod.com

Managing risks from vulnerabilities


When a new weakness type is identified, other security researchers and enterprises wishing to
reduce their risk must identify products, tools, and techniques to determine if these weaknesses
and vulnerabilities exist in their programs (Figure 1: 2a, 4a, 5b). If a weakness type is detectable,
then enterprises can reduce their own risk by identifying vulnerabilities as early as possible
during their secure development lifecycle. Because a new weakness type was, until recently, undetectable, updating runtime monitoring capabilities (Figure 1: 5a, 5b) can serve as a temporary mitigation. Runtime application self-protection (RASP) and other runtime
protection mechanisms, such as web application firewalls (WAF), intrusion detection systems
(IDS), and intrusion prevention systems (IPS) provide mechanisms to detect or mitigate attacks,
which exploit vulnerabilities in their “living” environment (Figure 1: 5a, 5b). These mechanisms
are very useful for protecting integrity, confidentiality, and availability until the vulnerabilities
“die” in a future patched version of the software that has the underlying weaknesses fixed
(Figure 1: 5c).

Once the production environment has been secured, the vulnerability may be explored in the
earlier phases of the lifecycle (Figure 1: 2a, 4a) where the weakness may be fixed appropriately.

Vulnerability reporting timeline


MITRE published guidelines for security researchers in 2016 to provide clarity on how to reserve
CVE ID(s) before publicizing a new vulnerability.2 It is important to note that not every vulnerability has an assigned CVE ID; assignment is at the discretion of the researcher and the vendor of the vulnerable product, either of whom can apply for a CVE ID. Publishing a CVE ID allows better tracking of a vulnerability and its patch. These guidelines ensure adequate coordination between researchers and the vendor of the vulnerable product during the period between when the vulnerability is discovered and when the patch is released.

The guidelines are also intended to avoid confusion when communicating about a unique vulnerability. A lot happens behind the scenes between when a vulnerability is discovered, when it is disclosed, and when the patch is made publicly available. Figure 2 depicts the timeline of a recent example, a Java Unsafe Deserialization issue in WebLogic that is largely credited with the increased focus on this decade-old weakness type.

Figure 2: Vulnerability disclosure timeline.

[Timeline: 2006: the Java Unsafe Deserialization weakness category was announced by researchers with theoretical examples (1.a). 08/24/2011: Spring RCE gadget disclosed, CVE-2011-2894 (3.e). 01/01/2015: Apache Commons-Collections gadget disclosed (3.e). 11/06/2015: researchers published a 0-day in WebLogic publicly without following a responsible disclosure policy (3.a). 11/10/2015: Oracle released a patch for CVE-2015-4852 (3.b). 02/16/2016: a patch bypass was reported in WebLogic and CVE-2016-3510 was reserved; runtime protection (5.a) needed by users of affected products during this period. 07/21/2016: the WebLogic team acknowledged the issue (3.c), a patch was created (3.d), and the CVE was disclosed to the public (3.e).]

2. https://cve.mitre.org/cve/researcher_reservation_guidelines

An unsafe Java deserialization vulnerability was discovered in WebLogic sometime in 2015


and was responsibly disclosed to Oracle, which was in the process of fixing it when another
independent researcher chose to provide a full disclosure of the vulnerability without first
following a responsible disclosure practice,3 marking an incomplete vulnerability disclosure
management cycle in Figure 1.

This string of events means that organizations that rely on external products and dependencies
in their environments need to have an efficient patch management process in place because a
vulnerability is typically discovered much earlier than the patch is released and details become
publicly available. Organizations might need to have compensating controls in place to account
for an insufficient patch and irresponsible disclosures. Furthermore, organizations may need
to have continuous monitoring (Figure 1: 5a) to possibly catch vulnerabilities that may never
be disclosed. These are the vulnerabilities that did not follow any part of the vulnerability
disclosure management lifecycle, and are exploited in the wild.

Effect on AppSec programs


A mature security assurance program might already have controls to address each stage a vulnerability follows during the lifecycle. An absence of controls adds to risk. Organizations often do not have resources to address all controls with the same priority. Maturity models like the Building Security In Maturity Model (BSIMM) and the OWASP Software Assurance Maturity Model (OpenSAMM) can provide the framework to evaluate their current software assurance strategy against what other enterprises do and formulate their security assurance strategy accordingly. BSIMM, for instance, breaks all software security initiatives into four groups of activities (governance, intelligence, secure software development lifecycle (SSDL) touchpoints, and deployment) with three practices each. It then divides each practice into various tasks to achieve the objective of the practice. Each task is rated by the number of enterprises that include it in their software security assurance program. BSIMM collected data from 95 enterprises in its seventh release.4 It listed identifying and monitoring open source dependencies under Intelligence and reported that these are done by 22% and 8% of participating enterprises respectively. Dynamic testing is listed under deployment and is done by 57% of participants. Code review using automated tools is an SSDL touchpoint task and is done by 66% of the participants, while runtime monitoring on deployment is only done by 3% of the enterprises studied.

Static code analysis helps identify potential zero-days in organic code. A zero-day in
dependencies, however, is what often creates a blind spot when the source code in the
dependent library is not available to analyze or was not included in the analysis. Dynamic
analysis and runtime monitoring may help compensate for those blind spots.

To summarize, enterprises are potentially unprotected at several points in the vulnerability


timeline. By understanding the process and potential exposures, you can better assess the
efforts required for a holistic AppSec program. The points of exposure include:
1. Zero-days that are in the CVE lifecycle
2. Zero-days that remain undiscovered in the code
3. Zero-days that remain undiscovered in your dependencies
4. Zero-days related to weaknesses, which are unsupported by AppSec detection technologies

Protection against zero-days requires investment in robust AppSec detection and


protection capabilities along with a proactive approach that anticipates delays in disclosure
and patch availability.
3. https://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
4. https://go.bsimm.com/hubfs/BSIMM/BSIMM7.pdf

SDLC models impact on vulnerabilities


Different development models exist, including spiral/waterfall/agile, providing a framework for developing applications. Each of these frameworks implies certain cultural, procedural, and development methods that impact how and when security assurance steps are applied. They also involve writing new code and potentially consuming code from third parties. DevOps applies agile and lean approaches to development operations and brings with it greater automation, so there is an expectation that the automation would include security. Automating security scans offers the opportunity to improve the efficiency and effectiveness of security efforts, along with consistent application of AppSec techniques.

In the 2016 research paper, “Application Security and DevOps,” we show that DevOps alone does not make you more secure. While 99% of those surveyed expected that DevOps had the potential to make you more secure, the actions of those same respondents did not match their expectations. A surprising 17% were doing no application security and 25% were relying upon network security to protect their applications. In addition, only 20% were practicing AppSec testing in development, while 38% were waiting for a pre-production gate (a more traditional waterfall approach).5

Figure 3: Promise versus reality of security in DevOps
[Chart: where surveyed organizations apply application security: pre-production gate 38%, network security 25%, testing during development 20%, none 17%.]

So, while there is an almost universal expectation that DevOps offers the opportunity to improve AppSec, common AppSec practices have not yet evolved to meet that expectation. The data studied here, from those already practicing AppSec testing, would indicate greater interest in more rapidly finding and fixing security flaws. Your AppSec posture is less about the development methodology you practice and more about how, and when, you integrate application security assurance practices into the chosen development model.

99% of those surveyed agreed that DevOps is an opportunity to improve application security. But only 20% perform application security testing during development. Most wait until late in the SDLC, or not at all.5

5. HPE Secure DevOps Survey, September 2016.

Section III: New vulnerability trends from 2016

Application security is a living discipline that keeps evolving and growing over time. While some vulnerability types are being
slowly controlled with secure defaults and defensive frameworks, others (new and unknown) emerge from increasing software
complexity and dependencies in software development.

In this section, we cover new vulnerability trends. Some of the vulnerabilities are new this year, resulting from the latest software
security research, while others have been flying under the radar for quite some time and were exposed to mainstream attention
during the course of 2016.


1. Java deserialization

Java deserialization flaws and exploits are not new; however, in 2016 this vulnerability category became mainstream due to a great amount of research on the subject. This research has led to an arsenal of exploits, protection measures, and techniques to bypass them. More importantly, the research helped spread the most comprehensive information on this topic to the masses. The increased popularity has made it easier for developers to learn about this vulnerability and deploy the necessary controls.6

Any application consuming Java-serialized untrusted data can be attacked. One could debate whether a known gadget (a regular class in the application classpath that an attacker can repurpose for exploitation) must be available in order to exploit the weakness; however, the fact that you are not aware of any gadgets in your application classpath does not mean they are not there, or that a criminal hacking group does not know about them. If the application takes untrusted data and uses it for Java deserialization, it needs to be protected, and you should not wait for the newest and shiniest gadget to be released.

Depending on the gadgets available to a vulnerable application classpath, an attacker may carry out different attacks—from
remote code execution and denial of service to attacks against the logic of the application.

These vulnerabilities should be detectable in your code/application by products implementing static application security testing (SAST), dynamic application security testing (DAST), interactive application security testing (IAST), or runtime application self-protection (RASP).
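
As a minimal sketch (illustrative code, not taken from the report; the class names in the allow-list are hypothetical), the first method shows the vulnerable pattern of deserializing untrusted bytes directly, and the second shows one common hardening option: a look-ahead allow-list enforced through a resolveClass override.

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DeserializationExample {

    // Vulnerable pattern: untrusted bytes (e.g., from an HTTP request or a
    // message queue) are deserialized directly. Any gadget chain available on
    // the classpath can be triggered by an attacker-supplied object graph.
    static Object insecureDeserialize(byte[] untrusted) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(untrusted))) {
            return in.readObject(); // the attacker controls the object graph
        }
    }

    // Hypothetical allow-list of the only types this endpoint expects.
    static final Set<String> ALLOWED =
            new HashSet<>(Arrays.asList("com.example.dto.OrderDto", "java.lang.Integer"));

    // One hardening option: reject unexpected classes before they are instantiated.
    static Object saferDeserialize(byte[] untrusted) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(untrusted)) {
            @Override
            protected Class<?> resolveClass(ObjectStreamClass desc)
                    throws IOException, ClassNotFoundException {
                if (!ALLOWED.contains(desc.getName())) {
                    throw new InvalidClassException(desc.getName(), "class not allowed for deserialization");
                }
                return super.resolveClass(desc);
            }
        }) {
            return in.readObject();
        }
    }
}

On Java 9 and later, the standard serialization filter introduced by JEP 290 (java.io.ObjectInputFilter) is generally a better fit than a hand-rolled resolveClass override; better still is avoiding native Java serialization of untrusted data altogether.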


2. Template injection (server and client side)

Server-Side Template Injection (SSTI) and Client-Side Template Injection (CSTI) are not new. SSTI was presented at Black Hat USA 2015, and sandbox escapes for AngularJS (an MVC framework developed by Google that uses client-side templates) have been known for a while, although they became more relevant after the publication of their server-side sibling. In 2016, we saw new research on these vulnerabilities, as well as changes in some of the templating engines, which have made them even more relevant.7

With SSTI, depending on the template framework used, an attacker may gain remote code execution (RCE), cause a denial of
service (DoS), or leak sensitive information present in the template context.

With CSTI, an attacker may be able to inject arbitrary data on the client-side template and bypass any sandboxes installed to
prevent access to the browser DOM, as well as gain the ability to execute arbitrary JavaScript code in the context of the page.

These vulnerabilities should be detectable in your code by products implementing SAST, DAST, IAST, or RASP, although for client-side injections, IAST may not be as effective because it has limited visibility into the browser's DOM.
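
A minimal, hypothetical Java sketch (the TemplateEngine interface and its render method are illustrative, not a real library API): the unsafe pattern concatenates untrusted input into the template source itself, so attacker-supplied template expressions are evaluated, while the safer pattern passes untrusted input only as template data.

import java.util.Collections;
import java.util.Map;

public class TemplateInjectionExample {

    // Hypothetical template engine interface, for illustration only.
    interface TemplateEngine {
        String render(String templateSource, Map<String, Object> model);
    }

    // Vulnerable pattern: untrusted input becomes part of the template source,
    // so expressions the attacker embeds (e.g., ${...} or {{...}}, depending on
    // the engine) are evaluated with server-side privileges.
    static String greetInsecure(TemplateEngine engine, String untrustedName) {
        String template = "<p>Hello " + untrustedName + "!</p>";
        return engine.render(template, Collections.<String, Object>emptyMap());
    }

    // Safer pattern: the template is a fixed, developer-controlled string and the
    // untrusted value is supplied only as data, which the engine treats as a value.
    static String greetSafer(TemplateEngine engine, String untrustedName) {
        String template = "<p>Hello ${name}!</p>"; // placeholder syntax is illustrative
        return engine.render(template, Collections.<String, Object>singletonMap("name", untrustedName));
    }
}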

6. https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995#.WInW0rbyvpQ
7. Server-side template injection: https://www.blackhat.com/docs/us-15/materials/us-15-Kettle-Server-Side-Template-Injection-RCE-For-The-Modern-Web-App-wp.pdf

3. LDAP entry poisoning

LDAP Entry Poisoning was introduced at Black Hat USA 2016 and quickly gained attention, with multiple CVEs reported for vendors such as Oracle, Spring, Apache, Atlassian, ForgeRock, and JFrog.8

An attacker able to modify an LDAP entry in a directory service may gain remote code execution on any vulnerable
application querying for the attacker-controlled entry, including applications integrated with a directory service for
authentication purposes. As demonstrated at the Black Hat session, after poisoning an LDAP user, an attacker can easily gain
remote code execution on vulnerable applications by just trying to log into the application, even using an incorrect password.

Because LDAP Entry Poisoning is a second order vulnerability where attackers need to first poison an LDAP entry,
and in a second stage attack the vulnerable application, SAST is the most effective approach to find this vulnerability.
In order to detect it with DAST or IAST, you would need to first modify a test LDAP account and then use DAST/IAST
to exercise the vulnerability.
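
A rough sketch of the risky ingredient, assuming a standard JNDI directory search (illustrative code, with a hypothetical directory host): asking JNDI to return Java objects from directory entries. An attacker who can write to the directory can then plant attributes that cause object decoding, and potentially code execution, on the application side when the poisoned entry is read.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapEntryPoisoningExample {

    static void lookupUser(String uid) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389"); // hypothetical host
        DirContext ctx = new InitialDirContext(env);

        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        // Risky setting: returning objects lets JNDI reconstruct Java objects
        // (serialized data, references, or factories) stored in directory attributes.
        controls.setReturningObjFlag(true);

        // Note: concatenating uid into the filter is a separate LDAP injection issue.
        NamingEnumeration<SearchResult> results =
                ctx.search("ou=people,dc=example,dc=com", "(uid=" + uid + ")", controls);
        while (results.hasMore()) {
            SearchResult result = results.next();
            // If an attacker poisoned this entry's javaSerializedData or
            // javaFactory attributes, getObject() may already have triggered
            // attacker-controlled code while decoding the entry.
            Object obj = result.getObject();
            System.out.println("Resolved entry object: " + obj);
        }
        ctx.close();
    }
}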


4. JNDI injection
Java Naming and Directory Interface (JNDI) Injection was also presented at Black Hat USA 2016. Multiple CVEs have been
granted in recent months for this issue by vendors such as Oracle.

Applications are vulnerable to JNDI injection when they pass untrusted data to a JNDI lookup operation or any wrapper
around the lookup operation. If an application fails to sanitize the data passed to these APIs, an attacker may easily gain
remote code execution by forcing the JNDI reference resolver to look up an attacker-controlled JNDI reference. Due to the way in which the JNDI naming manager resolves JNDI references, the attacker-controlled reference can force the vulnerable application to download an attacker-controlled payload and execute it on the vulnerable server.

These vulnerabilities should be detectable in your code/application by products implementing SAST, DAST, IAST, or RASP, but because of the required interactions with attacker-controlled servers, SAST is the best fit to pinpoint the potential exploitability of this vulnerability.
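
A minimal illustration of the vulnerable pattern (assumed, not taken from any disclosed product): untrusted input reaching a JNDI lookup. If an attacker supplies a name such as "ldap://attacker.example/obj" (a hypothetical URL), the naming manager may fetch and instantiate a remote, attacker-controlled object.

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JndiInjectionExample {

    // Vulnerable: the lookup name is attacker-controlled, so it can point at a
    // remote, attacker-controlled JNDI reference.
    static Object resolveInsecure(String untrustedName) throws NamingException {
        InitialContext ctx = new InitialContext();
        return ctx.lookup(untrustedName);
    }

    // Safer (illustrative control): constrain the value to a simple key and
    // restrict lookups to a fixed, local namespace.
    static Object resolveSafer(String untrustedKey) throws NamingException {
        if (!untrustedKey.matches("[a-zA-Z0-9_]+")) {
            throw new IllegalArgumentException("Unexpected JNDI key");
        }
        InitialContext ctx = new InitialContext();
        return ctx.lookup("java:comp/env/" + untrustedKey);
    }
}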

8. https://www.blackhat.com/docs/us-16/materials/us-16-Munoz-A-Journey-From-JNDI-LDAP-Manipulation-To-RCE-wp.pdf

Section IV: How soon is too soon to worry about a new weakness?
Newly reported weaknesses should be closely monitored, especially if security assurance
products can find them. For example, Figure 4 shows a selection of vulnerabilities that Fortify
Security researchers responsibly disclosed in referenced libraries and applications. Let us take
one of the four reported issues last year, Dynamic Code Evaluation: Unsafe Deserialization, as a
point of reference.

Figure 4: Number of disclosures by Fortify researchers in new categories

Dynamic Code Evaluation: JNDI Reference Injection (5 disclosures)
Dynamic Code Evaluation: Unsafe Deserialization (6 disclosures)
LDAP Entry Poisoning (10 disclosures)
Access Control: SecurityManager Bypass (2 disclosures)
OGNL Expression Injection: Double Evaluation (1 disclosure)

The community has known about unsafe deserialization since at least 2006 (Schoenefeld,
2006). Schoenefeld’s paper presented how this weakness could culminate in a vulnerability.
However, only the discovery of high-profile vulnerabilities in systems such as WebSphere and
Jenkins led to the community placing a spotlight on this weakness. Around the same time as
the exploitation of the issue using Apache Commons collection library, Fortify researchers
added new rules for detecting this category of weaknesses in source code. As part of the work
leading to these enhancements, researchers discovered the existence of Unsafe Deserialization
issues in various vendor offerings. Following responsible disclosure practices, Fortify
researchers worked with these vendors to help ensure that their products were eventually
patched. Six issues were responsibly disclosed to vendors, and all six reports were acknowledged. Then, once remediation details were published with a Common Vulnerabilities and Exposures (CVE) designation, dependent products were able to update their library versions.

Tracking these disclosures and CVEs provides the unique opportunity to present the
following general observations:
• Researchers identify new security weaknesses that may lead to serious consequences, presented with theoretical examples (Figure 1: 1a).
• A weakness may fail to gather attention from security assurance product vendors and the security community at large until a zero-day is reported (Figure 1: 3a), as in the case of unsafe deserialization in 2015, which was reported in widely used critical software nine years after the weakness type was first presented. This exposed many systems to a critical zero-day, forcing application owners either to apply a patch (Figure 1: 5c), to assume the risk of keeping their systems online with a known zero-day, or to lose productivity by turning off the functionality.
• Source code analyzers and dynamic scanners can identify weaknesses, giving vendors a chance to find, fix, and, if applicable, inform their customers about the issue before external researchers identify it (Figure 1: 2a to 2c and 4a to 4c). For example, rules added to the static code analyzer offered by Fortify to detect unsafe deserialization weaknesses identified 4470 instances in 323 unique applications over the nine months since February 2016, in web applications scanned by the Fortify on Demand service, supporting the claim that this weakness is common. It is safe to predict that unsafe deserialization vulnerabilities will continue to plague web applications for some time.
• Application owners might learn about the vulnerability directly (Figure 1: 4a) if they employ an external dependency security management system such as Sonatype or Black Duck. Figure 4 reflects vulnerabilities identified in vulnerability disclosure (steps 3a to 3e in Figure 1). From the six vendors that received advisories from Fortify, it is clear that an attacker could know of a vulnerability for months before an application owner has a chance to take action to fix the defect or have consumers prevent the vulnerability from compromising their environments. Attackers may either independently discover or learn about zero-days before a patch is available. They may watch various indicators of software development activity, such as library development in public code repositories (especially popular dependencies that are referenced widely).

After remaining quiet for nine years, Unsafe Deserialization gained a spotlight in 2015 partly due to the existence of classes in commonly available libraries used in server environments, such as the Apache Commons Collections library, that could be used to exploit vulnerable deserialization instances. WebSphere and Jenkins are widely popular, and the existence of complex but critical issues such as Unsafe Deserialization immediately made them a prime target for attackers. Merely having a library as a dependency may not trigger the issue; however, when combined with deserialization code and input from untrusted sources, it can manifest as an exploitable instance.

Fortify researchers disclosed the existence of unsafe deserialization issues in various gadgets and end points. Five vendors have already released a patch, with a median response time of 99 days; the sixth vendor is still working on the fix.

Fortify researchers also disclosed issues to various vendors in four other categories—Dynamic
Code Evaluation: JNDI Reference Injection, LDAP Entry Poisoning, Access Control: Security
Manager Bypass, and OGNL Expression Injection: Double Evaluation. All reported issues were
rated critical, except one of the two disclosed issues in Access Control: Security Manager
Bypass, which was rated to be of medium severity. Figure 5 shows the median response time
to acknowledge and patch respectively in each of the reported categories.

Figure 5: Median vendor response time to acknowledge and issue a patch

Dynamic Code Evaluation: JNDI Reference Injection (8 days to acknowledge, 40 days to patch, median)
Dynamic Code Evaluation: Unsafe Deserialization (18 days to acknowledge, 99 days to patch)
LDAP Entry Poisoning (4 days to acknowledge, 112 days to patch)
Access Control: SecurityManager Bypass (8 days to acknowledge, 155 days to patch)
OGNL Expression Injection: Double Evaluation (1 day to acknowledge, 17 days to patch)

Figure 6 shows the number of applications that were identified to be vulnerable by Fortify SCA
(Static Code Analyzer) and whether dependencies were found to be vulnerable by Sonatype, an
external dependency security management product.

Figure 6: Number of applications found to be vulnerable to newer types of weaknesses in 2016 by SCA and/or Sonatype

Dynamic Code Evaluation: JNDI Reference Injection: 0 vulnerable web application projects (SCA), 0 vulnerable dependent projects (Sonatype), 0 overlap
LDAP Entry Poisoning: 13 (SCA), 0 (Sonatype), 0 overlap
Access Control: SecurityManager Bypass: 3 (SCA), 51 (Sonatype), 0 overlap
OGNL Expression Injection: Double Evaluation: 0 (SCA), 13 (Sonatype), 0 overlap
Dynamic Code Evaluation: Unsafe Deserialization: 323 (SCA), 67 (Sonatype), 32 overlap

There were 323 applications found to be vulnerable in the Dynamic Code Evaluation: Unsafe Deserialization category over a 9-month period. Vendors took, on average, 99 days to patch these types of vulnerabilities.

One interesting observation from Figure 6 is that both SCA and Sonatype reported issues in three out of five categories; however, the categories differ between the two. High numbers for Dynamic Code Evaluation: Unsafe Deserialization from both SCA and Sonatype support the publicity it has attracted: it is indeed a very commonly occurring erroneous coding pattern. The existence of a large number of gadgets in the classpath of these applications makes any found vulnerabilities highly likely to be exploited and, once found, they could have severe consequences. This earns the weakness category an overall rating of very critical.

It is also interesting to see that Dynamic Code Evaluation: JNDI Reference Injection was not flagged by either SCA or Sonatype during the time period of the collected data. This could be because the category was added in the second quarter of 2016. The SCA rule for detecting the LDAP Entry Poisoning issue was released around the same time. Furthermore, out of 10 disclosed LDAP Entry Poisoning issues, only six have been fixed with a patch made available, and two of those patches were issued outside the analysis period for this report. This leads to the conclusion that there are at least four products out there with potential zero-day vulnerabilities, likely with severe consequences.

In Section VI, we present further analysis focused on the risk imposed by external components. That section highlights the top weaknesses, top libraries, and severity of the weaknesses reported in web and mobile applications scanned by Fortify on Demand with Sonatype. Attackers often focus their efforts on popular components, and this analysis highlights popular components and their state of security.

Section V: Software analysis


The following sections of the report are based on the data collected by the HPE Security Fortify
on Demand (FOD) software as a service (SAAS) platform. The service combines a number
of techniques and analysis types for identifying security holes in customer code. In order to
provide a meaningful analysis of the data, it is important to use a common vocabulary. The
HPE Software Security Taxonomy classifies security findings into “kingdoms” of categories.
Introduced in 2005 and rejuvenated in 2014, the “Seven Pernicious Kingdoms” taxonomy
continues to evolve, and is constantly being expanded to include new types of vulnerabilities
and re-aligned to provide more meaningful and actionable classifications. The results discussed
in this report use the most current taxonomy.

Why and how we do this analysis


Each year, the security research team at HPE Security releases a report on the current state
of application security. The report highlights the current trends in security so that software
developers can better understand the present risks and have the data to make better informed
decisions regarding AppSec in their software development lifecycle (SDLC) and security of their
applications. This application security risk report analyzes vulnerabilities exclusively in software.

The anonymized and sanitized data analyzed in this report was collected through the HPE
Security Fortify on Demand service between October 30, 2015, and October 30, 2016. Due
to the difference in the number of mobile and non-mobile applications scanned, and the types
of vulnerabilities reported in both datasets, the report considers them separately. Non-mobile
applications, referred to as “apps” in this report, primarily consist of web and desktop apps. The
“apps” dataset is relatively similar to that of last year, with almost 7500 applications. The mobile
dataset consists of more than 570 Android and iOS applications.

Application results
Figure 7 shows the percentage of non-mobile applications vulnerable to at least one
vulnerability from a specific kingdom. Most of the results are similar between 2015 and 2016.

The Security Features kingdom was again the most prevalent this year, with 91% of
applications vulnerable, followed by Environment with 76% and Encapsulation with 60%.
The percentage of applications found to be vulnerable to Security Features or Environment
remained relatively the same as last year. Encapsulation, however, had a 12% drop from 72% last
year to 60% this year. Re-alignment of categories to alternative kingdoms contributed to the
decrease in the percentage of apps vulnerable to the categories in the Encapsulation as well as
the Errors kingdoms, which had a drop of 20% from 43% last year to 21% this year.

Figure 7: Likelihood of apps found to be vulnerable per kingdom (overall 2015 vs. 2016 percentage)

API Abuse: 30% (2015), 34% (2016)
Code Quality: 21% (2015), 30% (2016)
Encapsulation: 72% (2015), 60% (2016)
Environment: 77% (2015), 76% (2016)
Errors: 43% (2015), 21% (2016)
Input Validation and Representation: 44% (2015), 50% (2016)
Security Features: 90% (2015), 91% (2016)
Time and State: 20% (2015), 24% (2016)

The percentage of applications vulnerable to Web Server Misconfiguration: OPTIONS


HTTP Method, which is part of the Environment kingdom, decreased significantly this year,
which allowed the overall percentage of apps vulnerable to Environment to stay the same
despite our reassignment of the Poor Error Handling: Server Error Message category to the
Environment kingdom. We renamed Poor Error Handling: Server Error Message to Web Server
Misconfiguration: Server Error Message, then re-assigned it to the Environment kingdom
because fixing the problem involves changing the web server’s configuration settings rather
than implementing better handling of error messages. Security Features, Environment, and
Encapsulation continue to be the top three vulnerabilities this year.

The percentage of applications vulnerable to Code Quality issues increased by 9%, from 21% last year to 30% this year, due to an increase in the number of applications vulnerable to dereferencing null values.

Mobile results
Figure 8 illustrates that the percentage of mobile applications vulnerable to vulnerabilities in
specific kingdoms has not changed much at all between 2015 and 2016. The number of apps
vulnerable to Security Features categories stayed at the top, followed by Encapsulation and
Environment. It is worth noting, however, that unfortunately almost all mobile applications are
vulnerable to at least one category from the Security Features kingdom. Incorrect use of SSL,
keychains, and permissions, as well as various types of privacy violations, remain problematic for
mobile applications.

On the other hand, while Security Features, Encapsulation, and Environment remain at the top
for both web and mobile datasets with Security Features in the lead, the order for the other
two is different (refer to Figure 9). For web apps, Environment follows Security Features, and for
mobile apps, it is Encapsulation. Considering that securing mobile applications should include
implementing protections against other potentially malicious apps residing on the same system,
the fact that there are more mobile apps vulnerable to encapsulation categories than web and
desktop apps makes sense.

Figure 8: Likelihood of mobile apps found to be vulnerable per kingdom (overall 2015 vs. 2016 percentage)

API Abuse: 79% (2015), 75% (2016)
Code Quality: 9% (2015), 14% (2016)
Encapsulation: 93% (2015), 96% (2016)
Environment: 88% (2015), 86% (2016)
Errors: 23% (2015), 19% (2016)
Input Validation and Representation: 82% (2015), 81% (2016)
Security Features: 99% (2015), 96% (2016)
Time and State: 40% (2015), 40% (2016)



Two additional differences between the mobile and non-mobile results are in the percentage
of apps vulnerable to API Abuse and Input Validation and Representation kingdoms. Similar
to last year, three main categories contributing to the API abuse issues affecting mobile apps
are Often Misused: Push Notifications, Often Misused: Ad/Analytics Frameworks, and Often
Misused: General Pasteboard. For Input Validation and Representation, they are SQL Injection,
URL Scheme Manipulation, and Cross-Site Scripting: Reflected.

Figure 9: Comparing kingdom incidence in mobile and non-mobile applications scanned (percentage of apps containing the type of weakness)

API Abuse: 34% (web), 75% (mobile)
Code Quality: 30% (web), 14% (mobile)
Encapsulation: 60% (web), 96% (mobile)
Environment: 76% (web), 86% (mobile)
Errors: 21% (web), 19% (mobile)
Input Validation and Representation: 50% (web), 81% (mobile)
Security Features: 91% (web), 96% (mobile)
Time and State: 24% (web), 40% (mobile)

Top vulnerabilities in applications


The graph in Figure 10 presents the 10 most commonly detected vulnerability types in non-
mobile applications within the last year. Nine out of 10 vulnerability types stayed the same as in 2015; however, Cookie Security: Persistent Cookie replaced Hidden Field in the chart. Just
like last year, System Information Leak: External remained at the top with more than half of all
the apps containing at least one issue of this type. Cross-Frame Scripting jumped from fourth to
second place, while Insecure Transport: Weak SSL Protocol moved from ninth to fifth.

Figure 10: The 10 most commonly occurring vulnerabilities in the applications dataset (percentage of vulnerable apps)

System Information Leak: External 51%
Cross-Frame Scripting 50%
Insecure Transport: HSTS not Set 50%
Cookie Security: Cookie not Sent Over SSL 48%
Insecure Transport: Weak SSL Protocol 46%
Cookie Security: HTTPOnly not Set 43%
Web Server Misconfiguration: Unprotected Directory 39%
Privacy Violation: Autocomplete 37%
Web Server Misconfiguration: Server Error Message 36%
Cookie Security: Persistent Cookie 36%

Figure 11 shows the 10 most commonly detected high and critical vulnerabilities in the
applications dataset. Similar to the results for most commonly occurring vulnerabilities, nine
out of 10 most commonly occurring critical vulnerabilities stayed the same as compared to the
year before. The good news is that Insecure Transport: Weak SSL Cipher is no longer on the list.
The bad news is that in addition to Password Management: Hardcoded Password, Password
Management: Password in Configuration File also made the top 10 this year. Cross-Frame
Scripting jumped ahead of Cross-Site Scripting: Reflected and made it to number one, while
Insecure Transport jumped from ninth to fifth place.
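
As an illustrative sketch of the two password management categories above (the connection URL, user, and environment variable names are hypothetical), the first method shows the hardcoded-credential pattern these rules flag, and the second shows one common alternative of resolving the credential at runtime:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class CredentialHandlingExample {

    // Flagged pattern (Password Management: Hardcoded Password): the secret is
    // embedded in source code, visible to anyone with repository or binary
    // access, and cannot be rotated without shipping a new release.
    static Connection connectInsecure() throws SQLException {
        return DriverManager.getConnection(
                "jdbc:postgresql://db.example.com/orders", "app_user", "S3cretPassw0rd");
    }

    // One common alternative: resolve the credential at runtime, for example from
    // an environment variable populated by a vault or the deployment platform.
    static Connection connectSafer() throws SQLException {
        String password = System.getenv("ORDERS_DB_PASSWORD"); // hypothetical variable
        if (password == null) {
            throw new IllegalStateException("Database credential not provisioned");
        }
        return DriverManager.getConnection(
                "jdbc:postgresql://db.example.com/orders", "app_user", password);
    }
}

The same reasoning applies to Password Management: Password in Configuration File, where the secret merely moves from source code into an unprotected properties file.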

Overall, the data indicates that problems caused by cross-frame scripting are on the rise, and
there are generally no positive changes with respect to the types of vulnerabilities and the
number of applications vulnerable to them. Furthermore, almost 80% of applications contain at
least one critical or high vulnerability.

Figure 11: The 10 most commonly occurring critical vulnerabilities in the applications dataset (percentage of apps with critical vulnerabilities)

Cross-Frame Scripting 29%
Cross-Site Scripting: Reflected 24%
Insecure Transport: Weak SSL Protocol 21%
Null Dereference 17%
Insecure Transport 14%
Unreleased Resource: Streams 13%
Password Management: Password in Configuration File 13%
Privacy Violation 13%
Often Misused: Login 13%
Password Management: Hardcoded Password 11%

Nearly 80% of applications contain at least one critical or high vulnerability.

Top five vulnerability categories in applications


Now let us look more closely at the five most commonly detected vulnerability categories in
the applications dataset. They span three kingdoms—Security Features, Encapsulation, and
Environment—which is consistent with our analysis of vulnerability distribution by kingdoms
across applications, and may be found in Figure 12. The categories and percentages stayed relatively the same when compared to the previous year, with the exception of Cross-Frame Scripting, which replaced Privacy Violation. Once again, we see Cross-Frame Scripting entering charts it was not part of last year and climbing higher in the charts it was already part of.

Misuse of SSL, evidenced by Insecure Transport and Cookie Security vulnerabilities, still
contributes to a vast majority of reported issues. In the Web Server Misconfiguration category,
Unprotected Directory, Server Error Message, Insecure Content-Type Setting, and Unprotected
File are the types of vulnerabilities that contribute to misconfiguration issues the most. In the
System Information Leak category, vulnerabilities that contribute the most and are also the
most critical are the ones that result in leakage of useful information about the system to an
external attacker. While not being the most critical vulnerability in and of itself, it may help an
attacker form a plan for a more severe attack.
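
For reference, a brief sketch of how the cookie-related findings above are typically remediated, assuming a Servlet 3.0+ container (cookie and header values are illustrative):

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class CookieHardeningExample {

    static void addSessionCookie(HttpServletResponse response, String sessionToken) {
        Cookie cookie = new Cookie("SESSION", sessionToken);
        cookie.setSecure(true);    // addresses Cookie Security: Cookie not Sent Over SSL
        cookie.setHttpOnly(true);  // addresses Cookie Security: HTTPOnly not Set
        cookie.setMaxAge(-1);      // session-scoped cookie, avoids Persistent Cookie findings
        response.addCookie(cookie);

        // Insecure Transport: HSTS not Set is addressed by instructing browsers
        // to use HTTPS for all subsequent requests to this host.
        response.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
    }
}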

Figure 12: The five most frequently spotted categories across applications (percentage of vulnerable apps)

Insecure Transport 68%
Web Server Misconfiguration 62%
Cookie Security 58%
System Information Leak 53%
Cross-Frame Scripting 50%

The most critical vulnerabilities are the ones that result in leakage of useful information about the system to an external attacker, because it may help an attacker form a plan for a more severe attack.

Top mobile vulnerabilities


The 10 most commonly detected vulnerabilities in the mobile dataset are shown in Figure 13.
Unlike web and desktop applications, where external leaks of system information occurred in
more apps than any other vulnerability type, more mobile applications contain internal leaks
of system information than any other vulnerabilities. Internal system leaks include information
revealed to other mobile applications installed on the same system, which is a much bigger
concern in the mobile world. When compared to the previous year, several vulnerability types entered the list: Weak Cryptographic Hash, which seems to have replaced Weak Encryption; Insecure Storage: Insufficient Data Protection, which replaced Insecure Deployment: Missing Jailbreak Protection; and SQL Injection, which is new to the list.

Figure 13: The 10 most commonly occurring vulnerabilities in the mobile applications dataset (percentage of vulnerable mobile apps)

System Information Leak: Internal 89%
Weak Cryptographic Hash 78%
Insecure Storage: Insufficient Data Protection 72%
Insecure Storage: Lacking Data Protection 63%
Privacy Violation: Geolocation 54%
SQL Injection 53%
Privacy Violation: iOS Property List 51%
Often Misused: Push Notifications 47%
Often Misused: Ad/Analytics Frameworks 46%
Privacy Violation: Screen Caching 45%

Figure 14 presents similar analysis as above, but for critical vulnerabilities. Overall, more than
68% of mobile applications contain at least one critical or high security vulnerability, which is
actually down from the previous year (75%) and less than in the case of web and desktop apps
(80%). Furthermore, Insecure Transport vulnerabilities related to the lack of SSL usage that we
saw leading in 2015 got pushed down by Insecure Transport: Weak SSL Protocol and Insecure
Transport: Weak SSL Cipher vulnerabilities in 2016.

While the results are far from perfect, it is progress that only 12% of mobile applications this
year do not use SSL versus 30% in last year’s report. Another positive change is a decrease in
the number of mobile applications vulnerable to Privacy Violation issues, which dropped from
29% in 2015 to 4% in 2016. Similarly, null dereference issues dropped from 27% in 2015 to 3% in
2016. Unfortunately, Cross-Site Scripting: Reflected entered the charts in 2016 at 15%.

Figure 14: The 10 most commonly occurring critical-severity vulnerabilities in the mobile dataset (percentage of vulnerable mobile apps)

Account Management: Inadequate Account Lockout 22%
Password Management: Weak Password Policy 20%
Insecure Transport: Weak SSL Protocol 19%
Cross-Site Scripting: Reflected 15%
Intent Manipulation: Unvalidated Input 13%
Insecure Transport 12%
Parameter Tampering: Special Characters 9%
Insecure Transport: Weak SSL Cipher 9%
Privacy Violation: Insufficient Authentication Mitigation 8%
Insecure Storage: Lacking Data Protection 7%

Top five vulnerability categories in mobile


The top five categories most commonly detected in mobile applications are shown in Figure 15.
The Insecure Storage category that contains issues related to misuse of data protection APIs
continues to lead. The System Information Leak category, dominated by issues related to leaking data internally as well as externally, and the Privacy Violation category, dominated by issues related to geolocation, iOS property lists, and screen caching, switched places. On the other hand, the Weak Cryptographic Hash category entered the chart, with more than 78% of applications containing issues related to misuse of cryptographic hashing APIs.

Figure 15: The five most frequently spotted mobile vulnerabilities (percentage of vulnerable mobile apps)

Insecure Storage 91%
System Information Leak 90%
Privacy Violation 88%
Weak Cryptographic Hash 78%
Often Misused 74%

Section VI: Risk analysis of external components


In this fast-paced application development environment, use of external components,
specifically open source, is an essential part of development strategy. Application owners must
keep track of risks inherited by association to external components.

Our sample consists of 263 CVEs that were reported across 184 different libraries, if all versions
of the same library are counted as a single library. (If we count each version as a separate
library, the total is 606.) The usage data was collected from 465 non-open source projects. All
of these numbers are significantly up from last year.

Figure 16: Fraction of open source components vs. closed source in referenced external libraries
[Chart: percentage of projects whose referenced external libraries fall into each open source fraction bucket (0%, 1%–24%, 25%–49%, 50%–74%, 75%–100%); notably, 17% of projects referenced only proprietary external libraries.]

Figure 17: Severity distribution of CVEs reported in external components
Critical: 22%
Severe: 69%
Moderate: 9%

Reliance on open source components


Last year, we observed that 79% of the projects we scanned used at least one open source component. This year, that has risen to 83%, although 17% of scanned applications referenced only proprietary external libraries, as shown in Figure 16.

As shown in Figure 17, 22% of the issues reported were critical in nature. Most others were severe; only 9% were of moderate severity. Most organizations have a policy of no release with medium or above severity, which means application owners should maintain a rigorous patch management process and hold external dependency owners accountable for releasing patches in a timely fashion.

Commons-httpclient is the most commonly referenced library in scanned applications, as shown in Figure 18. It has two reported CVEs of severe or medium severity.

Figure 18: Popular dependencies (percentage of projects)

commons-httpclient 19%
commons-fileupload 18%
xalan 12%
axis 12%
spring-web 11%
spring-webmvc 10%
xercesImpl 10%
standard 9%
spring-core 9%
struts 9%

Input Validation and Representation issues continue to be the most reported kingdom, as shown in Figure 19, while Code Quality has again failed to gain attention from the crowdsourced security community this year. Kingdom standings remain the same as last year.

Figure 19: CVE distribution by kingdom (percentage of CVE count)

Input Validation and Representation 68%
Security Features 20%
Time and State 4%
Encapsulation 3%
Environment 2%
API Abuse 2%
Errors 1%

XML external entity injection (XXE) remains the top reported issue again this year, as shown in Figure 20, albeit sharing first place with cross-site scripting; the various types of cross-site scripting issues are counted together because CVE disclosures do not always provide enough information to determine their type (e.g., reflected, persistent, or DOM-based).

Figure 20: The 10 flaws most commonly seen in our scans, by CVE (percentage of CVE count)

XML External Entity Injection 11%
Cross-Site Scripting 11%
Denial of Service 10%
Access Control: Authorization Bypass 5%
Directory Traversal 4%
OGNL Expression Injection: Struts 2 4%
Header Manipulation 3%
Insecure SSL: Server Identity Verification Disabled 3%
Access Control: Missing Authorization Check 3%
Access Control: SecurityManager Bypass 2%
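
Given how often XXE tops these lists, a brief sketch of the usual hardening for Java XML parsing may be useful (assuming a JAXP parser backed by Xerces; the feature URIs below are the commonly documented ones, and other parsers may differ):

import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class XxeHardeningExample {

    static DocumentBuilder newHardenedBuilder() throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Disallow DOCTYPE declarations entirely; this blocks most XXE payloads.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and suspenders: disable external general and parameter entities.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        // Disable external DTD loading (JAXP 1.5+; older parsers may not support this).
        dbf.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
        // Avoid XInclude processing and entity expansion as well.
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf.newDocumentBuilder();
    }
}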

Although XXE injection leads the most-reported issues, it is Denial of Service and Insecure SSL: Server Identity Verification Disabled that affect the most projects in our dataset, as shown in Figure 21. While making up only 3% of total CVEs reported, Insecure SSL: Server Identity Verification Disabled issues claim a larger share of projects (27%) than other top issues.

Figure 21: CVE categories affecting applications scanned (percentage of projects)

Denial of Service 29%
Insecure SSL: Server Identity Verification Disabled 27%
XML External Entity Injection 25%
Cross-Site Scripting 19%
XSLT Injection 14%
Directory Traversal 14%
Dynamic Code Evaluation: Unsafe Deserialization 12%
Web Server Misconfiguration: Server Error Message 12%
Often Misused: File Upload 12%
Access Control: Authorization Bypass 11%
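
For reference, the coding pattern that typically earns the Insecure SSL: Server Identity Verification Disabled finding, whether it appears in application code or in a bundled library, looks like the sketch below (illustrative code, not taken from any specific CVE): a trust manager and hostname verifier that accept everything.

import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class InsecureSslExample {

    // Anti-pattern: certificate chain and hostname checks are disabled, so any
    // man-in-the-middle can present any certificate and be accepted.
    static void disableServerIdentityChecks() throws Exception {
        TrustManager[] trustAll = new TrustManager[] {
            new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] chain, String authType) { }
                public void checkServerTrusted(X509Certificate[] chain, String authType) { }
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            }
        };
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustAll, new SecureRandom());
        HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());
        HttpsURLConnection.setDefaultHostnameVerifier((hostname, session) -> true);
    }
    // The fix is usually to delete code like this and rely on the platform's
    // default trust manager and hostname verification, or on a pinned trust store.
}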

At the library level, XML external entity injection (XXE) and denial of service are likewise the most common categories, as shown in Figure 22.

Figure 22: Top 10 common categories in external libraries (percentage of affected libraries)

XML External Entity Injection 22%
Denial of Service 22%
Cross-Site Scripting 17%
Insecure SSL: Server Identity Verification Disabled 9%
Access Control: SecurityManager Bypass 8%
Access Control: Authorization Bypass 8%
Directory Traversal 5%
Dynamic Code Evaluation: Code Injection 5%
Header Manipulation 5%
Access Control: Missing Authorization Check 4%

The severity distribution across the top 10 categories was similar to last year’s results. While
most Input Validation and Representation vulnerabilities can lead to serious consequences,
when we take into consideration the context in which the vulnerability is reported, we can
reduce the maximum possible impact of the issue for that CVE. This is evident from looking at
severity distribution in the top 10 categories in the analyzed dataset as shown in Figure 23.
Eight of the reported XXE injection issues were found to be critical, while none of the cross-site scripting issues reached the critical level; instead, OGNL Expression Injection: Struts 2, which was ranked fifth, has more CVEs that were rated critical.

Figure 23: Distribution of severity of the reported CVEs in top categories (counts shown as critical, severe, moderate where the split is recoverable)

XML External Entity Injection: 8 critical, 22 severe, 1 moderate
Cross-Site Scripting: 26 severe, 4 moderate
Denial of Service: 5 critical, 19 severe, 1 moderate
Access Control: Authorization Bypass: 3 critical, 8 severe, 1 moderate
OGNL Expression Injection: Struts 2: 8 critical, 3 severe, 1 moderate
Directory Traversal: 10, 1
Header Manipulation: 8
Insecure SSL: Server Identity Verification Disabled: 7
Access Control: SecurityManager Bypass: 6
Cross-Site Request Forgery: 6



Section VII: Remediation


While different technologies may assist with automated detection of vulnerabilities, remediating
them in 2016 was still a manual process. This leads to interesting patterns in the data based on
an organization’s SDLC.

The data under study includes issues that were both detected and fixed within the same
yearlong period (October 30, 2015, to October 30, 2016). All the issues represented in this
dataset were triaged and closed. The closed issues may or may not have been remediated; this
distinction is considered in the discussion below. Given these constraints, this dataset is a
subset of the dataset considered in previous sections.

70% of vulnerabilities were remediated

Number of vulnerabilities fixed
Figure 24 shows the percentage of issues in applications that were fixed across all kingdoms.

Overall, more than 70% of the issues under study were remediated. This includes the 55% of
issues that were confirmed to be fixed; the remainder were claimed to be fixed but had yet to
be verified by a subsequent scan. This number is slightly lower than our initial expectation.
Looking deeper into the issues that were not fixed, the top reasons were: the issue is to be fixed
in a different module by a different team, the issue is to be remediated using a compensating
control, or the issue is deferred to a future release. Hence, 70% represents the share of issues
that were fixed directly in the application’s code or configuration. Setting aside the issues
addressed by means other than code changes, almost 85% of all triaged issues in our dataset
were addressed.

Figure 24: Application issues fixed across all kingdoms. The chart breaks out the percentage of issues fixed per kingdom by Critical, High, Medium, Low, and Total for API Abuse, Code Quality, Encapsulation, Environment, Errors, Input Validation and Representation, Security Features, and Time and State; total fix rates per kingdom ranged from 51% to 85%.



Input Validation and Representation and Code Quality were the kingdoms with the largest
number of remediated issues. Among input validation issues, the top remediated categories
include cross-site scripting: reflected, log forging, path manipulation, header manipulation,
and open redirect. Cross-site scripting: reflected appears at the top of both the detected and
the remediated issues, and 87% of the XSS issues in our sample set were remediated. From this
it can be inferred that while developers continue to fail to validate reflected input, fixing these
issues is a top priority.
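As one illustration of what a typical reflected XSS fix looks like, the following hypothetical servlet sketch applies contextual output encoding, here via the OWASP Java Encoder. The servlet, parameter name, and markup are assumptions for illustration, not remediations observed in the dataset; the sketch assumes the javax.servlet API and the org.owasp.encoder library on the classpath.

```java
import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.owasp.encoder.Encode;

public class SearchServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        String query = request.getParameter("q");
        if (query == null) {
            query = "";
        }

        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();

        // Vulnerable pattern (reflected XSS): echoing the parameter verbatim.
        // out.println("<p>Results for " + query + "</p>");

        // Remediation: encode the reflected value for the HTML context in which
        // it is rendered before writing it to the response.
        out.println("<p>Results for " + Encode.forHtml(query) + "</p>");
    }
}
```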

In the Security Features kingdom, privacy violation was the most frequently fixed issue, yet only
52% of all privacy violation issues were fixed. Some instances remained unfixed because it was
left to the application users’ discretion whether to categorize them as violations. Similarly, only
43% of insecure transport issues were fixed, possibly because the scans did not represent the
production environment; the expectation is that these issues would not exist in the actual
deployment.
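Privacy Violation findings typically involve sensitive values reaching logs or other persistent sinks. The hypothetical sketch below shows one common remediation, masking the value before it is logged; the class, method names, and card-number example are illustrative assumptions rather than fixes observed in the dataset.

```java
import java.util.logging.Logger;

public class PaymentAuditLogger {

    private static final Logger LOG = Logger.getLogger(PaymentAuditLogger.class.getName());

    // Privacy Violation pattern: writing the full card number to a log, e.g.
    // LOG.info("Charging card " + cardNumber);

    // A common remediation is to mask the sensitive value before it reaches any
    // log, keeping only enough detail (the last four digits) for support purposes.
    static String maskCardNumber(String cardNumber) {
        if (cardNumber == null || cardNumber.length() < 4) {
            return "****";
        }
        String lastFour = cardNumber.substring(cardNumber.length() - 4);
        return "**** **** **** " + lastFour;
    }

    public static void logCharge(String cardNumber, long amountCents) {
        LOG.info("Charging card " + maskCardNumber(cardNumber) + " for " + amountCents + " cents");
    }
}
```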

While many instances of deferring remediation actions to a future date still exist, the majority of
the issues seem to have been handled quickly.

Remediation process in light of different development models


Figure 25 depicts a typical workflow from an initial scan being requested to the final fixes and
remediation scan. This process was detailed in last year’s HPE Cyber Risk Report 2016
(Figure 67, page 75). While the process largely assumes traditional SDLCs, we wanted to
understand the effects of DevOps in this context.

Figure 26 is a generic representation of incorporating a security service into a DevOps lifecycle.

Figure 25: Traditional secure development lifecycle. An iterative workflow in which the user requests a scan, the scan and manual analysis are performed, results are audited by the security team and published, and the application team then reviews results, fixes issues (Dev team), validates fixes (QA team), and batches/submits the next scan.

Figure 26: Secure development lifecycle in DevOps. An iterative workflow in which the user requests a scan, the scan and manual analysis are performed, results are audited and published, and issues are queued for review; the application team then reviews results (Sec/Dev team), triages and fixes issues (Dev team), validates fixes (QA team), and relies on CI/CD automated submission of subsequent scans, with releases going to the application stakeholder.

One of the biggest changes in the new model is the way issues are reviewed. Due to the faster
builds and deployments, it is possible to queue issues from multiple builds for a batch review
at a given time. This may happen when organizations use continuous integration/continuous
deployment (CI/CD) environments in their build infrastructure. While a security team may usually
review the scan results, some organizations may have a developer with security knowledge on the
team. This allows the security team to be available for expert consult, while the development team
may be relieved of any external dependencies.

For certain apps, the nature of the DevOps culture may force the time available to triage, fix,
and validate issues to be shorter than when the app is tested, deployed, and maintained in a
traditional IT environment. In other cases, the DevOps frequency might allow security scans to
happen on a regular cadence by scheduling automated security scans. In either case, all issues
may or may not be fixed in a timely manner. Depending upon the nature of the software, security
assurance steps need to be timed and applied appropriately in any development model, and
combined with a suitable security policy that defines acceptable risk criteria and the frequency
of activities. Also, depending on the risk acceptance criteria, fixes for issues in non-critical
applications may be released before their remediation scan results are reviewed. This implies
that security issues can be tracked and remediated in parallel with the application’s releases,
quite the opposite of a model where security assurance steps are performed only at the end of
a release. Critical issues may still block an upcoming release, although this is expected to be an
exception rather than a frequent event as development teams mature with respect to secure
coding. However, if blockers show up too often towards the end of a release, it could indicate a
need for developer training or a higher cadence of security testing.

Scan results
The scanning pattern for static and dynamic scans varies due to the nature of the technologies and
the type of issues they target. Based on our sample set, dynamic scans were executed at a median
interval of once every 20 days, which may suggest that applications are submitted for dynamic
analysis once every sprint. Static scans, on the other hand, were executed more frequently, at a
median interval of once every four days. This is in line with expectations for a secure SDLC,
because static scans can help identify issues earlier and shorten the time to results.

In order to study the remediation patterns among static scans, we compiled a sample of 167
applications across 58 customers. Each application contains an average of 445,259 scanned lines
of code and a median of 137,134. Figure 27 shows that a majority of the issues were fixed within
the first 30 days of detection. Within this range, we observed that more than 80% of the issues
were fixed within the initial 15 days. This shows a very quick turnaround time for fixes, and may
indicate that organizations applying application security testing within DevOps-style processes
demand quicker fixes and releases.

The graph also shows that most of the mediums and lows are fixed before the critical and high
issues. By splitting the range further, it can be seen that almost all mediums are addressed within
the first 15 days, while it takes almost two months to address the critical and high issues. This
could imply that certain issues with higher severity are more complicated to fix, thus taking longer
than most issues. For example, a majority of the code quality issues are fixed in the first range,
while more input validation and representation issues are carried over to the second range.

In order to interpret the data in the context of a given organization, the ranges can be
converted to a number of scans. Because we observed a median of four days between static
scans, the first range could represent one to seven scans, the next range eight to 15 scans, and
so on. On this basis, almost all issues were remediated within 22 scans (90 days).

Figure 27: Application remediation patterns based on static analysis scans (percentage of issues remediated by number of days, broken out by Critical, High, Medium, Low, and Total). Overall, 87.09% of issues were remediated within 0-30 days, 12.39% within 31-60 days, 0.25% within 61-90 days, 0.00% within 91-120 days, and 0.28% within 121-150 days.



Section VIII: Conclusions


In this report, we’ve seen that attackers continue to use well-established techniques to infiltrate
target enterprises, with little change in the top vulnerabilities from year to year. The fact that
the same vulnerabilities live on to plague software year after year indicates that developers
keep making the same mistakes. Secure development practices continue to detect these
mistakes; it is the blind spots within those practices that place additional risk on enterprises.
By not fixing security flaws during development, an enterprise is not only at risk of exploitation,
but its remediation effort becomes far more costly and time consuming than if the flaw had
been found and removed earlier in the lifecycle.

As the use of open source becomes more prevalent, enterprises should be aware that third-
party code may not be secure and that reported issues may not be quickly remediated. If you
are waiting for a zero-day discovery and a patch release for an open source issue, you may be
exposed for longer than is acceptable: an attacker may know about the vulnerability for months
before application owners have a chance to fix it or prevent it from compromising their
environments. Software development organizations must look holistically at vulnerability
prevention, detection, protection, and remediation in their in-house code as well as in their
dependencies. The most efficient and effective approach is to help developers detect
vulnerabilities early.
This is the first HPE Risk Report focused solely on Application Security. It is
written to help customers better manage security risk in their software no
matter who wrote it.

Authors:
Cindy Blake
Alexander M. Hoole
Alvaro Munoz
Sasi Siddharth Muthurajan
Yekaterina Tsipenyuk O’Neil
Nidhi Shah
Jason Schmitt
Ronny Tey

Contributors:
Sonatype Inc.

Learn more at
hpe.com/software/fortify

Sign up for updates

© Copyright 2016–2017 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change
without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Java is a registered trademark of Oracle.
3509ENW, August 2017
