
Communications

in Computer and Information Science

72

Carlos Serrão · Vicente Aguilera Díaz · Fabio Cerullo (Eds.)

Web Application
Security
Iberic Web Application Security Conference
IBWAS 2009
Madrid, Spain, December 10-11, 2009
Revised Selected Papers


Volume Editors
Carlos Serrão
ISCTE-IUL Lisbon University Institute
OWASP Portugal Ed. ISCTE
Lisboa, Portugal
E-mail: carlos.serrao@iscte.pt
Vicente Aguilera Díaz
Internet Security Auditors
OWASP Spain
Barcelona, Spain
E-mail: vicente.aguilera@owasp.org
Fabio Cerullo
OWASP Ireland
OWASP Global Education Committee
Rathborne Village, Ashtown, Dublin, Ireland
E-mail: fcerullo@owasp.org

Library of Congress Control Number: 2010936707


CR Subject Classification (1998): C.2, K.6.5, D.4.6, E.3, H.4, J.1
ISSN 1865-0929
ISBN-10 3-642-16119-7 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-16119-3 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer. Violations are liable
to prosecution under the German Copyright Law.
springer.com
© Springer-Verlag Berlin Heidelberg 2010
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
06/3180

Preface

IBWAS 2009, the Iberic Conference on Web Applications Security, was the first
international conference organized jointly by the OWASP Portuguese and Spanish
chapters to bring together the international Web application security academic and
industry communities to present and discuss the major aspects of Web application
security.
There is currently a change in the information systems development paradigm. The
emergence of Web 2.0 technologies has led to the extensive deployment and use of
Web-based applications and Web services as a way to develop new and flexible
information systems. Such systems are easy to develop, deploy and maintain, and they
offer impressive features to users, resulting in their current wide use. The social
features of these technologies create the critical-mass effects that lead millions of
users to share their personal information and content over large Web-based interactive
platforms. Corporations, businesses and governments all over the world are also
developing and deploying more and more applications to interact with their businesses,
customers, suppliers and citizens, enabling stronger and tighter relations with all of
them. Moreover, legacy non-Web systems are being ported to this new, intrinsically
connected environment.
IBWAS 2009 brought together application security experts, researchers, educators
and practitioners from industry, academia and international communities such as
OWASP, in order to discuss open problems and new solutions in application security.
In the context of this track, academic researchers were able to combine interesting
results with the experience of practitioners and software engineers.
The conference, held at the Escuela Universitaria de Ingeniería Técnica de
Telecomunicación of the Universidad Politécnica de Madrid (EUITT/UPM), was organized
for the very first time and represented a step forward in the OWASP mission and
organization. During the two days of the conference, more than 50 attendees enjoyed
different types of sessions organized around different topics. Two renowned keynote
speakers, diverse invited speakers and several accepted communications were presented
and discussed. During these two days, the conference agenda was divided into two major
tracks, industry and research sessions, organized according to the following topics:

- Secure application development
- Security of service-oriented architectures
- Threat modelling of Web applications
- Cloud computing security
- Web application vulnerabilities and analysis
- Countermeasures for Web application vulnerabilities
- Secure coding techniques
- Platform or language security features that help secure Web applications
- Secure database usage in Web applications
- Access control in Web applications
- Web services security
- Browser security
- Privacy in Web applications
- Standards, certifications and security evaluation criteria for Web applications
- Attacks and vulnerability exploitation

On the final day of the conference, a panel discussion was held around a specific
topic: "Web Application Security: What Should Governments Do in 2010?". From this
discussion panel a set of conclusions was reached and some specific recommendations were produced:
1. Challenge governments to work with organizations such as OWASP to increase the transparency of Web application security, particularly with respect
to financial, health and all other systems where data privacy and confidentiality requirements are fundamental.
2. OWASP will seek participation with governments around the globe to develop recommendations for the incorporation of specific application security
requirements and the development of suitable certification frameworks within
the government software acquisition processes.
3. Offer OWASP assistance to clarify and modernize computer security laws,
allowing the government, citizens and organizations to make informed decisions about security.
4. Ask governments to encourage companies to adopt application security standards that, where followed, will help protect us all from security breaches,
which might expose confidential information, enable fraudulent transactions
and incur legal liability.
5. Offer to work with local and national governments to establish application
security dashboards providing visibility into spending and support for application security.
Although organized together by the OWASP Portugal and Spain chapters, IBWAS
2009 was a truly international event and welcomed Web application security experts
from all over the world, supported by the OWASP open and distributed community.
We, as organizers of the IBWAS 2009 conference, would like to thank the different
authors who submitted their quality papers to the conference, and the members of the
Programme Committee for their efforts in reviewing the multiple contributions that we
received. We would also like to thank the amazing keynote and panel speakers for
their collaboration in making IBWAS 2009 a success.
Finally, we would like to thank the EUITT/UPM for hosting the event and for all their
support.
December 2009

Carlos Serrão
Vicente Aguilera Díaz
Fabio Cerullo

Organization

Programme Committee
Chairs

Aguilera Díaz V., Internet Security Auditors, OWASP Spain, Spain
Cerullo F., OWASP Ireland, Ireland
Serrão C., ISCTE-IUL Instituto Universitário de Lisboa,
OWASP Portugal, Portugal

Secretary

Cerullo F., OWASP Ireland, Ireland

Members

Agudo I., Universidad de Málaga, Spain
Chiariglione L., CEDEO, Italy
Correia M., Universidade de Lisboa, Portugal
Costa C., Universidade de Aveiro, Portugal
Cruz R., Instituto Superior Técnico, Portugal
Delgado J., Universitat Politècnica de Catalunya, Spain
Dias M., Microsoft, Portugal
Elias W., OWASP Brasil, Brazil
Ferreira J., Universidade de Lisboa, Portugal
Filipe V., Universidade de Trás-os-Montes e Alto Douro,
Portugal
Hernández-Goya C., Universidad de La Laguna, Spain
Hernando J., Universitat Politècnica de Catalunya, Spain
Hinojosa K., New York University, USA
Huang T., Peking University, China
Kudumakis P., Queen Mary University of London, UK
Lemes L., Unisinos, Brazil
Lopes S., Universidade do Minho, Portugal
Marañón G., Consejo Superior de Investigaciones Científicas,
Spain
Marinheiro R., ISCTE-IUL Instituto Universitário de Lisboa,
Portugal
Marques J., Instituto Politécnico de Castelo Branco, Portugal
Metrôlho J., Instituto Politécnico de Castelo Branco, Portugal
Muro J., Universidad Politécnica de Madrid, Spain
Neves E., OWASP Brasil, Brazil
Neves N., Universidade de Lisboa, Portugal
Oliveira J., Universidade de Aveiro, Portugal
Radu F., Universitat Oberta de Catalunya, Spain
Ribeiro C., Instituto Superior Técnico, Portugal
Roman R., Universidad de Málaga, Spain
Saeta J., Barcelona Digital, Spain


Santos O., Instituto Politécnico de Castelo Branco, Portugal
Santos V., Microsoft, Portugal
Sequeira M., ISCTE-IUL Instituto Universitário de Lisboa,
Portugal
Sousa P., Universidade de Lisboa, Portugal
Torres V., Universitat Pompeu Fabra, Spain
Vergara J., Universidad Autónoma de Madrid, Spain
Vieira M., Universidade de Coimbra, Portugal
Villagrá V., Universidad Politécnica de Madrid, Spain
Yagüe M., Universidad de Málaga, Spain
Zúquete A., Universidade de Aveiro, Portugal

Table of Contents

Abstracts

The OWASP Logging Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Marc Chisinevski

SQL Injection - How Far Does the Rabbit Hole Go? . . . . . . . . . . . . . . . . . . 3
Justin Clarke

OWASP O2 Platform - Open Platform for Automating Application
Security Knowledge and Workflows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Dinis Cruz

The Business of Rogueware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Luis Corrons

Microsoft Infosec Team: Security Tools Roadmap . . . . . . . . . . . . . . . . . . . 9
Simon Roses

Empirical Software Security Assurance . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Dave Harper

Assessing and Exploiting Web Applications with the Open-Source
Samurai Web Testing Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Raul Siles

Authentication: Choosing a Method That Fits . . . . . . . . . . . . . . . . . . . . . . 15
Miguel Almeida

Cloud Computing: Benefits, Risks and Recommendations for
Information Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Daniele Catteddu

OWASP TOP 10 2009 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Fabio E. Cerullo

Deploying Secure Web Applications with OWASP Resources . . . . . . . . . . 21
Fabio E. Cerullo

Threat Risk Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Martin Knobloch

Protection of Applications at the Enterprise in the Real World: From
Audits to Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Javier Fernández-Sanguino

Papers

A Semantic Web Approach to Share Alerts among Security Information
Management Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Jorge E. López de Vergara, Víctor A. Villagrá, Pilar Holgado,
Elena de Frutos, and Iván Sanz

WASAT - A New Web Authorization Security Analysis Tool . . . . . . . . . . . 39
Carmen Torrano-Gimenez, Alejandro Perez-Villegas, and
Gonzalo Alvarez

Connection String Parameter Pollution Attacks . . . . . . . . . . . . . . . . . . . . . 51
Chema Alonso, Manuel Fernandez, Alejandro Martín, and
Antonio Guzmán

Web Applications Security Assessment in the Portuguese World Wide
Web Panorama . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Nuno Teodoro and Carlos Serrão

Building Web Application Firewalls in High Availability
Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Juan Galiana Lara and Àngel Puigventós Gracia

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

The OWASP Logging Project


Marc Chisinevski
Digiplug, France
marc.chisinevski@gmail.com

The presentation explained current shortcomings of Security Information Management
Systems; a new solution and a working prototype were presented.
In current Security Information Management Systems it is difficult to obtain
relevant views of consolidated data (for instance, alarms concerning different clients
and different data centres over different periods of time), difficult to calculate
essential management indicators (such as the Annualized Loss Expectancy of assets and
the cost-effectiveness of proposed safeguards), and difficult to compare with
historical data; there are also severe performance issues.
The proposed solution to these problems is based on the use of a multidimensional
database, which presents several advantages, such as presenting risk assessment and
safeguard cost-effectiveness scenarios to the CFO/CEO and presenting data through
different useful views (Client, Asset, Data Centre, Time, Geography). The Client view
is particularly important for Software-as-a-Service and Cloud providers in order to
assess conformity with Service Level Agreements and legal obligations for each
customer. The Asset view is essential for management, allowing them to assess the
risks for business processes and information.
To achieve this, the raw data acquired by the Security Information Management
system (events on servers) needs to be correlated and consolidated. The following
facts need to be taken into account when assessing the risk: an asset has an intrinsic
value, and an asset's value increases if other assets (information, business processes,
servers) depend on it. With this approach, risk indicators are easy to calculate and
analyze, and it is easier to clearly define aggregation levels, such as raw data
(Event, Server) and consolidated data (Alarm (correlated events), Asset, Client, Data
Centre, Time, Geography). Reporting queries no longer run on the Security Information
Management system's production database, and it is possible to analyze the data
(drill-down, roll-up, slice) without writing SQL and to integrate data from different
sources.
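The consolidation from raw events to aggregated views can be sketched as a simple roll-up over event tuples. The following Python fragment is only an illustrative sketch: the event fields, client and data-centre names, and severity values are invented for the example and are not taken from the presented prototype.

```python
from collections import defaultdict

# Hypothetical raw events from a SIM: (client, data_centre, asset, severity).
events = [
    ("acme",   "dc-madrid", "web01", 3),
    ("acme",   "dc-madrid", "db01",  5),
    ("acme",   "dc-lisbon", "web02", 2),
    ("globex", "dc-madrid", "web03", 4),
]

def roll_up(events, dims):
    """Consolidate events along the chosen dimensions
    (0 = client, 1 = data centre, 2 = asset): for each key,
    accumulate the event count and the total severity."""
    cube = defaultdict(lambda: [0, 0])
    for ev in events:
        key = tuple(ev[d] for d in dims)
        cube[key][0] += 1       # number of correlated events
        cube[key][1] += ev[3]   # accumulated severity
    return dict(cube)

by_client = roll_up(events, (0,))       # Client view
by_client_dc = roll_up(events, (0, 1))  # drill down to data centre
```

Drilling down simply means grouping by more dimensions; a real multidimensional database precomputes and indexes these aggregates instead of rescanning the raw events on every reporting query.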

C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, p. 1, 2010.
© Springer-Verlag Berlin Heidelberg 2010

SQL Injection - How Far Does the Rabbit Hole Go?


Justin Clarke
Gotham Digital Science, United Kingdom
justin.clarke@owasp.org

SQL Injection has been around for over 10 years, and yet to this day it is still not
truly understood by many security professionals and developers. With the recent mass
attacks against sites across the world, and well-publicised data breaches with SQL
Injection as a component, it has again come to the fore of vulnerabilities under the
spotlight; however, many consider it to be only a data access issue, or parameterized
queries to be a panacea. This talk explores the deeper, darker areas of SQL Injection:
hybrid attacks, SQL Injection worms, and exploiting database functionality. It also
explores what kinds of things we can expect in the future.


OWASP O2 Platform - Open Platform for Automating Application Security Knowledge and Workflows

Dinis Cruz
OWASP, United Kingdom
dinis.cruz@owasp.org

In this talk Dinis Cruz will show the OWASP O2 Platform, an open source toolkit
specifically designed for developers and security consultants to perform quick,
effective and thorough 'source-code-driven' application security reviews. The OWASP
O2 Platform (http://www.owasp.org/index.php/OWASP_O2_Platform) consumes results from
the scanning engines from Ounce Labs, Microsoft's CAT.NET tool, FindBugs, CodeCrawler
and AppScan DE, and also provides limited support for Fortify and OWASP WebScarab
dumps. In the past, there has been a very healthy skepticism about the ability of
source code analysis engines to find common vulnerabilities in real-world
applications. This presentation will show that, with some creative and powerful
tools, it IS possible to use O2 to discover those issues. It will also show O2's
advanced support for Struts and Spring MVC.


The Business of Rogueware


Luis Corrons
Panda Security, Spain
luis.corrons@pandasecurity.com

The underground cybercrime economy has grown significantly in size and complexity
over the past couple of years due to a variety of factors, including the rise of
social media tools, the global economic slowdown, and an increase in the total number
of Internet users. For the past three years, PandaLabs has monitored the ever-evolving
cybercrime economy to discover its tactics, tools, participants, motivations and
victims, to understand the full extent of criminal activities and ultimately bring an
end to the offenses. In October 2008, PandaLabs published findings from a
comprehensive study of the rogueware economy, which concluded that the cybercriminals
behind fake antivirus software applications were generating upwards of $15 million per
month. In July 2009, it released a follow-on study showing that monthly earnings had
more than doubled to approximately $34 million through rogueware attacks distributed
via Facebook, MySpace, Twitter, Digg and targeted blackhat SEO. This session will
reveal the latest results from PandaLabs' ongoing study of the cybercrime economy,
illustrating the latest malware strategies used by criminals and examining the changes
in their attack strategies over time. The goal of this presentation is to raise
awareness of this growing underground economy.


Microsoft Infosec Team: Security Tools Roadmap


Simon Roses
Microsoft, United Kingdom
simonros@microsoft.com

Microsoft IT's Information Security (InfoSec) group is responsible for information
security risk management at Microsoft. We concentrate on the data protection of
Microsoft assets, business and enterprise. Our mission is to enable secure and
reliable business for Microsoft and its customers. We are an experienced group of IT
professionals including architects, developers, program managers and managers.
This talk will present different technologies developed by InfoSec to protect
Microsoft and released for free, such as CAT.NET, SPIDER, SDR, TAM and SRE, and how
they fit into the SDL (Security Development Lifecycle).


Empirical Software Security Assurance


Dave Harper
Fortify Software, USA
dharper@fortify.com

By now everyone knows that security must be built in to software; it cannot be bolted
on. For more than a decade, scientists, visionaries, and pundits have put forth a multitude of techniques and methodologies for building secure software, but there has been
little to recommend one approach over another or to define the boundary between
ideas that merely look good on paper and ideas that actually get results. The alchemists and wizards have put on a good show, but it's time to look at the real empirical
evidence.
This talk examines software security assurance as it is practiced today. We will
discuss popular methodologies and then, based on in-depth interviews with leading
enterprises such as Adobe, EMC, Google, Microsoft, QUALCOMM, Wells Fargo, and the
Depository Trust & Clearing Corporation (DTCC), present a set of benchmarks for
developing and growing an enterprise-wide software security initiative, including but
not limited to integration into the software development lifecycle (SDLC). While all
initiatives are unique, we find that the leaders share a tremendous amount of common
ground and wrestle with many of the same problems. Their lessons can be applied to
build a new effort from scratch or to expand the reach of existing security
capabilities.


Assessing and Exploiting Web Applications with the Open-Source Samurai Web Testing Framework

Raul Siles
Taddong, Spain
raul@raulsiles.com

The Samurai Web Testing Framework (WTF) is an open-source LiveCD based on Ubuntu and
focused on web application security testing. It includes an extensive collection of
pre-installed and pre-configured top penetration testing and security analysis tools,
making it the perfect environment for assessing and exploiting web applications. The
tool categorization guides the analyst through the web-app penetration testing
methodology, from reconnaissance to mapping, discovery and exploitation. The project
web page is http://sf.net/projects/samurai/.
Samurai WTF aims to become the weapon of choice for professional web app pen-testers,
offering a well-established environment that acts as a time saver, as it includes all
the required web application security tools pre-configured and ready to run.
This talk describes the actively developed Samurai WTF distribution and its tool set,
including the recently created Samurai WTF Firefox add-ons collection (to convert the
browser into the ultimate pen-testing tool), available at
https://addons.mozilla.org/en-US/firefox/collection/samurai, the advanced features
provided by the integration of multiple attack tools, plus the new tool update
capabilities. The recently added SVN update functionality provides frequent updates
for Samurai WTF and for the most actively developed security testing tools, and offers
an improved collaboration model between the Samurai WTF community members.
The talk ends with a live demonstration, on a target web application, of the advanced
attack techniques provided by the integration of tools like Sqlninja and Metasploit.
The combination of both tools offers the pen-tester the option to take full control of
a vulnerable web infrastructure, including the internal database servers.


Authentication: Choosing a Method That Fits


Miguel Almeida
Independent Security Consultant, Portugal
miguelalmeida@miguelalmeida.pt

Over the last five years, we in the security field have been witnessing an increase in
the number of attacks on (web) application users' credentials, and in the refinement
and sophistication these attacks have been gaining. There are currently several
methods and mechanisms to increase the strength of the authentication process for web
applications, both to improve user authentication and to improve transaction
authentication. As examples, one can think of adding one-time password tokens, digital
certificates, EMV cards, or even SMS one-time codes. However, none of these methods
comes for free, nor do they provide perfect security. One must also consider usability
penalties, mobility constraints, and, of course, the direct costs of the gadgets.
Moreover, there is evidence that not all kinds of attacks can be stopped by even the
most sophisticated of these methods. So, where do we stand? What should we choose?
What kind of gadgets should we use for our business-critical app, how much will they
increase the costs and reduce the risk, and, last but not least, what kind of attacks
will we be unable to stop anyway? This presentation will focus on ways to evaluate the
pros and cons of adding these improvements, given the current threats.


Cloud Computing: Benefits, Risks and Recommendations for Information Security

Daniele Catteddu
ENISA, Greece
Daniele.Catteddu@enisa.europa.eu

The presentation "Cloud Computing: Benefits, Risks and Recommendations for Information
Security" will cover some of the most relevant information security implications of
cloud computing from the technical, policy and legal perspectives.
Information security benefits and top risks will be outlined and, most importantly,
concrete recommendations on how to address the risks and maximise the benefits for
users will be given.


OWASP TOP 10 2009


Fabio E. Cerullo
OWASP Ireland, Ireland
fabio.e.cerullo@aib.ie

The primary aim of the OWASP Top 10 is to educate developers, designers, architects
and organizations about the consequences of the most important web application
security weaknesses. The Top 10 provides basic methods to protect against these
high-risk problem areas and provides guidance on where to go from here.
The Top 10 project is referenced by many standards, books, tools and organizations,
including MITRE, PCI DSS, DISA, FTC, and many more. The OWASP Top 10 was initially
released in 2003, minor updates were made in 2004 and 2007, and this 2010 release
follows. We encourage you to use the Top 10 to get your organization started with
application security.
Developers can learn from the mistakes of other organizations. Executives can start
thinking about how to manage the risk that software applications create in their
enterprise.
This significant update presents a more concise, risk-focused list of the Top 10 Most
Critical Web Application Security Risks. The OWASP Top 10 has always been about risk,
but this update makes that much clearer than previous editions, and provides
additional information on how to assess these risks for your applications.
For each Top 10 item, this release discusses the general likelihood and consequence
factors used to categorize the typical severity of the risk, and then presents
guidance on how to verify whether you have problems in this area, how to avoid them,
some example flaws in that area, and pointers to links with more information.


Deploying Secure Web Applications with OWASP Resources

Fabio E. Cerullo
OWASP Ireland, Ireland
fabio.e.cerullo@aib.ie

Secure applications do not just happen: they are the result of an organization
deciding that it will produce secure applications. OWASP does not wish to force a
particular approach or require an organization to comply with laws that do not affect
it, as every organization is different.
However, for a secure application, the following are required at a minimum:
- Organizational management which champions security
- A written information security policy properly derived from national standards
- A development methodology with adequate security checkpoints and activities
- Secure release and configuration management
Many of the tools, documentation and controls developed by OWASP are influenced by
requirements in international standards and control frameworks such as COBIT and ISO.
Furthermore, OWASP resources can be used by any type of organization, ranging from
universities to financial institutions, in order to develop, test and deploy secure
web applications. This presentation will introduce you to some of the most successful
projects, such as:
- OWASP Enterprise Security API, which can be used to mitigate the most common
flaws in web applications;
- OWASP ASVS, which is intended as a standard on how to verify the security of
web applications;
- OWASP Top 10, which helps to educate developers, designers, architects and
organizations about the consequences of the most important web application
security weaknesses;
- OWASP Development Guide, which shows how to architect and build a secure
application;
- OWASP Code Review Guide, which shows how to verify the security of an
application's source code;
- OWASP Testing Guide, which shows how to verify the security of your running
application.
Finally, as OWASP believes education is a key component in building secure
applications, some of the initiatives being carried out by the OWASP Global Education
Committee will be highlighted.


Threat Risk Modelling


Martin Knobloch
OWASP Netherlands, Netherlands
martin.knobloch@owasp.org

How secure must an application be? To take the appropriate measures, we have to
identify the risks first and think about the measures later. Threat risk modelling is
an essential process for secure web application development. It allows organizations
to determine the correct controls and to produce effective countermeasures within
budget. This presentation is about how to do threat risk modelling: what is needed to
start and where to go from there.


Protection of Applications at the Enterprise in the Real World: From Audits to Controls

Javier Fernández-Sanguino
Universidad Rey Juan Carlos, Spain
jfs@gsyc.escet.urjc.es

Securing application development is a challenge in the enterprise world, where
applications range from small in-house applications developed by a single department
to large applications developed by an outsourcing company in a project spanning
several years. In addition, applications that initially were not considered critical
suddenly become part of a critical process, and applications that were going to be
used in a small and limited internal environment suddenly get promoted and published
as a new service on the Internet.
To get a better feeling for what works and what does not work in the harsh world
outside, this talk will present examples of dos and don'ts coming from real-world
projects attempting to protect applications at different stages: from the introduction
of technical measures to prevent abuse of Internet-facing applications to
source-code-driven application security testing.


A Semantic Web Approach to Share Alerts among Security Information Management Systems

Jorge E. López de Vergara¹, Víctor A. Villagrá², Pilar Holgado¹,
Elena de Frutos², and Iván Sanz³

¹ Computer Science Department, Universidad Autónoma de Madrid,
Calle Francisco Tomás y Valiente, 11, 28049 Madrid, Spain
² Telematic Systems Engineering Department, Universidad Politécnica de Madrid,
Avenida Complutense, s/n, 28040 Madrid, Spain
³ Telefónica Investigación y Desarrollo,
Calle Emilio Vargas, 6, 28043 Madrid, Spain
jorge.lopez_vergara@uam.es, villagra@dit.upm.es,
mpilar.holgado@estudiante.uam.es, e_le_na@hotmail.com,
isahe@tid.es

Abstract. This paper presents a semantic web-based architecture to share alerts among
Security Information Management Systems (SIMS). Such an architecture is useful if two
or more SIMS from different domains need to know information about alerts happening in
the other domains, which is useful for an early response to network incidents. For
this, an ontology has been defined to describe the knowledge base of each SIMS that
contains the security alerts. These knowledge bases can be queried from other SIMS,
using standard semantic web protocols. Two modules have been implemented: one to
insert the new security alerts in the knowledge base, and another one to query such
knowledge bases. The performance of both modules has been evaluated, providing some
results.

Keywords: SIMS, Semantic Web, IDMEF, SPARQL, Jena, Joseki, RDF, OWL.

1 Introduction

Security is an important issue for Internet Service Providers (ISPs). They have to
keep their systems safe from external attacks to maintain the service levels they
provide to customers. Security threats are identified at routers, firewalls, intrusion
detection systems, etc., generating several alerts in different formats. To deal with
all these incidents, ISPs usually have a Security Information Management System (SIMS)
[1], which collects the event data from their network devices to manage and correlate
the information about any incident. A SIMS is useful to detect intrusions at a global
level, centralizing the alarms from several security devices.
A step forward in this type of system would be the distribution of alerts among SIMS
from different ISPs and different vendors for an early response to network incidents.
Thus, mechanisms to communicate security notifications and actions have to be
developed. These mechanisms will enable collaboration among SIMS to share information
about incoming attacks. For this, it is important to homogenise the information the
SIMS are going to share. A data model has to be defined to address several problems associated with representing intrusion detection alert data: alert information is inherently heterogeneous (some alerts are defined with very little information while others provide much more), and intrusion detection environments differ, so the same attack can be described with different information. Current solutions provide a common XML format to represent alerts, named IDMEF (Intrusion Detection Message Exchange Format) [2]. Although this format is intended for exchanging messages, it is not a good solution in a collaborative SIMS scenario, as each SIMS would flood the other SIMS with such messages. It would be better for a SIMS to ask other SIMS about certain alerts and later infer its own situation based on that information. However, IDMEF has not been defined to query for an alert set.
A way to solve this is to use ontologies [3], which have been defined precisely to share knowledge. Ontologies have been previously proposed to formally describe and detect complex network attacks [4, 5, 6]. In this paper we propose an ontology based on IDMEF, where the alerts are represented as instances of the Alert classes of that ontology. The use of an ontology language also improves the information definition, as restrictions can be specified beyond data types (for instance, cardinality). With this ontology, each SIMS can store a knowledge base of alerts and share it using semantic web interfaces. Then, other SIMS can ask about alerts by querying such knowledge bases through those interfaces. As a result, a SIMS is able to share its knowledge with SIMS in other domains. This knowledge could include policies, incidents, updates, etc. In a first phase, the sharing has been constrained to alert incidents.
The rest of the paper is structured as follows. The next section presents the architecture of collaborative SIMS based on knowledge sharing. Then, the IDMEF ontology is explained, showing the process followed in its definition, as well as how to query it. After this, an implementation of the system that receives IDMEF alerts and stores them in a knowledge base is described. Results obtained with the different modules are also provided. Finally, some conclusions and future work lines are given.

2 Semantic Collaborative SIMS Architecture


The architecture we propose to share information among SIMS is based on semantic web technologies, as shown in Fig. 1. The figure represents two SIMS, but it can be generalized to several of them. Each SIMS contains an alert knowledge base with instances of the IDMEF ontology, described in the next section. Each knowledge base can be queried by other SIMS through a semantic web interface that accepts queries about the ontology.
To implement the web service interfaces in this architecture, the Joseki server [7] has been used, based on the Jena libraries [8]. Joseki is an HTTP server that implements a query interface for SPARQL (SPARQL Protocol and RDF Query Language) [9]. Joseki provides a way to deal with RDF (Resource Description Framework) and OWL (Web Ontology Language) data in files and databases. The Jena libraries have also been used for both the instance generator and the query generator, using the SDB library [10] to store the ontology in a database backend. Section 4 provides a detailed explanation of how they have been implemented.
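From a client's point of view, querying a remote SIMS through such an interface amounts to sending a SPARQL query over HTTP. The sketch below illustrates this with Python's standard library; the endpoint URL is an invented placeholder, not part of the described system:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_sparql_request(endpoint, query):
    """Build an HTTP GET request carrying a SPARQL query, as accepted by
    SPARQL protocol endpoints such as Joseki."""
    url = endpoint + "?" + urlencode({"query": query})
    return Request(url, headers={"Accept": "application/sparql-results+xml"})

# Hypothetical endpoint of a remote SIMS semantic web interface:
req = build_sparql_request(
    "http://sims2.example.org/sparql",
    "SELECT ?alert WHERE { ?alert a <http://www.dit.upm.es/IdmefOntology.owl#Alert> }",
)
# Sending the request with urlopen(req) would return the matching alerts.
```

The actual transport details (result format, endpoint path) depend on the server configuration.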

[Fig. 1 shows two SIMS. In each one, an instance generator converts incoming IDMEF alerts into IDMEF instances stored in an alert knowledge base; a query generator sends SPARQL queries to the semantic web interface that exposes the other SIMS' alert knowledge base.]

Fig. 1. Semantic collaborative SIMS architecture

3 IDMEF Ontology

The IDMEF format provides a common language to generate alerts about suspicious events, which lets several systems collaborate in the detection of attacks or in the treatment of the stored alerts. Although IDMEF has some advantages (integration of several sources, use of a well-supported format), it also has drawbacks: heterogeneous data sources lead to several alerts for the same attack that do not contain the same information.
To solve the identified problems, we have defined an alert ontology based on the IDMEF structure. In this process it is worth remarking that IDMEF has been defined following a model of classes and properties, which makes the ontology definition easier, with a more or less direct mapping. The ontology has been defined using OWL [11], leveraging the advantages of the semantic web (distribution, querying, inferencing, etc.), as well as the results of [12]. Several class restrictions (cardinality, data types) have been defined by analyzing the IDMEF definition contained in [2].
The following conventions have been adopted to define the IDMEF ontology:

- Class names start with a capital letter and are the same as the IDMEF class names.
- Property names start with a lower-case letter and have the format domain_propertyName, where domain is the name of the class to which the property belongs, and propertyName is the name of the property.

The following mapping rules have also been applied:

- Each class in an IDMEF message maps to a class in the IDMEF ontology.
- Each attribute of an IDMEF class is mapped to a data-type property in the corresponding ontology class.
- Classes that are contained in another class are in general mapped to object-type properties. An exception are aggregated classes that contain text, which have been mapped to data-type properties.
- A subclass of an IDMEF class is also represented as a subclass in the ontology, inheriting all the properties of its parent class.
- When an IDMEF attribute cannot contain several values, it is mapped to a functional property.


- When an IDMEF attribute can only have some specific values, the ontology defines them as the allowed values.
- Numeric attributes are represented as numeric data-type properties, dates are represented as datetime data-type properties, and the rest as string data-type properties.
Following the rules above, the ontology has been defined. Fig. 2 shows a representation of the Alert class, its child classes (OverflowAlert, ToolAlert and CorrelationAlert), and other referenced classes (Classification, AdditionalData, Target, Source, Assessment, CreateTime, AnalyzerTime, DetectTime, Analyzer). This figure has been generated with the Protégé [13] ontology editor. The boxes represent the classes, and the arcs represent inheritance (in black, labelled isa) and aggregation (in blue, labelled with the property names) relationships. A UML (Unified Modelling Language) representation could also be provided, using the UML profile for OWL [14].
Our definition enables a mapping from IDMEF messages to IDMEF ontology instances. In this way, the information contained in each IDMEF message is translated to an instance of Alert, with instances of Target, Source, etc., as contained in the message. The ontology includes other additional classes, so any IDMEF message can be represented in the ontology.

With respect to a plain XML IDMEF message, the ontology provides several advantages. For instance, the information can be restricted as specified in the IDMEF definition [2]. Moreover, query languages such as SPARQL can be used to query all the information contained in the knowledge base, and the query is not limited to the scope of a concrete XML document, which would be the case with IDMEF messages.
To query the knowledge base, SPARQL has been chosen, given that it has recently been recommended by the W3C as the query language for RDF/RDFS and OWL [9]. Using this language, a query can be defined as follows:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>
SELECT ?alert ?id ?target_address
WHERE {
?alert rdf:type idmef:Alert ;
idmef:alert_messageid ?id ;
idmef:alert_target ?target .
?target idmef:target_node ?tnode .
?tnode idmef:node_address ?taddress .
?taddress idmef:address_address ?target_address
}
The query starts with PREFIX clauses, which define the namespaces used to identify the queried classes and properties. After this, the variables ?alert, ?id and ?target_address that meet a set of conditions are requested: the ?alert variable is of type Alert and has the properties alert_messageid and alert_target. Then, the alert_target property refers to an instance whose address value is identified with the variable ?target_address.

Fig. 2. IDMEF ontology definition


4 Implementation

The architecture proposed in Section 2 has been implemented. Apart from the components provided by existing semantic web implementations (mainly the Joseki server), we have implemented the module that stores the IDMEF alerts in the knowledge base (instance generator), as well as the module that queries the alerts of an external knowledge base (query generator). The subsections below present these implementations; some results are provided later in Section 5.
4.1 Instance Generator

A module has been developed to map the IDMEF messages to ontology instances. It has been developed in Java, taking advantage of the libraries that this language provides for parsing XML documents and ontologies. Fig. 3 shows the steps performed to generate and save instances in the knowledge base:

[Fig. 3 depicts four consecutive steps: open IDMEF message (file) → parse IDMEF message (XML) → create IDMEF ontology instances → save IDMEF ontology instances.]

Fig. 3. Steps to generate and store ontology instances

1. The first step is to open the IDMEF message, contained in a file.
2. Next, the IDMEF message, formatted in XML, is parsed. This generates a tree in memory representing the message, using the SAX Java API. To reduce parsing times, we allow a file to contain several messages. With this approach, we can continuously parse several alerts without needing to restart the process.
3. Then, reading the generated tree, the set of instances of the IDMEF ontology is generated, using the Jena library.
4. Once the instances have been generated, they are saved in a persistent storage, which can be either an OWL file or, preferably, a database.
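The multi-message optimization in step 2 can be sketched as follows. This is our own illustration with Python's stdlib incremental parser standing in for the Java SAX/Jena pipeline; the wrapper element and alert structure are invented:

```python
import io
import xml.etree.ElementTree as ET

def iter_alerts(stream):
    """Incrementally parse a stream holding several alert messages,
    yielding one message id per alert, so a single long-lived process
    can handle alerts continuously instead of restarting per message."""
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "Alert":
            yield elem.get("messageid")
            elem.clear()  # free memory of the alert already processed

# Invented stream with two alerts wrapped in a container element:
data = io.StringIO(
    "<IDMEF-Message>"
    "<Alert messageid='a1'/><Alert messageid='a2'/>"
    "</IDMEF-Message>"
)
print(list(iter_alerts(data)))  # ['a1', 'a2']
```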
The Jena libraries, developed at HP Labs, help when dealing with ontologies in Java applications. In our development we have used Jena version 2, which supports both the RDF and OWL languages, as well as a certain level of reasoning on the defined model. The Jena library enables the management of ontologies: adding, deleting or editing triples, storing the ontologies, and querying them. For this, Jena provides classes such as:

- Resource: anything that can be described in a model. A Literal is a kind of resource that represents a simple data type, usually a string.
- Property: a characteristic, attribute or relationship used to describe a resource.
- Statement: a resource joined with a property and an associated value, i.e., a triple.
- Model: a set of statements. Models include methods to:


  - Create models.
  - Read and write models.
  - Load models in memory.
  - Query a model: look for information inside the model.
  - Perform operations on models: union, intersection, difference.
Models can be stored in many ways, including OWL files, as well as representations of the ontology in a relational database. In the latter case, there are several storage possibilities, depending on the library used to represent the ontology in the database. In particular, SDB is a Jena library specifically designed to provide storage in SQL databases, both proprietary and open source. This storage can be done through the SDB API.
4.2 Query Generator

The knowledge base, where the alerts are stored, can be queried by other SIMS through the semantic web interface. For this, another module has been developed, which performs SPARQL queries against a Joseki server through HTTP. This server accesses the knowledge base and obtains the results of the query, which are then received by the query module.

To connect the query module to Joseki, it is necessary to use the ARQ library [15], which is a query engine for Jena. The query module can execute any SPARQL query. For the most common queries, we have implemented a program that builds the query depending on a series of parameters. For instance:
- All alerts depending on the time:
  - Alerts in the last week.
  - Alerts in the current day.
  - Alerts in a day.
  - Alerts in an interval of time.
- Alerts queried using other parameters:
  - Source IP address.
  - Target IP address.
  - Source port.
  - Target port.
  - Alert type.
  - Target of the attack.
  - Source of the attack.
  - Tools of the attack.
  - Overflow alert.
  - Analyzer.
  - Assessments of the attacks: impact, actions, etc.
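Such parameterized queries can be produced by simple template instantiation. The sketch below is our own illustration, not the actual Java module; it builds the time-interval query shown in Section 5, and the typed-literal syntax for the bounds is an assumption (the paper only writes time1 and time2):

```python
# Sketch of a parameterized SPARQL query builder for the time-interval
# case; the prefix URI matches the one used throughout the paper.
IDMEF_NS = "http://www.dit.upm.es/IdmefOntology.owl#"

TIME_INTERVAL_TEMPLATE = """\
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX idmef: <{ns}>
SELECT ?alert ?time
WHERE {{
  ?alert rdf:type idmef:Alert .
  ?alert idmef:alert_createTime ?createTime .
  ?createTime idmef:createTime_time ?time .
  FILTER (?time > "{t1}"^^<http://www.w3.org/2001/XMLSchema#dateTime>) .
  FILTER (?time < "{t2}"^^<http://www.w3.org/2001/XMLSchema#dateTime>)
}}"""

def alerts_in_interval(t1, t2):
    """Instantiate the template with concrete xsd:dateTime bounds."""
    return TIME_INTERVAL_TEMPLATE.format(ns=IDMEF_NS, t1=t1, t2=t2)

print(alerts_in_interval("2009-12-10T00:00:00", "2009-12-11T00:00:00"))
```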

5 Results

The implemented modules, presented above, have been tested to evaluate their performance. All the results have been obtained on a computer equipped with an Intel Core 2 Duo E8500 processor at 3.16 GHz with 6 MB of L2 cache and 2 GB of RAM. Previous tests with older computers provided worse results.


5.1 Instance Generator

To evaluate the generation of instances, the IDMEF messages available in [2] have been used. Table 1 shows the times measured in milliseconds.
Table 1. Time to generate instances of well-known IDMEF messages (times in ms)

IDMEF message        JDBC   SDB    SPARQL/Update
Assessment           1235   1040   n/a
Correlated Alert     1250   1035   640
Disallowed Service   1250   1050   640
Load Module          1220   1050   625
Load Module 2        1250   1035   640
Phf                  1220   1035   610
Ping of Death        1220   1035   625
Policy Violation     1265   1035   640
Scanning             1235   1035   610
Teardrop             1220   1035   610
These times are measured after the database has been created and the ontology model has been represented in the database. If the database and the model have to be created, there are two possibilities:

- Use of JDBC (Java Database Connectivity), with a time of around 1.9 s.
- Use of the SDB library, with a time of around 1.125 s, faster than the previous case.

Both the JDBC and SDB libraries facilitate the connection to databases containing ontologies from Java applications, independently of the operating system. These libraries are also compatible with different databases. In addition, SDB is a Jena component designed specifically to support SPARQL queries, and it provides storage in both proprietary and open source SQL databases.
Once the database has been created, there are three alternatives to insert the instances into the ontology database: JDBC, SDB and SPARQL/Update [16]. Regarding the last alternative, SPARQL/Update is an extension to SPARQL that lets a programmer define insert clauses, whereas JDBC and SDB insert data into the ontology by creating ontology data structures in memory that are later stored. From our experiments, the best measurements are obtained when SPARQL/Update is used to insert the instances: approximately 60% of the time needed when the SDB library is used, and 50% compared to plain JDBC. The Assessment message is an exception, because it contains characters that cannot be used in the SPARQL/Update sentence. In this case, the SDB library should be used instead.
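For illustration, the insertion of a minimal alert instance via SPARQL/Update could look as follows. This is a hand-written sketch with an invented instance name and message id; the actual update sentences are generated by the instance generator:

```sparql
PREFIX rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>
INSERT DATA {
  idmef:alert1 rdf:type idmef:Alert ;
               idmef:alert_messageid "abc123" .
}
```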
5.2 Query Generator
Some measurements have also been taken of the time required to perform concrete queries from the query module, through the Joseki server, against a test knowledge base with 112 alerts. Simplified versions of the queries used for the experiment are shown below (they also included other variables that could be useful for other alert properties):
Alerts depending on a time interval:
PREFIX rdf:
<http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>
SELECT ?alert ?time
WHERE {
?alert rdf:type idmef:Alert .
?alert idmef:alert_createTime ?createTime .
?createTime idmef:createTime_time ?time .
FILTER (?time > time1).
FILTER (?time < time2)
}

where time1 and time2 are properly replaced to query for a concrete period of time.
Alerts depending on the source IP address.
PREFIX rdf:
<http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>
SELECT ?alert ?sourceAddress
WHERE {
?alert rdf:type idmef:Alert.
?alert idmef:alert_source ?source.
?source idmef:source_node ?node.
?node idmef:node_address ?address.
?address idmef:address_address ?sourceAddress.
FILTER (?sourceAddress = ipAddr)
}

where ipAddr is replaced with a concrete IP address.


Alerts depending on the target IP address.
PREFIX rdf:
<http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>
SELECT ?alert ?targetAddress
WHERE {
?alert rdf:type idmef:Alert.
?alert idmef:alert_target ?target.
?target idmef:target_node ?node.
?node idmef:node_address ?address.
?address idmef:address_address ?targetAddress.
FILTER (?targetAddress = ipAddr)
}

where ipAddr is replaced with a concrete IP address.


Alerts depending on their type:


PREFIX rdf:
<http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX idmef: <http://www.dit.upm.es/IdmefOntology.owl#>
SELECT ?alert ?alertName
WHERE {
?alert rdf:type idmef:Alert.
?alert idmef:alert_classification ?classification.
?classification idmef:classification_text ?aName.
FILTER (?aName = alertName )
}

where alertName is replaced with a concrete alert name.


Tables 2, 3, 4 and 5 below show the results obtained when querying the alert knowledge base with these queries:
Table 2. Knowledge base query times depending on the time interval

Obtained results   Time (ms)
23                 547
9                  500
32                 641

Table 3. Knowledge base query times depending on the source IP of an alert

Obtained results   Time (ms)
1                  453

Table 4. Knowledge base query times depending on the target IP of the alerts

Obtained results   Time (ms)
11                 500
33                 625
77                 750

Table 5. Knowledge base query times depending on the alert type

Obtained results   Time (ms)
2                  468
13                 484
7                  468

As shown, the time to retrieve the results depends on the number of alerts that match the query, but not on the query itself. Further tests have to be performed with larger knowledge bases.


6 Conclusions

This work has assessed the applicability of semantic web technologies to security information management systems, providing a way to semantically share information among different security domains. For this, an ontology based on IDMEF has been defined, which can hold all the information of any IDMEF message. To test this ontology, we have also defined and implemented a semantic collaborative SIMS architecture, where each SIMS stores its IDMEF alerts in a knowledge base and can query other SIMS' knowledge bases using a SPARQL interface.
The tests performed to store alerts showed the times needed to save such alerts, which can be acceptable for a prototype but not for a production system that receives tens of alerts per second. Thus, several approaches have been taken to improve these times. On the one hand, the Jena SDB library has been used to optimize the storage of the ontology in a database. On the other hand, the use of SPARQL/Update has been proposed to limit the saving time to the information contained in each alert. Another improvement has been parsing alerts continuously, to avoid launching a Java process each time an IDMEF message arrives at the instance generator. In this way, we could reduce the storing time to half that of the initial approach.
With respect to the query module, we have performed preliminary tests with good results. We will run further tests, modifying the size of the knowledge base to check how the system performs with larger data sets. It is also important to note that the instances of old alerts are periodically deleted from the knowledge base, which prevents its size from growing ad infinitum.

As further future work, we will study how to do inference with the information contained in the knowledge bases.
Acknowledgements. This work has been done in the framework of the collaboration with Telefónica I+D in the project SEGUR@ (reference CENIT-2007 2004, https://www.cenitsegura.es), funded by the CDTI, Spanish Ministry of Science and Innovation, under the CENIT program.

References
1. Dubie, D.: Users shoring up net security with SIM. Network World (September 30, 2001)
2. Debar, H., Curry, D., Feinstein, B.: The Intrusion Detection Message Exchange Format (IDMEF). IETF Request for Comments 4765 (March 2007)
3. Gruber, T.R.: A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition 5(2), 199–220 (1993)
4. Undercoffer, J., Joshi, A., Pinkston, A.: Modeling computer attacks: an ontology for intrusion detection. In: Vigna, G., Krügel, C., Jonsson, E. (eds.) RAID 2003. LNCS, vol. 2820, pp. 113–135. Springer, Heidelberg (2003)
5. Geneiatakis, D., Lambrinoudakis, C.: An ontology description for SIP security flaws. Computer Communications 30(6), 1367–1374 (2007)
6. Dritsas, S., Dritsou, V., Tsoumas, B., Constantopoulos, P., Gritzalis, D.: OntoSPIT: SPIT management through ontologies. Computer Communications 32(1), 203–212 (2009)
7. Joseki – A SPARQL Server for Jena, http://www.joseki.org/
8. Jena – A Semantic Web Framework for Java, http://jena.sourceforge.net/
9. Prud'hommeaux, E., Seaborne, A.: SPARQL Query Language for RDF. W3C Recommendation (January 15, 2008)
10. SDB – A SPARQL Database for Jena, http://jena.sourceforge.net/SDB/
11. McGuinness, D.L., van Harmelen, F.: OWL Web Ontology Language Overview. W3C Recommendation (February 10, 2004)
12. López de Vergara, J.E., Vázquez, E., Martin, A., Dubus, S., Lepareux, M.N.: Use of ontologies for the definition of alerts and policies in a network security platform. Journal of Networks 4(8), 720–733 (2009)
13. Gennari, J.H., Musen, M.A., Fergerson, R.W., Grosso, W.E., Crubézy, M., Eriksson, H., Noy, N.F., Tu, S.W.: The evolution of Protégé: an environment for knowledge-based systems development. Int. J. Hum.-Comput. Stud. 58(1), 89–123 (2003)
14. Object Management Group: Ontology Definition Metamodel Version 1.0. OMG document number formal/2009-05-01 (May 2009)
15. ARQ – A SPARQL Processor for Jena, http://jena.sourceforge.net/ARQ/
16. Seaborne, A., Manjunath, G., Bizer, C., Breslin, J., Das, S., Davis, I., Harris, S., Idehen, K., Corby, O., Kjernsmo, K., Nowack, B.: SPARQL Update, A language for updating RDF graphs. W3C Member Submission (July 15, 2008)

WASAT – A New Web Authorization
Security Analysis Tool

Carmen Torrano-Gimenez, Alejandro Perez-Villegas, and Gonzalo Alvarez

Instituto de Física Aplicada, Consejo Superior de Investigaciones Científicas,
Serrano 144, 28006 Madrid, Spain
{carmen.torrano,alejandro.perez,gonzalo}@iec.csic.es

Abstract. WASAT (Web Authentication Security Analysis Tool) is an intuitive and complete application designed to assess the security of different web-related authentication schemes, namely Basic Authentication and Forms-Based Authentication. WASAT is able to mount dictionary and brute force attacks of variable complexity against the target web site. Password files incorporate a syntax to generate different password search spaces. An important feature of this tool is that low-signature attacks can be performed in order to avoid detection by anti-brute-force mechanisms. The tool is platform-independent and multithreaded, allowing the user to take control of the program's speed. WASAT provides features not included in many of the existing similar applications and hardly any of their drawbacks, making it an excellent tool for security analysis.

Keywords: web authentication security, security analysis tool, password cracking.

1 Introduction

Nowadays web applications handle more and more sensitive information. As a consequence, web applications are an attractive target for attackers, who are able to perform attacks with devastating consequences. Therefore, the proper protection of these systems is very important, and it becomes necessary for site administrators to assess the security of their web applications.

In addition, these days most network-capable devices, including simple consumer electronics such as printers and photo frames, have an embedded web interface for easy configuration [1]. These web interfaces can also suffer a large variety of attacks, so they should also be protected [1].

This paper presents a tool for the assessment of the security of different web authentication schemes. Usually, some web application areas have restricted access. Authentication makes it possible to verify the identity of the person accessing the web application.

Our tool is able to analyse the security of web applications using two HTTP authentication schemes, namely Basic Authentication and Forms-Based Authentication.
Basic Authentication is a challenge-response mechanism that is used by a server to challenge a client and by a client to provide authentication information. In this scheme, the user agent authenticates itself by providing a user-ID and a password
when accessing a protected space. The server will authorize the request only if it can validate the user-ID and password for the protection space corresponding to the URI of the request.

Forms-Based Authentication is the most widely used authentication scheme. When the client accesses a protected service or resource, the user is required to fill in a form with a username and a password. These credentials are submitted to the web server, where they are validated against the database containing the usernames and passwords of all users registered in the web application. Access is only granted if the credentials are present in the database.
Further information about these HTTP authentication schemes is presented in Section 2.
WASAT can be applied against any web application having an authentication mechanism. The tool can mount dictionary and brute force attacks of varying complexity against the target web site. User and password files can be configured to be used as the search space. Variations on the passwords can be generated using a simple special syntax in the password file, which makes exhaustive searches possible. Also, low-signature attacks can be carried out with this tool in order to avoid detection. Several strategies can be used to generate low-signature attacks, such as distributing the requests of a user over several time periods.

The number of threads used by the application can be configured by the user in order to improve the speed of the program. Also, a list of proxies can be specified by the user in order to make the requests anonymous.

The configuration session data can be stored in a file and opened later, making it easier to initialize a new session. Moreover, the process can be paused and continued later. WASAT also has a useful and complete help file for users.
The rest of the paper is organized as follows. Section 2 reviews the different authentication schemes. Section 3 presents several mechanisms that can be used by web servers to detect brute force attacks. Section 4 refers to related work. Section 5 explains the features and behavior of WASAT. Section 6 presents future work and, finally, conclusions are drawn in Section 7.

2 HTTP Authentication Schemes

WASAT can assess the security of both HTTP schemes, Basic Authentication and Forms-Based Authentication. A short description of both schemes is included below.
2.1 Basic Authentication

HTTP provides a simple challenge-response authentication mechanism which is used by a server to challenge a client after it has made a request, and by a client to provide authentication information. It uses a token to identify the authentication scheme, which is Basic in this case. In this scheme there are no optional authentication parameters.

Upon receipt of an unauthorized request for a URI within the protection space, the server should challenge the authorization of the user agent with a 401 response. This response must include a WWW-Authenticate header field containing the following:


WWW-Authenticate: Basic realm="WallyWorld"

where "WallyWorld" is the string assigned by the server to identify the protection space of the Request-URI.
A user agent that wishes to authenticate itself with a server after receiving a 401 response includes an Authorization header field with the request. The Authorization field value consists of credentials containing the authentication information of the user agent for the realm of the resource being requested.

Thus, to receive authorization, the client sends the user-ID and password, separated by a single colon (:) character, within a base64 [5] encoded string in the credentials. For instance, if the user agent wishes to send the user-ID "Aladdin" and password "open sesame", it would use the following header field:

Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
The domain over which credentials can be automatically applied by a user agent is
determined by the protection space. If a prior request has been authorized, the same
credentials may be reused for all subsequent requests within that protection space for
a period of time determined by the authentication scheme, parameters, and/or user
preference. Further details can be found in [2].
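The credential encoding described above is plain base64 over "user-ID:password". A short sketch, using the specification's own "Aladdin" example, illustrates it:

```python
import base64

def basic_credentials(user_id, password):
    """Build the value of a Basic Authorization header: the user-ID and
    password joined by a colon, then base64-encoded."""
    token = base64.b64encode(f"{user_id}:{password}".encode("ascii")).decode("ascii")
    return f"Basic {token}"

# The classic example from the HTTP authentication specification:
print(basic_credentials("Aladdin", "open sesame"))
# Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```

Note that base64 is an encoding, not encryption: anyone who observes the header can recover the password, which is why this scheme is weak without TLS.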
2.2 Forms-Based Authentication

This is the most common authentication scheme, used in web servers with thousands or even millions of users. It consists of a database table storing the usernames and passwords of all users. When the protected service or resource is to be accessed, the user fills in a form with the corresponding username and password. These credentials are submitted to the web server, where they are validated against the database. If the username and password exist in the database, access is granted; otherwise, the user is rejected.

The HTML web form includes at least two input text fields: the username and the password. Additionally, many other fields, usually included in the form as hidden fields, may be present. Moreover, the authentication process may require the presence of certain cookies and HTTP headers, such as Referer: and User-Agent:. When the user is successfully logged in, a token or a cookie may be issued to the user, or a memory space may be assigned in the server, which will identify the user in future requests without asking again for validation.
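A form-based login submission like the one described can be sketched with stdlib HTTP tooling. The field names, hidden field and URL below are invented for illustration; real forms vary per application:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_login_request(url, username, password, hidden=None):
    """Build the POST request a browser would send for a login form,
    including any hidden fields the page carries."""
    fields = {"username": username, "password": password, **(hidden or {})}
    body = urlencode(fields).encode("ascii")
    return Request(url, data=body, headers={
        "Content-Type": "application/x-www-form-urlencoded",
        # Some applications also check Referer: and User-Agent:
        "Referer": url,
    })

req = build_login_request("http://app.example.org/login", "alice", "1234",
                          hidden={"csrf": "tok42"})
print(req.data)
```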

3 Security Mechanisms against Brute Force Attacks


Web servers can use several security mechanisms in order to detect dictionary and
brute force attacks. This section exposes the main existing mechanisms and how
WASAT can avoid them.
Web servers can analyze the number of received requests, the time interval and the
source (username or IP address) to detect brute force attacks. Therefore, when web
servers receive big amounts of requests in a short period of time from the same user
or from the same IP address, they can assume a brute force attack is taking place.
When an attack is detected, the server can block the corresponding user or IP address
temporarily or permanently, requiring in the latter case a different communication
channel for the user/IP to be admitted again.
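A minimal sketch of this kind of detection, with an arbitrary illustrative window length and threshold:

```python
import time
from collections import defaultdict, deque

WINDOW = 60.0    # seconds observed (illustrative value)
THRESHOLD = 20   # max attempts per source in the window (illustrative)

attempts = defaultdict(deque)  # source (username or IP) -> attempt times

def record_attempt(source, now=None):
    """Return True if the source should be blocked as a brute forcer."""
    now = time.monotonic() if now is None else now
    q = attempts[source]
    q.append(now)
    while q and now - q[0] > WINDOW:  # drop attempts outside the window
        q.popleft()
    return len(q) > THRESHOLD

# 21 attempts from one source within a second trip the detector
assert not any(record_attempt("10.0.0.5", now=t * 0.01) for t in range(20))
assert record_attempt("10.0.0.5", now=0.3)
```

A server could key this structure by username, by source IP, or by both, as described above.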


C. Torrano-Gimenez, A. Perez-Villegas, and G. Alvarez

WASAT provides diverse features to evade all of these security mechanisms. In WASAT the user can define a list of proxies that are used to send the requests to the server. As a different proxy is used for every request sent to the server, the IP address block can be evaded.
WASAT offers two mechanisms to evade user blocking. The first one is defining the inter-request time, which establishes the minimum time between requests from the same
user. The second one is the reverse search: reading the passwords from the file and for
every password, trying to log in with every username. As a consequence of both
mechanisms, the requests from the same user are distributed over time.
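The effect of reverse search on request ordering can be illustrated with two nested loops; note how the password-major order interleaves usernames, so consecutive requests never target the same user:

```python
def attempt_order(usernames, passwords, reverse_search=False):
    """Yield (username, password) pairs in the order a brute forcer
    would try them. With reverse search, the outer loop runs over
    passwords, so the same username is never tried twice in a row."""
    if reverse_search:
        return [(u, p) for p in passwords for u in usernames]
    return [(u, p) for u in usernames for p in passwords]

users, pwds = ["alice", "bob"], ["123", "abc"]
print(attempt_order(users, pwds))
# [('alice', '123'), ('alice', 'abc'), ('bob', '123'), ('bob', 'abc')]
print(attempt_order(users, pwds, reverse_search=True))
# [('alice', '123'), ('bob', '123'), ('alice', 'abc'), ('bob', 'abc')]
```

Combined with a minimum inter-request time per user, this ordering keeps the per-user request rate below a server-side detection threshold.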

4 Related Work
There are several popular tools similar to our application, such as Crowbar [3], Brutus
[4], Caecus [5], THC-Hydra [6] and WebSlayer [7].
All these tools have been tested and several of their features have been considered.
The importance of some of these features was explained in Section 3.
The considered features are the following:
Multi-Threading. It refers to the ability to establish different connections with the
server concurrently and speed up the process.
Proxy Connection. Using proxies makes it possible to establish anonymous connections to the server.
Password Generation. Automatic password generation allows the user to build many password combinations without writing a huge wordlist.
Inter-Request Time. It refers to the minimum time interval between attempts with the same username.
Restore Sessions. The use of sessions lets the user restore previously aborted sessions.
Multi-Platform. It means the tool can run on any platform, i.e., the application is not platform-dependent.
Proxy Connection and Inter-Request Time make it possible to evade IP-based and time-based anti-brute-force mechanisms, respectively.
In Table 1, these tools are compared against WASAT, according to the selected
features.
Table 1. Cracking tools comparison
Feature / Tool        Hydra   Caecus  Brutus   Crowbar  WebSlayer  WASAT
Multi-Threading       Yes     Yes     Yes      Yes      Yes        Yes
Proxy Connection      Single  List    Single   No       Single     List
Password Generation   No      No      Limited  No       Generator  Script
Inter-Request Time    No      No      No       No       No         Yes
Restore Sessions      Yes     No      No       No       No         Yes
Multi-Platform        No      No      No       No       No         Yes

WASAT- A New Web Authorization Security Analysis Tool


An experimental comparison regarding the time required for brute force attacks has
not been included in this paper as it depends on the bandwidth and the server load.

5 Application Description
WASAT offers the possibility to specify the configuration of the target web application and the desired authentication method to be used. The program preferences can
also be configured by the user. After specifying the configuration, the analysis can start. It can also be paused or stopped. The configuration parameters of every session
can be saved in a file and a configuration file can be loaded as well.
The current version of WASAT can be downloaded from http://www.iec.csic.es/wasat.
A snapshot of the main window of WASAT is presented in Fig. 1.

Fig. 1. Main Window of WASAT after assessment

5.1 Analysis Configuration


Before starting a new analysis session, the configuration parameters have to be
defined. The parameters are filled through the following tabs:
Target Definition. The target web application is defined by the URL and the port.
The URL should refer to the login page of the web application. Usually, this
parameter corresponds to the string in the action attribute of the HTML <FORM> tag.
It is important to note that the URL should be correct and complete for the analysis to
be done properly. The port number is used to establish the HTTP connection. This
information usually can be gathered from the form definition. Its default value is 80.
Next, the type of authentication used to protect the page is selected. The Basic or
the Form-Based authentication can be chosen. Selecting the Start/Continue from
position option, the analysis will continue from a previously interrupted session,
exactly from the point where it was paused. Checking the Stop if succeeded option,
the analysis will stop after the first correct username/password pair is guessed. Otherwise, the program will continue running until all possible usernames and passwords
have been tried.


Basic Authentication. If the basic authentication was selected in the Target tab, the
error code in this tab should be chosen. There are two possible values for the HTTP
Error Code: 200 OK or 302 Object Moved.
Form-Based Authentication. If the form-authentication was chosen in the Target
tab, some parameters are to be defined in the Form-Based tab:
Request Settings
These are parameters regarding the request settings:
Method. This is the HTTP method used in the form submission. The default value
of this parameter is GET.
User ID. This parameter refers to the input text element name corresponding to the
username used in the form.
Password ID. This parameter refers to the input text element name corresponding to the password used in the form.
Arguments. This parameter is optional. All other input arguments used by the form
should be written here. They are usually the hidden fields in the form. The submit
button name and value should be included too. It is important that every argument
(except the first one) is preceded by the & sign. Note that this text should be
HTTP coded; thus, for example, no blanks or spaces are allowed, and they must be replaced by a + sign.
Referer. This parameter is optional. The Referer header should be written here in
case the login page requires it.
User Agent. This parameter is optional. The user can enter the User-Agent
header if the login page requires it.
Cookie. This parameter is optional. The Cookie header can be established in this
parameter in case the login page needs it.
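Assembling such a request can be sketched with standard library calls; the field names, hidden arguments, and URL below are hypothetical examples of the parameters just described:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical field names and hidden arguments for an imagined login form
fields = {
    "user": "alice",            # User ID field
    "pass": "secret",           # Password ID field
    "submit": "Log In",         # submit button name/value
    "csrf_token": "abc123",     # hidden field
}
body = urlencode(fields)        # spaces become '+', pairs joined with '&'
req = Request(
    "http://example.com/login",  # placeholder URL
    data=body.encode(),
    headers={
        "Referer": "http://example.com/login",
        "User-Agent": "Mozilla/5.0",
        "Cookie": "session=xyz",
        "Content-Type": "application/x-www-form-urlencoded",
    },
    method="POST",
)
print(body)  # user=alice&pass=secret&submit=Log+In&csrf_token=abc123
```

`urlencode` performs exactly the encoding described above: arguments joined with &, and spaces replaced by the + sign.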
HTML Response
Some parameters should also be filled in concerning the HTML response. It is necessary
to distinguish between the error page after an unsuccessful login attempt (the credentials are wrong and the request failed) and the welcome page after a successful attempt (the credentials are correct and the request succeeded). WASAT provides two
stop methods to differentiate both pages.
The first method uses words that appear only in the error page or in the welcome page to distinguish them. The second method is based on the length of the pages to differentiate them.
Firstly, the user should choose the method: Search for string or Content-Length
comparison.
The search for string method checks for the presence of a word or sentence
which only appears in the welcome page or for the absence of a word or sentence
which only appears in the error page. This option needs to retrieve the whole page to
search for the given string. In this case, the parameters are the following:
Succeed. It is any sentence which appears only in the page reached after a valid
username/password pair has been guessed. This parameter is optional since in
many cases it is not known in advance.


Failure. It should contain any sentence which appears in the error login page (and
never in the correct page), after an invalid username/password pair has been
checked. This parameter is mandatory.
The content-length comparison method checks the length of the error and welcome
pages. This method does not require retrieving the whole page, but only the headers, and is thus much faster. If this option is chosen, the parameters are the following:
Succeed. This is an optional parameter. It refers to the length in bytes of the welcome page.
Failure. It is mandatory. It is the length in bytes of the error page.
Variation. This parameter is optional. This parameter can be supplied in order to
accommodate small variations due to banners or other changing elements in web
pages which may affect the total length of the page.
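Both stop methods amount to a simple classification of each server response. A sketch, with all strings and lengths being illustrative assumptions:

```python
def login_succeeded_by_string(page_body, failure_string, success_string=None):
    """Search-for-string method: the failure string is mandatory,
    the success string optional (often unknown in advance)."""
    if success_string and success_string in page_body:
        return True
    return failure_string not in page_body

def login_succeeded_by_length(content_length, failure_length, variation=0):
    """Content-length method: only headers are needed, so it is faster.
    'variation' absorbs small changes due to banners etc."""
    return abs(content_length - failure_length) > variation

print(login_succeeded_by_string("Welcome back!", "Invalid password"))       # True
print(login_succeeded_by_length(5120, failure_length=1834, variation=50))   # True
print(login_succeeded_by_length(1850, failure_length=1834, variation=50))   # False
```

The length-based variant only needs the Content-Length header, which is why it avoids downloading the full response body.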
A snapshot of the Configuration window is shown in Fig. 2.
Wordlists. In this tab the wordlist files and the processing instructions are defined.
Wordlist files
The program reads a list of usernames from a file and for each username tries to log in
using every password defined in the password list file. In order to generate low-signature attacks, the application also reads a list of passwords from a file and for
each password tries to log in using every username defined in the usernames list file.

Fig. 2. Configuration window


The following parameters can be configured:


User File. It should contain the complete file path and name of the file containing
the list of usernames that must be checked. The number of names stored in the file
is unlimited.
Password File. It should contain the complete file path and name of the file
containing the list of passwords that will be tried for every username stored in the
previous file. The number of passwords stored in the file is unlimited. WASAT provides a special syntax that can be used in this file in order to generate variations on
the passwords. Later in this section details about this syntax are explained.
Processing Instructions
There are also some processing instructions that can be used to be more specific about
the use of the password file:
Do not process passwords with spaces. If this option is checked, passwords containing spaces are ignored.
Process all passwords as lowercase. In this case, all passwords in the password file
will be converted into lowercase before being used against the target web site.
Minimum password length. If this option is selected, passwords containing less
than the given number of characters are ignored.
Maximum password length. In this case, passwords containing more than the given
number of characters are not checked.
Reverse search. If this option is selected WASAT reads the passwords and for
every password, tries to log in with every username. If this option is not selected,
WASAT reads the usernames, and for every username tries to log in with every
password.
All the information entered through the four tabs can be saved in a definition file.
Opening this file simplifies the task of initializing the program for a new brute force
session.
Syntax for Password File. The program provides a special syntax to be used in the
password file, which allows the user to generate variations of the passwords. Using this syntax, more than one request per username and password can be generated. This makes the search space bigger, so the tool is more effective and the security analysis more precise.
Comments in the password file are inserted preceded by #. Blank lines are ignored. There are several keywords that can be used to modify the passwords:

$USER: it tries the username as password.


$REV: it tries the reversed username as password.
$BLANK: this option tries the blank password.
$Dn: this option tries all digits from 0 to 9 n times. Example: $D2 will try 00, 01,
. . ., 10, 11, . . . , 98, 99.
$Ln: it tries all lowercase letters from a to z n times. Example: $L6 will try aaaaaa, aaaaab, . . . , zzzzzy, zzzzzz.


$Un: it tries all uppercase letters from A to Z n times. Example: $U4 will try
AAAA, AAAB, . . . , ZZZY, ZZZZ.
$Wn: this tries numbers from 0 to 9 and all letters (uppercase and lowercase) n
times. Example: $W5 will try 00000, 00001, . . ., AAAAA, AAAAB, . . . ,
ZZZZZ, aaaaa, aaaab, . . . , zzzzy, zzzzz.
The above keywords can be used in any position or even alone. The only limitation is
that several keywords cannot be used in the same password definition.
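A partial sketch of how such keywords could be expanded (only $USER, $REV and $Dn used on their own are handled here; this is an illustration, not WASAT's actual implementation):

```python
from itertools import product
from string import digits

def expand(entry, username):
    """Expand one password-file entry into candidate passwords.
    Partial sketch: handles $USER, $REV and $Dn standing alone."""
    if entry == "$USER":
        return [username]          # the username itself as password
    if entry == "$REV":
        return [username[::-1]]    # the reversed username
    if entry.startswith("$D"):
        n = int(entry[2:])         # all n-digit combinations
        return ["".join(t) for t in product(digits, repeat=n)]
    return [entry]                 # a literal password

assert expand("$REV", "admin") == ["nimda"]
two_digits = expand("$D2", "admin")
assert len(two_digits) == 100
assert two_digits[0] == "00" and two_digits[-1] == "99"
```

$Ln, $Un and $Wn would follow the same pattern with the corresponding alphabets, and a full implementation would also substitute a keyword occurring inside a longer entry.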
5.2 Program Preferences
The application makes it possible to configure the program preferences.
General. Two general parameters can be set:
Number of sockets. This is an important parameter as it specifies the number of
sockets running in parallel. Obviously, the more sockets, the more speed. However,
the rate of speed gain is not linear with the number of sockets, but logarithmic.
This means that beyond a given value, there is little or no gain of speed. Recommended values for the sockets limit are usually below 50. For most purposes and
bandwidths, 4 is a rather fast option. It defaults to 1.
Timeout. It determines the time in milliseconds the socket waits for the server
reply. The default value is 10 milliseconds.
Other options can be defined:
Use Proxy. It determines if proxies are used. The list of proxies is defined in the
tab Proxies.
Inter-request time. It specifies the minimum time (in milliseconds) between the requests of a specific user. This low-signature strategy allows the tool to distribute the requests of a user over time. The default value is 10 milliseconds.
Proxies. A list of proxies can be defined when needed. The option Use Proxy in the
tab General should be checked to use the list. Specifying a list of proxies makes the
request anonymous. The following information is needed for every defined proxy:
Host. It refers to the proxy server IP address or host name.
Port. It is the proxy server port number.
If authentication is needed to use the proxy, then the option Authentication required should be checked and the following parameters entered:
Username. It is a valid username.
Password. It is a valid password.
Logging. In this tab the settings about the log file can be established.
Log File. The user can check the option Log results to file if the results are to be
logged in a file, whose path and name must be specified too. When checking the
option Log activity report, general operations performed by the program, like
opening or closing files, initializing or terminating, will be logged to the file.


5.3 Commands
Definition File. The button New starts a new analysis session. All the information
entered in the configuration frame can be saved in a definition file when clicking
Save. When clicking the Open button the definition file is loaded in the
configuration. The facility of opening this file simplifies the task of initializing the
program for a new brute force session.
Analysis Execution. By clicking the Start button the analysis starts, using the
parameters established in the configuration and the preferences. The analysis can be
paused and later resumed or completely stopped.

6 Future Work
These days, many web applications provide captchas [8] in order to determine
whether the user is a human or a machine. The use of captchas has become a very
popular mechanism for web applications to prevent brute force attacks. To our knowledge, none of the existing authentication security tools implements a means to bypass
this barrier.
As future work, we are working to include in WASAT an anti-captcha mechanism
using artificial intelligence techniques. This feature will let the application bypass the captcha barrier and permit the assessment of a wider range of web applications.

7 Conclusions
An intuitive and complete Web Authorization Security Analysis Tool has been
presented in this paper. This application is designed for the security assessment of
different web related authentication schemes, namely Basic Authentication and
Forms-Based Authentication. The configuration of the analysis process against the
target web application and the program preferences can be specified by the user.
The application is platform independent, and presents several advantages compared with other popular existing tools while having hardly any of their drawbacks. First, WASAT has features that make the authentication assessment easier for the user, like
automatic password generation, wordlist variations, aborted sessions restoring, and a
complete and user friendly help. Second, WASAT has features that avoid time-based
and IP-based anti-brute-force mechanisms on the server side, like mounting low-signature attacks and using proxy connections. Third, the use of multithreading improves the efficiency drastically, making it possible to perform multiple authentication attempts
simultaneously.

Acknowledgements
We would like to thank the Ministerio de Industria, Turismo y Comercio, project
SEGUR@ (CENIT2007-2010), project HESPERIA (CENIT2006-2009), the Ministerio de Ciencia e Innovación, project CUCO (MTM2008-02194), and the Spanish
National Research Council (CSIC), programme JAE/I3P.


References
1. Bojinov, H., Bursztein, E., Lovett, E., Boneh, D.: Embedded Management Interfaces:
Emerging Massive Insecurity. In: Black Hat Technical Security Conference, Las Vegas,
NV, USA (2009)
2. Berners-Lee, T., Fielding, R., Frystyk, H.: Hypertext Transfer Protocol – HTTP/1.0 (1996),
http://www.ietf.org/rfc/rfc1945.txt
3. Crowbar: Generic Web Brute Force Tool (2006),
http://www.sensepost.com/research/crowbar/
4. Hobbie: Brutus (2001), http://www.hoobie.net/index.html
5. Sentinel: Caecus. OCR Form Bruteforcer (2003),
http://sentinel.securibox.net/Caecus.php
6. Hauser, V.: THC-Hydra (2008), http://freeworld.thc.org/thc-hydra/
7. Edge-Security: WebSlayer (2008),
http://www.edge-security.com/webslayer.php
8. Carnegie Mellon University: CAPTCHA: Telling Humans and Computers Apart Automatically (2009), http://www.captcha.net/

Connection String Parameter Pollution Attacks


Chema Alonso1, Manuel Fernández1, Alejandro Martín1, and Antonio Guzmán2

1 Informatica64, S.L.
2 Universidad Rey Juan Carlos
{chema,mfernandez,amartin}@informatica64.com, antonio.guzman@urjc.es
Abstract. In 2007, the classification of the ten most critical vulnerabilities for the security of a system established that code injection attacks were the second most common type of attack, behind XSS attacks. Currently, code injection attacks are placed first in this ranking. In fact, the most critical attacks are those that combine XSS techniques to access systems with code injection techniques to access the information. The potential damage associated with this type of threat, the total absence of background, and the fact that the solution to mitigate this vulnerability must be implemented by system administrators and database vendors justify an in-depth analysis to estimate all the possible ways of implementing this attack technique.
Keywords: Code injection attacks, connection strings, web application authentication delegation.

1 Introduction
SQL injection attacks are probably the best known attacks against a web application through its database architecture. A great deal of research on this kind of vulnerability concludes that establishing the correct filtering levels on the inputs of the system is the development team's task, so that such attacks cannot succeed.
In the case of the attack that will be presented in this article, the responsibility rests not only on the developers, but also on the system administrator and the database vendor. This is an injection attack that affects web applications, but rather than focusing on how the application is implemented, it focuses on the connections that are established between the application and the database.
According to OWASP [1], in 2007 the classification of the ten most critical vulnerabilities for the security of a system established that code injection attacks were the second most common type of attack, behind XSS attacks. In 2010, code injection attacks occupy the first position in this ranking. Currently, the most used and most critical attacks are those that combine XSS techniques to access systems with code injection techniques to access the information. This is the case for the so-called connection string parameter pollution attacks. The potential criticality of this type of vulnerability and the total absence of background justify an in-depth analysis to estimate all the implementation vectors of this attack technique.
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, pp. 51–62, 2010.
© Springer-Verlag Berlin Heidelberg 2010


The paper is structured in three main sections. The first is this short introduction, where the most significant aspects of connection strings and the existing mechanisms for web application authentication are briefly introduced. Section 2 proposes a comprehensive study of this new attack technique, with an extensive collection of test cases. Finally, the article concludes by briefly summarizing the lessons learned from this work.
1.1 Connections Strings
Connection strings [2] are used to connect applications to database engines. The syntax used on these strings depends on the database engine to be connected to and on
the provider or driver used by the programmer to establish the connection.
One way or another, the programmer must specify the server to connect to, the database name, the credentials to use, and the connection configuration parameters, such
as timeout, alternate databases, communication protocol or the encryption options.
The following example shows a common connection string used to connect to a
Microsoft SQL Server database:
Data Source=Server,Port; Network Library=DBMSSOCN;
Initial Catalog=DataBase; User ID=Username;
Password=pwd;
As can be seen, a connection string is a collection of parameters, separated by
semicolons (;), which contains key/value pairs. The attributes used in the example
correspond to the ones used in the .NET Framework Data Provider for SQL Server,
which is chosen by programmers when they use the SqlConnection class in their
.NET applications. Obviously, it is possible to connect to SQL Server using different
providers such as:
.NET Framework Data Provider for OLE DB (OleDbConnection)
.NET Framework Data Provider for ODBC (OdbcConnection)
SQL Native Client 9.0 OLE DB provider
The most common and recommended way to connect SQL Server and .NET applications is to use the default Framework provider, where the connection string syntax is the same for the different versions of SQL Server (7, 2000, 2005 and 2008). This is the one chosen in this article to illustrate the examples.
1.2 Web Application Authentication Delegation
There are two ways to define an authentication system for a web application: creating its own credential system, or delegating authentication to the database engine.
In most of the cases, the application developer chooses to use only one user to connect to the database. This user will represent the web application inside the database
engine. Using this connection, the web application will make queries to a custom
users table where the user credentials are managed.
As only one user accesses all the content of the database, it is impossible to implement a granular permission system over the different objects in the database, or to trace the actions of each user; these tasks are delegated to the web application itself. If an attacker is able to take advantage of any vulnerability of the application to access the database, the database will be completely exposed. This architecture is the one used by CMS
systems such as Joomla or Mambo, among others very commonly used on the Internet. The target of any attacker is to extract the database users table rows in order to access the users' credentials.

Fig. 1. Web application authentication architecture

Fig. 2. Web application delegated authentication architecture
The alternative consists of delegating the authentication process, so the connection string is used to check the user credentials, leaving all the responsibility to the database engine. This system allows applications to delegate the credential management system to the database engine.
This alternative must be used in all those applications that manage the database engine itself, since it is necessary to connect to the system with users who have special permissions or roles in order to perform administration tasks.


With this architecture, it is possible to implement the granular permission system and to trace users' actions in the database. Each one of these systems offers different advantages and disadvantages, apart from the ones already mentioned, which are outside the scope of this article. The attacks described in this paper focus on the second environment: web applications with authentication delegated to the database engine.

2 Connection String Injection


In a delegated validation environment, connection string injection techniques allow an attacker to inject parameters by appending them with the semicolon (;) character.
In an example where the user is asked to enter a username and a password to create a connection string, an attacker can turn the encryption system off by inserting, in this case in the password field, something like: password; Encryption=off.
When the connection string is generated, the Encryption value will be added to the previously configured parameters.
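A quick illustration of the injection, assuming the vulnerable application builds the string by naive concatenation:

```python
def build_connection_string(user, password):
    # Naive concatenation, as in a vulnerable application
    return ("Data Source=Server; Initial Catalog=DataBase; "
            "User ID=" + user + "; Password=" + password + ";")

# The attacker appends an extra parameter through the password field
cs = build_connection_string("alice", "pwd; Encryption=off")
print(cs)
# ...User ID=alice; Password=pwd; Encryption=off;
```

The injected `Encryption=off` is indistinguishable, to the driver, from a parameter set by the application itself.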
2.1 Connection String Builder in .NET
Knowing the possibility of making this kind of injection [3] in connection strings, Microsoft has included, since Framework 2.0, the ConnectionStringBuilder [4] classes. They allow the creation of secure connection strings through the base class (DbConnectionStringBuilder) or through the specific classes for the different providers (SqlConnectionStringBuilder, OleDbConnectionStringBuilder, etc.). This is because in these classes only key/value pairs are allowed, and injection attempts are neutralized by escaping them.
The use of these classes when a connection string is built dynamically will avoid the injections. However, these classes are not used by all developers nor, of course, by all applications.
2.2 Connection String Parameter Pollution
Parameter pollution techniques are used to override the values of parameters. They are well known in the HTTP [5] environment, but are applicable to other environments too. In this case, parameter pollution techniques can be applied to the parameters in the connection string, allowing several attacks.
2.3 Connection String Parameter Pollution (CSPP) Attacks
In order to explain these attacks, the current article will use as an example a web application on a Microsoft Internet Information Services web server, running on a Microsoft Windows Server, where a user [User_Value] and a password [Password_Value] are required. These data are going to be used in a connection string to a Microsoft SQL Server database, as shown in this example:
Data source = SQL2005; initial catalog = db1;
integrated security=no;
user id=+User_Value+; Password=+Password_Value+;


As can be seen, the application is making use of Microsoft SQL Server users to access the database engine. Taking this information into account, an attacker can perform a Connection String Parameter Pollution attack. The idea of this attack is to add to the connection string a parameter that already exists in it. The component used in .NET applications sets the parameter to the last value appearing in the connection string. This means that if a connection string has two Data Source parameters, the one used is the last one. Knowing this behavior, and in this environment, the following CSPP attacks can be done.
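The last-value-wins behaviour that enables the pollution can be reproduced with a toy parser:

```python
def parse(connection_string):
    """Split a connection string into key/value pairs; later
    duplicates override earlier ones, mirroring the behaviour of
    the native SQL Server drivers described above."""
    params = {}
    for part in connection_string.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            params[key.strip().lower()] = value.strip()
    return params

polluted = ("Data source=SQL2005; initial catalog=db1; integrated security=no; "
            "user id=; Data Source=Rogue_Server; Password=; "
            "Integrated Security=true;")
p = parse(polluted)
print(p["data source"])          # Rogue_Server
print(p["integrated security"])  # true
```

The dictionary naturally keeps only the last assignment per key, which is exactly the property the attacker exploits.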
2.3.1 CSPP Attack 1: Hash Stealing
An attacker can place a Rogue Microsoft SQL Server connected to the Internet with a
Microsoft SQL Server credential sniffer listening (In this sample CAIN [6] has been chosen). For the attacker it will be enough to perform a CSPP attack in the following way:
User_Value: ; Data Source = Rogue_Server
Password_Value: ; Integrated Security = true
Using these injections, the connection string built is:
Data source = SQL2005; initial catalog = db1;
integrated security=no;
user id=;Data Source=Rogue Server; Password=;
Integrated Security=true;
As can be seen, the Data Source and Integrated Security parameters are polluted. In the Microsoft SQL Server native drivers, the final values override the first ones, so the first values are lost, and the application will try to connect to the Rogue Server, attempting a connection with the credentials of the Windows user the web application is running as. This can be a system user or an application pool user.

Fig. 3. CSPP in ASP.NET Enterprise Manager to steal the account information


2.3.1.1 Example 1: ASP.NET Enterprise Manager. This tool is an abandoned, unsupported Open Source tool which has been used by some hosting companies and some organizations to manage Microsoft SQL Server databases using a web interface. The official web site, which was aspnetenterprisemanager.com, is today abandoned, but the tool can be obtained from several other web sites like SourceForge [7] or MyOpenSource [8]. This tool is widely recommended in a lot of forums as a good ASP.NET alternative to PHPMyAdmin [9]. The last version was published on January 3, 2003.
The results are collected on the Rogue Server where the database connection sniffer
had been installed. As can be seen it is possible to obtain the LM Hash of the account.

Fig. 4. Hash collected in the Rogue Server with Cain

2.3.2 CSPP Attack 2: Port Scanning


Using the port configuration functionality of the connection string, an attacker would be able to use an application vulnerable to this technique to scan servers. It would be enough to try to connect to different ports and observe the error messages obtained. The attack would be as shown:
User_Value: ; Data Source = Target_Server, Target_Port
Password_Value: ; Integrated Security = true
This injection attack will result in the following connection string:


Data source = SQL2005; initial catalog = db1;
integrated security=no;
user id=;Data Source=Target Server, Target Port;
Password=; Integrated Security=true;
As can be seen, this connection string forces the web application to try a connection against a specific port on a target machine. Looking for differences in the error messages, a port scan can be performed.
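From the attacker's point of view, the two error classes correspond to whether a TCP connection could be completed. The same distinction can be sketched directly (here probing a local listener, so the example is self-contained, rather than going through a vulnerable application):

```python
import socket

def port_state(host, port, timeout=2.0):
    """Return 'open' if a TCP connection completes (the server answers,
    even if it is not SQL Server), 'closed/filtered' otherwise:
    the same distinction the differing error messages expose."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except OSError:
        return "closed/filtered"

# Probe a listener we control
listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]
print(port_state("127.0.0.1", open_port))  # open
listener.close()
print(port_state("127.0.0.1", open_port))  # closed/filtered
```

In the CSPP variant, the application performs the connection on the attacker's behalf, so the scan also reaches servers inside the DMZ that are not directly routable from the Internet.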
2.3.2.1 Example 2: myLittleAdmin and myLittleBackup. The tools myLittleAdmin
[10] and myLittleBackup [11] are commercial tools developed by myLittleTools [12].
Both of these tools are vulnerable to CSPP attacks up to versions myLittleAdmin 3.5
and myLittleBackup 1.6.


Fig. 5. A connection can be established through port 80 to www.gooogle.com

As can be seen in the Fig. 5, when the port is listening, as in the current example,
the error message obtained shows that no Microsoft SQL Server is listening on it, but
a TCP connection was established.

Fig. 6. A connection cannot be established through the XX port to www.google.com

In this second case, a TCP connection could not be completed and the error message is different. Using these error messages, a complete TCP scan can be performed against a server. Of course, this technique can also be used to discover internal servers within the DMZ in which the web application is running.


2.3.3 CSPP Attack 3: Hijacking Web Credentials


In this exploit, the attacker tries to log on to the database using the system account that the web application is running as. He will succeed only if the database engine allows access to that account. Therefore, it will be enough to use the following injection:
User_Value: ; Data Source = Target_Server
Password_Value: ; Integrated Security = true

This injection attack will result in the following connection string:
Data source = SQL2005; initial catalog = db1;
integrated security=no;
user id=;Data Source=Target Server;
Password=; Integrated Security=true;
In this attack, the Integrated Security parameter is the one being polluted. As can be seen, the last occurrence sets its value to true. This means that the system will try to connect to the database with the system account the tool is running as. In this case, it is the system account used by the web application on the web server.
2.3.3.1 Example 3: SQL Server Web Data Administrator. This tool is a project, originally developed by Microsoft, that was later released as an open project. It is still possible to download the last version that Microsoft released in 2004 from Microsoft's servers [13], but the latest version, released in 2007, is hosted on the CodePlex web site [14]. The version hosted on CodePlex is safe from this type of attack because it uses ConnectionStringBuilder to construct the connection string dynamically.
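The protection that a connection-string builder affords can be sketched as follows: parameter values are treated as data rather than syntax, so a value containing ';' or '=' is quoted before concatenation. The Python approximation below illustrates the idea only; it is not the actual .NET ConnectionStringBuilder implementation:

```python
def quote_value(value: str) -> str:
    """Quote a value so embedded ';' or '=' cannot introduce new parameters
    (single quotes inside the value are doubled, SQL-style)."""
    if any(c in value for c in ";='\""):
        return "'" + value.replace("'", "''") + "'"
    return value

def build_connection_string(params: dict) -> str:
    """Assemble key=value pairs, quoting any value that could alter syntax."""
    return ";".join(f"{k}={quote_value(v)}" for k, v in params.items())

malicious = ";Data Source=target.internal"     # injection attempt
cs = build_connection_string({
    "Data Source": "SQL2005",
    "user id": malicious,                      # stays inert once quoted
    "Password": "secret",
})
print(cs)
# Data Source=SQL2005;user id=';Data Source=target.internal';Password=secret
```

The payload ends up as a quoted literal inside the user id value instead of becoming new parameters, so the "last duplicate wins" resolution never sees a second Data Source key.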
The version published on the Microsoft web site, however, is vulnerable to CSPP attacks. As can be seen in the following screenshots, it is possible to gain access to the system using this type of attack.

Fig. 7. Exploiting the credentials at the WEB Data Administrator


In Fig. 7, the password_value is ; Integrated Security=true, as described previously.

Fig. 8. Console access with the server account

An attacker can log into the database engine and hence into the web application to manage the whole system. As can be seen in Fig. 9, this is possible because all users and network services have access to the server.

Fig. 9. System account access grant

2.3.3.2 Example 4: myLittleAdmin and myLittleBackup. In the myLittleAdmin and myLittleBackup tools, it is possible to check the connection string used to gain access. Looking at it, the parameter pollution injected to obtain access to the system can be clearly seen.


Fig. 10. CSPP in myLittleAdmin

Fig. 10 shows how a Data Source parameter with the value localhost has been injected after the User ID parameter. Data Source is also the first parameter of the connection string. In this example the two values differ; however, the one taken into account is the last, that is, the injected one.
The same happens with the Integrated Security parameter, which initially appears with the value NO, but the occurrence that counts is the one injected through the password parameter with the value YES. The result is total access to the server with the system account under which the web application is running, as can be seen in Fig. 11.

Fig. 11. Querying the master..sysusers table


2.3.3.3 Example 5: ASP.NET Enterprise Manager. The same attack also works on the latest public version of ASP.NET Enterprise Manager. As can be seen in the login form in Fig. 12, an attacker can perform the CSPP injection to gain access to the web application.

Fig. 12. CSPP in ASP.NET Enterprise Manager login form

As a result, access is obtained, as can be seen in the screenshot in Fig. 13.

Fig. 13. Administration console in ASP.NET Enterprise Manager

3 Conclusions
All these examples show the importance of filtering user input in web applications. Moreover, they are clear proof of the importance of maintaining software: Microsoft released ConnectionStringBuilder to prevent these kinds of attacks, but not all projects were updated to use this new, secure component.


These techniques also apply to other databases, such as Oracle, which allows administrators to set up integrated security for the database. Moreover, in Oracle connection strings it is possible to change the way a user connects by forcing the use of a sysdba session.
MySQL does not allow administrators to configure integrated-security authentication. However, it is still possible to inject code and manipulate connection strings to try to connect to internal servers that were used by developers and never published on the Internet.
To prevent these attacks, the semicolon character must be filtered, all parameters must be sanitized, and the firewall should be hardened to filter not only inbound connections but also outbound connections from internal servers sending NTLM authentication traffic over the Internet. Database administrators should also harden the database engine, restricting access permissions to only the necessary users under a least-privilege policy.
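As a sketch of the first two countermeasures, semicolon filtering and parameter sanitization, a server-side check might reject any credential field that could alter connection-string syntax. The whitelist pattern below is an illustrative assumption and would need tuning per field:

```python
import re

# Illustrative whitelist: letters, digits, and a few safe punctuation marks.
# Real policies should be tuned per field, and a connection-string builder
# should still be used as defense in depth.
SAFE_FIELD = re.compile(r"^[A-Za-z0-9_.@-]{1,64}$")

def sanitize_credential(value: str) -> str:
    """Reject values containing ';' or '=' or anything outside the whitelist."""
    if ";" in value or "=" in value or not SAFE_FIELD.match(value):
        raise ValueError("rejected: possible connection string injection")
    return value

print(sanitize_credential("alice_01"))           # accepted
try:
    sanitize_credential(";Integrated Security=true")
except ValueError as exc:
    print(exc)                                   # rejected
```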

References
1. The Open Web Application Security Project, http://www.owasp.org
2. Connection Strings.com, http://www.connectionstrings.com
3. Ryan, W.: Using the Sql Connection String Builder to guard against Connection String Injection Attacks, http://msmvps.com/blogs/williamryan/archive/2006/01/15/81115.aspx
4. Connection String Builder (ADO.NET),
http://msdn.microsoft.com/en-us/library/ms254947.aspx
5. Carettoni, L., di Paola, S.: HTTP Parameter Pollution, http://www.owasp.org/images/b/ba/AppsecEU09_CarettoniDiPaola_v0.8.pdf
6. Cain, http://www.oxid.it/cain.html
7. ASP.NET Enterprise Manager in SourceForge,
http://sourceforge.net/projects/asp-ent-man/
8. ASP.NET Enterprise Manager in MyOpenSource, http://www.myopensource.org/internet/asp.net+enterprise+manager/download-review
9. PHPMyAdmin, http://www.phpmyadmin.net/
10. myLittleAdmin, http://www.mylittleadmin.com
11. myLittleBackup, http://www.mylittlebackup.com
12. myLittleTools, http://www.mylittletools.net
13. Microsoft SQL Server Web Data Administrator, http://www.microsoft.com/downloads/details.aspx?FamilyID=c039a798-c57a-419e-acbc-2a332cb7f959&displaylang=en
14. Microsoft SQL Server Web Data Administrator in Codeplex project,
http://www.codeplex.com/SqlWebAdmin

Web Applications Security Assessment in the Portuguese
World Wide Web Panorama

Nuno Teodoro1 and Carlos Serrão2

1 ISCTE Lisbon University Institute/DCTI, Ed. ISCTE, Av. Forças Armadas,
1649-026 Lisboa, Portugal
2 ISCTE Lisbon University Institute/DCTI/Adetti, Ed. ISCTE, Av. Forças Armadas,
1649-026 Lisboa, Portugal
nuno.filipe.teodoro@gmail.com, carlos.serrao@iscte.pt

Abstract. Following the EU Information and Communication Technologies agenda, the Portuguese Government has started the creation of many applications enabling electronic interaction between individuals, companies and the public administration: e-Government. Due to the open nature of the Internet and the sensitivity of the data that those applications have to handle, it is important to ensure and assess their security. Financial institutions, such as banks, which nowadays use the WWW as a communication channel with their customers, face the same challenges.
The main objective of this paper is to introduce work that will be performed to assess the security of financial and public administration sector web applications. The authors describe the rationale behind this work, which involves the selection of a set of key financial and public administration web applications, the definition and application of a security assessment methodology, and the evaluation of the assessment results.

Keywords: Security, web, e-Government, bank, vulnerabilities, penetration testing.

1 Introduction
One of the current computing trends is the distribution of information systems, in particular over the Internet. Critical systems are constantly deployed on the World Wide Web, where crucial and confidential information crosses the information highway or is stored in insecure, remotely located databases.
Most of these critical systems are used on a daily basis, and there is an inherent sense of security around each of these web applications that may not correspond to their real security status and real needs. Andrey Petukhov and Dmitry Kozlov [1] refer to a survey stating that 60% of vulnerabilities actually affect web applications, emphasizing even further the concerns about the relation between web applications and classified information. The objective of this paper is to focus on the Portuguese web application security panorama, which will be divided into two major areas: government online public services and online banking web applications. Although these two main areas differ from each other, they have a common front-end to communicate
C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, pp. 63-73, 2010.
© Springer-Verlag Berlin Heidelberg 2010

N. Teodoro and C. Serrão

with people, namely web applications, which will allow the entire testing process and subsequent methodologies to be the same, or very similar.
These assessments are mostly motivated by the perceived lack of investment in security and the everyday growth of new attacks on the web and on critical web applications [2]. The aim of this paper is to check for vulnerabilities and exploits and to produce a report for each of the tested web applications, communicating the testing methods, the vulnerabilities found and the corrective measures that need to be taken to mitigate those flaws and ultimately benefit the end user.
This paper proposes the use of a web application security analysis methodology to determine the applications' security level against some of the most frequently identified security threats, based on best practices in the web application security field. This work will make use of freely available security assessment frameworks and automated tools to conduct these security evaluation tests, taking advantage of the large set of tools and documents produced by the Open Web Application Security Project (OWASP) and other similar initiatives.
The Portuguese government's public service web applications were mostly created after the launch of the Simplex [3] program in 2006. Simplex is a strategic priority of the Portuguese Government, launched with the ambitious objectives of decentralizing most public services, reducing the gap between citizens and the public administration, reinforcing the idea of strong investment in the technology sector and making the public sector ever more efficient. As a result, most public services offered by the government are now supported by information systems available over the World Wide Web. From a citizen's point of view, political interests generate the preconceived idea that these programs often bypass the usual and recommended processes for planning their components and introducing them to the market, favoring deployment speed to the detriment of quality.
The financial sector, in particular banks, has always been the target of enormous effort by attackers, whether rival entities or individuals, seeking to compromise clients' assets as well as banks' credibility.
This work is being conducted in the context of an MSc project, which will carry out the necessary security assessments and present the results and conclusions at its end. In the context of this paper it is not yet possible to present results; the paper therefore focuses on the selection of web applications, the identification of methodologies and the results-processing mechanism.

2 Assessing the Security of the Portuguese Web Applications

This section provides a description of the approach and the different methodologies that will be used to conduct the assessment work. A web application security assessment cannot be performed without first defining a set of steps to be followed. It is important to create a guideline for this kind of work, mostly because the amount of information and techniques can be overwhelming. Thus, six steps were defined:
1. Web application security assessment methodologies analysis
2. Vulnerabilities identification
3. Selection of the Web applications to be tested

4. Web applications security assessment methodology
5. Apply the methodology to the web applications
6. Tests results

2.1 Web Application Security Assessment Methodologies Analysis


The first approach in this work will be to analyze the different methodologies for conducting web application security assessments. A full web application security assessment would involve inspecting not only the web application itself but also documentation, manuals and other relevant documents, as well as the whole network structure in which the web applications are integrated.
Web application security assessment can thus be based on several techniques, such as manual inspection and review, threat modeling, code review and penetration testing. Because access to these web applications' code, life cycle and other back-end material is not available, the focus has to be directed to penetration testing [4][5]. Penetration testing will allow the evaluation of the security mechanisms of these web applications by simulating an attack and capturing the way the web applications react. The entire process, whose flow is described in Fig. 1, will involve an active analysis of the applications for any weaknesses, such as technical flaws or software vulnerabilities, resulting in reports where the test data will be documented.

Fig. 1. Testing process flow design

2.2 Vulnerabilities Identification


At this stage, the most common threats to web application security are identified. The OWASP Top 10 [6] is extremely important here, as it provides the necessary guidelines for identifying the most exploited vulnerabilities in web applications. The ten most common vulnerabilities most recently identified by OWASP are the following:
1. Cross Site Scripting (XSS)
2. Injection Flaws
3. Malicious File Execution


4. Insecure Direct Object Reference
5. Cross Site Request Forgery (CSRF)
6. Information Leakage and Improper Error Handling
7. Broken Authentication and Session Management
8. Insecure Cryptographic Storage
9. Insecure Communications
10. Failure to Restrict URL Access

These vulnerabilities are not equally distributed across web applications, so the tests will be conducted in such a way that the most common vulnerabilities are tested more intensively, validating each web application against the most common flaws. Although only these vulnerabilities are listed, this does not preclude testing for other vulnerabilities not described above.
2.3 Selection of the Web Applications to Be Tested
To conduct a serious evaluation, a pre-selection of target entities will be made. This is an important step, since it allows choosing a representative set of entities belonging to each domain. The Portuguese WWW panorama can therefore be assessed in these two domains (public services and banking institutions) without testing every possible entity or web application, especially regarding banking institutions. Not every entity in each domain has to be present in this list; if the most important and representative ones are tested, the results will give fairly conclusive information on the overall state of the Portuguese WWW panorama.
To represent government public services, the chosen web applications are the ones that allow citizens to perform crucial operations on behalf of individual and collective entities. The second area represents Portuguese banking entities and is composed of different banks, private and public, which also allows an interesting comparison between the security implemented in the private and public sectors.
Table 1. Web application set to be tested

Public administration services online web applications:
Portal das Finanças - http://www.portaldasfinancas.gov.pt/
ADSE - http://www.adse.pt/
Segurança Social - http://www.seg-social.pt/
Portal do Cidadão - http://www.portaldocidadao.pt/

Financial banking services:
Millennium BCP - http://www.millenniumbcp.pt/
BES - http://www.bes.pt
Caixa Geral de Depósitos - http://www.cgd.pt
BPI - http://www.bpi.pt
Banif - http://www.banif.pt
Santander Totta - http://www.santandertotta.pt


As the work presented in this paper progresses, the list of public administration service portals can, and most probably will, be extended. Many critical actions can be performed through the web applications described in this section, and more services are available inside these portals, so the main entry point will be one of the portals listed here. Nevertheless, during testing and further investigation, different portals or web applications may appear and, if sufficiently relevant, will be included in the results of this assessment.
2.4 Web Applications Security Assessment Methodology
This section describes the security assessment methodology that will be followed to conduct the tests on the selected web applications, and how the tests are going to be structured. As previously stated, these tests will be performed on web applications for which documentation, source code, and details of the software and network infrastructure are not available and cannot be accessed. This forces the methodology to be based on a black-box approach, which fits one particular testing method for web applications: penetration testing.
The penetration testing process is divided into two main phases:

Passive mode: information gathering, in which information about the web applications is collected mostly through discovery, reconnaissance and enumeration techniques. This is the first step in penetration testing: since nothing is known in advance about the web application to be tested, this is the way to learn about its surrounding environment.
Active mode: vulnerability analysis, in which more specific tests are performed to assess particular vulnerabilities related to the web application's business logic, session management, data validation, and so on.
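As an illustration of passive-mode information gathering, one of the simplest probes reads the HTTP response headers a server volunteers. The Python sketch below is a hedged example: the target URL is a placeholder and the header selection is an assumption about what commonly leaks platform details:

```python
from urllib.request import Request, urlopen

def fingerprint(url: str) -> dict:
    """Fetch a URL and return response headers that often leak platform
    details (web server brand/version, application framework, session tech)."""
    req = Request(url, headers={"User-Agent": "security-assessment-demo"})
    with urlopen(req, timeout=10) as resp:
        return {name: resp.headers.get(name)
                for name in ("Server", "X-Powered-By", "Set-Cookie")
                if resp.headers.get(name)}

# Placeholder target; real assessments must be authorized (see Sect. 3).
# fingerprint("http://www.example.com/")
```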

A high-level view of the whole project methodology is described below:
1. Discovery;
2. Documentation and analysis of the discovery results;
3. Creation of attack simulations on the target entity;
4. Analysis of each attack;
5. Documentation of the attack results;
6. Solutions to mitigate the problems (when possible);
7. Presentation of the results to the entity (if required).

Documentation plays a very important role in penetration testing. In this particular case, where a considerably large set of web applications is going to be tested, it assumes special relevance. It is impracticable to perform every test and produce a report for each one at the end without proper documentation along the way. Thus, documenting tests, results and problems throughout the assessment process will be crucial to the chosen methodology and will allow more reliable results to be achieved.


2.5 Apply the Methodology to the Web-Applications


To be applied to the selected web applications, the penetration-testing methodology, following the OWASP Testing Guide [7], was divided into several phases and processes that allow the applications' multiple security levels to be understood. The different phases of the penetration tests to be applied to the selected web applications are described below. Only a high-level view of these phases is given, since a more detailed view is not the central point of this paper and would fall within the scope of a deep analysis of the testing processes.
Two main testing methodologies are used to assess the web applications. OWASP and WASC [8] have both identified the main threats to web applications, each of these two organizations producing penetration-testing guidelines based precisely on the most common vulnerabilities and critical aspects. OWASP has produced a document that identifies not only what to test when performing penetration testing but also how. This document is the OWASP Testing Guide and will be an important reference throughout this project. Analogously, WASC elaborated a document, named Threat Classification, which states the most common threats to the security of a web application. Alongside the Testing Guide, the Threat Classification document will help to better identify every attack and threat class web applications might suffer.
OWASP Testing Guide
Information Gathering
Testing: Spiders, robots, and Crawlers
Search engine discovery/Reconnaissance
Identify application entry points
Testing for Web Application Fingerprint
Application Discovery
Analysis of Error Codes
Configuration Management Testing
SSL/TLS Testing
DB Listener Testing
Infrastructure configuration management testing
Application configuration management testing
Testing for File extensions handling
Old, backup and unreferenced files
Infrastructure and Application Admin Interfaces
Testing for HTTP Methods and XST
Authentication Testing
Credentials transport over an encrypted channel
Testing for user enumeration
Default or guessable (dictionary) user account
Testing For Brute Force


Testing for Bypassing authentication schema


Testing for Vulnerable remember password and password reset
Testing for Logout and Browser Cache Management
Testing for Captcha
Testing for Multiple factors Authentication
Testing for Race Conditions

Session Management Testing


Testing for Session Management Schema
Testing for Cookies attributes
Testing for Session Fixation
Testing for Exposed Session Variables
Testing for CSRF
Authorization testing
Testing for path traversal
Testing for bypassing authorization schema
Testing for Privilege Escalation
Business logic testing
Testing for business logic flaws in a multi-functional dynamic web application requires thinking in unconventional ways trying to uncover business
logic flaws.
Data Validation Testing
Testing for Reflected Cross Site Scripting
Testing for Stored Cross Site Scripting
Testing for DOM based Cross Site Scripting
Testing for Cross Site Flashing
SQL Injection
LDAP Injection
ORM Injection
XML Injection
SSI Injection
XPath Injection
IMAP/SMTP Injection
Code Injection
OS Commanding
Buffer Overflow Testing
Incubated vulnerability testing
Testing for HTTP Splitting/Smuggling
Denial of Service Testing
Testing for SQL Wildcard Attacks
Locking Customer Accounts


Buffer Overflows
User Specified Object Allocation
User Input as a Loop Counter
Writing User Provided Data to Disk
Failure to Release Resources
Storing too Much Data in Session

Web Services Testing


WS Information Gathering
Testing WSDL
XML Structural Testing
XML Content-level Testing
HTTP GET parameters/REST Testing
Naughty SOAP attachments
Replay Testing
AJAX Testing
AJAX Vulnerabilities
Testing For AJAX
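To make one item of the testing-guide outline above concrete (testing for reflected cross-site scripting), a minimal probe injects a marker string into a parameter and checks whether the response echoes it back unencoded. In this Python sketch the URL and parameter name are placeholders, not targets from this study:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

MARKER = "<xss-probe-1337>"

def reflects_unencoded(base_url: str, param: str) -> bool:
    """Send MARKER in a query parameter and report whether the response body
    echoes it back verbatim (a necessary condition for reflected XSS)."""
    url = f"{base_url}?{urlencode({param: MARKER})}"
    with urlopen(url, timeout=10) as resp:
        body = resp.read().decode(errors="replace")
    return MARKER in body

# Placeholder target; only run against applications you are authorized to test.
# reflects_unencoded("http://test.example.com/search", "q")
```

A verbatim reflection is not yet an exploit; a full test would follow up with context-aware payloads, which is what the guide's detailed procedures cover.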
WASC Threat Classification
Authentication
Brute Force
Insufficient Authentication
Weak Password Recovery Validation
Authorization
Credential/Session Prediction
Insufficient Authorization
Insufficient Session Expiration
Session Fixation
Client-side Attacks
Content Spoofing
Cross-site Scripting
Command Execution
Buffer Overflow
Format String Attack
LDAP Injection
OS Commanding
SQL Injection
SSI Injection
XPath Injection


Information Disclosure
Directory Indexing
Information Leakage
Path Traversal
Predictable Resource Location
Logical Attacks
Abuse of Functionality
Denial of Service
Insufficient Anti-automation
Insufficient Process Validation
Although there is some overlap between the two methodologies, this helps to cover web application threats more thoroughly, since the penetration-testing references come from two major organizations that have focused their efforts on this common purpose.
2.6 Tests Results
As a final stage, the results of the tests will be collected, including information on how the vulnerabilities can be exploited, what the exploitation risks are and what impact each vulnerability has on the web application; the data from each web application's tests will then be processed and conclusions drawn. Any security issues found will be presented to the system owner together with an assessment of their impact and a proposal for mitigation or a technical solution.
As suggested by Andres Andreu [9], the final document should contain data important for the target entity, which should become aware of issues such as:

The typical modus operandi of attackers
The techniques and tools attackers rely on to conduct these attacks
Which exploits attackers will use
What data the web application is exposing

In order to better analyze the results of these tests and demonstrate them to stakeholders when needed, the document will be structured with the following sections:

Executive Summary: a high-level view of the tests, presenting statistics and the targets' overall standing with respect to attack susceptibility.
Risk Matrix: quantifies all discovered and verified vulnerabilities, categorizes all issues, identifies all resources potentially affected, provides relevant details of the discoveries and gives relevant references, suggestions and recommendations.
Best Practices (whenever possible): provides coding or architecture standards.
Final Summary: a sum-up of the entire effort and of the overall state of the penetration-test target.


The work presented here is bounded not only by technical constraints (as presented in the previous section) but also by legal considerations, which can become a major blocking force for its success. The following section highlights these issues.

3 Legal Constraints
Besides the normal technical details that will need to be handled, some of the major problems and challenges identified in this work are related to legal aspects and constraints. Most of the work described in this paper is bounded by legislation. In particular, penetration testing, when not properly authorized by the tested entity, can have harmful legal consequences.
One stage of this work will be to ask the target entities for permission to perform these tests, which, of course, may or may not be granted. Another issue concerns the results: some entities may accept that the tests are performed, mostly because it is in their own interest, but demand that the results remain protected from external viewers.
From one perspective, it remains an open question whether this permission has to be asked in the scope of this project. Although these tests can in some cases present a threat to the web application itself, and consequently to the entity holding it, the intention is not to perform any criminal or malicious act against it, and the tests will rely only on actions that any external user can perform.
Nonetheless, authorization will be requested and measures will be taken to minimize possible legal and functional problems for the targets when performing these tests. These measures can be summarized as:

Getting the target entity to establish and agree with us, the testers, on clear time frames for the pen-testing exercise;
Getting the target entity to clearly agree that we are not liable for anything going wrong that may have been triggered by our actions;
Finding out whether the target entity has any non-disclosure agreements that have to be signed prior to the pen tests;
Getting the target entity's relevant contacts for any unexpected situation.

As a last resort, if permission is denied, the project scope can be adapted, changing targets to more receptive ones without invalidating the whole project.

4 Conclusions
The work presented in this paper defines the methodologies, techniques and tools that will be used to conduct the Portuguese web application security assessment. These assessments should be considered of the highest importance by the entities that develop and distribute those web applications, mostly because the applications are used to perform highly sensitive operations.
A set of Portuguese public services and financial banking services was chosen and a methodology was drawn up, defining testing phases, processes and tools that can identify the most common vulnerabilities in web applications, bounded by the recommendations and best practices advocated by international organizations such as OWASP.
As an end result, it will be clearly identified, for each web application, whether it has security flaws. Reports will be produced clearly explaining which tests were performed and how, which vulnerabilities were identified, and solutions or workarounds, if found, for mitigating the problems. Information will also be provided on how severe those flaws are and which implications they have, or could have, for the entity holding the web application.
Although full security assessments should also be based on documentation and code review, which can reveal hidden security issues, these penetration tests should provide a very close view of the web applications' security.
This work can also serve as a guideline for extrapolating penetration tests to other web applications, which can be very important and interesting from a business point of view, especially because these tools, methodologies and frameworks are freely available. Penetration testing can provide a huge service to these two sectors, since the Portuguese Government and the banks obviously rely on their reputation and service availability to maintain a certain amount of trust with clients, which many times justifies investments in the security area.
In particular, these assessments will allow these entities to answer questions they probably ask themselves every day: "What is our level of exposure?", "Can our critical applications be compromised?" and "What risks are we running by operating on the Internet?".

References
1. Petukhov, A., Kozlov, D.: Detecting Security Vulnerabilities in Web Applications Using Dynamic Analysis with Penetration Testing. Computing Systems Lab, Department of Computer Science, Moscow State University (2008)
2. Holz, T., Marechal, S., Raynal, F.: New Threats and Attacks on the World Wide Web. IEEE Computer Society, Los Alamitos (2006)
3. Simplex Program, http://www.simplex.pt
4. Budiarto, R., Ramadass, S., Samsudin, A., Noor, S.: Development of Penetration Testing Model for Increasing Network Security. IEEE Press, Los Alamitos (2004)
5. Arkin, B., Stender, S., McGraw, G.: Software Penetration Testing. IEEE Press, Los Alamitos (2005)
6. van der Stock, A., et al.: OWASP Top 10: The Ten Most Critical Web Application Security Vulnerabilities. OWASP (2007)
7. Agarwwal, A., et al.: OWASP Testing Guide v3.0. OWASP (2008)
8. Auger, R., et al.: Web Application Security Consortium: Threat Classification. WASC Press (2004)
9. Andreu, A.: Pen Testing for Web Applications. Wiley Publishing, Indianapolis (2006)

Building Web Application Firewalls in High
Availability Environments

Juan Galiana Lara and Ángel Puigventós Gracia

Internet Security Auditors. Santander, 101 A 2. 08030, Barcelona, España
{jgaliana,apuigventos}@isecauditors.com

Abstract. The number of web applications and web services increases every day due to the ongoing migration to this type of environment. In these scenarios, it is very common to find all kinds of vulnerabilities affecting web applications, and traditional methods of protection at the network and transport levels are not enough to mitigate them. Moreover, there are also situations where the availability of information systems is vital for proper functioning. To protect our systems from these threats, we need an easily scalable component acting on layer 7 of the OSI model, where the HTTP and HTTPS protocols reside, that allows us to analyze traffic. To address these problems, this paper presents the design and implementation of an open-source application firewall, ModSecurity, emphasizing the use of the positive security model and deployment in high-availability environments.
Keywords: application firewall, whitelist
ModSecurity, OpenBSD, Carp, Pfsync.

analysis,

high

availability,

1 Introduction
Due to the large number of threats in web applications, it is essential to protect our
information systems. In that context, it is vitally important to follow a design process
with security measures that ensure the integrity, confidentiality and availability of
these resources.
Generally, most information systems have network-level protections sophisticated enough to block malicious attacks in the first four layers of the TCP/IP model, while the exploitation of vulnerabilities in the application layer keeps increasing and the existing measures, such as firewalls or intrusion detection systems at the network or transport layer, are not sufficient. Security in Web applications and Web services is a big problem due to the lack of measures to protect the systems from these threats.
It is important to note that introducing an application firewall in our network topology increases the number of points of failure and reduces the SLA, which is so important in Web environments. Therefore, techniques must be implemented to ensure high availability for business continuity.

The solution is to implement an application firewall that is scalable and responds to the issues we have raised. To develop the project we will use open source solutions, because they offer low cost and great flexibility to configure and adapt to the requirements.
The open source alternative chosen was ModSecurity [1], because it offers a few advantages: it includes countless security features, stability, reliability and good documentation, and it is free. We will perform the configurations using this free open source software, released under the GNU GPLv2 license [2], in combination with Apache, ModProfiler [3] and the OpenBSD [4] operating system.

C. Serrão, V. Aguilera, and F. Cerullo (Eds.): IBWAS 2009, CCIS 72, pp. 75–82, 2010.
© Springer-Verlag Berlin Heidelberg 2010

2 ModSecurity
ModSecurity operates as an Apache module, intercepting HTTP traffic and performing a comparison process for each request. If a request is classified as an attack, ModSecurity follows the actions specified in the configuration. The main function of this solution is filtering requests, analyzing the content of the HTTP requests in both incoming and outgoing traffic. One advantage of ModSecurity over a NIDS is its ability to filter HTTPS traffic. In a scenario with a network IDS filtering requests, if the traffic is encrypted using SSL/TLS the IDS cannot parse the requests, so the attacks go undetected. In this case the use of SSL/TLS, which in most cases protects us, becomes an advantage for the attacker, who can hide his actions. However, ModSecurity, working embedded in the web server, processes the data once it has been deciphered: first mod_ssl decrypts the request and, once it is in plain text, ModSecurity can analyze it correctly.

The life cycle of a request passes through a series of phases, with the goal of optimizing the search for anomalies and blocking an attack as soon as possible. This increases performance, because if we are already sure that a request is malicious in phase 1, there is no need to analyze it in the remaining phases.

Fig. 1. Phases of analysis


The process comprises five phases. The first phase is the analysis of the HTTP headers (REQUEST_HEADERS); then filtering is done on the body of the HTTP requests (REQUEST_BODY phase), and it is in this process that the highest number of attacks is detected. Then the RESPONSE_HEADERS phase and the RESPONSE_BODY phase are performed; both analyze the responses to requests in order to prevent information leaks. Finally, the LOGGING phase is processed; it is responsible for the registration (log) of the complete request, and it is very useful for future forensic analysis in case of intrusion or other scenarios.
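As an illustration of how rules attach to these phases, the following ModSecurity 2.x rules are a hypothetical sketch; the rule IDs, patterns and messages are our own examples, not taken from any deployment described here:

```apache
SecRuleEngine On
# Response body inspection must be enabled for phase 4 rules to see it
SecResponseBodyAccess On

# Phase 1: inspect the request headers as soon as they arrive
SecRule REQUEST_HEADERS:User-Agent "@contains sqlmap" \
    "id:1001,phase:1,deny,status:403,msg:'Scanner detected'"

# Phase 2: inspect the request body, where most attacks are detected
SecRule ARGS "@rx (?i)<script" \
    "id:1002,phase:2,deny,status:403,msg:'XSS attempt'"

# Phase 4: inspect the response body to prevent information leaks
SecRule RESPONSE_BODY "@contains ODBC Error" \
    "id:1003,phase:4,deny,status:500,msg:'Database error leakage'"

# Phase 5: log the complete transaction
SecAction "id:1004,phase:5,pass,log,msg:'Transaction logged'"
```

A request rejected by rule 1001 in phase 1 is never evaluated against the phase 2 and phase 4 rules, which is exactly the performance gain described above.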

3 Security Models
There are two security models for the classification of requests, and both can coexist: the negative model can be used to generalize and the positive one to particularize.

In the negative model everything is allowed by default except what is explicitly prohibited, while in the positive model everything that is not expressly permitted is forbidden. The IDS/IPS systems used in Web applications, and specifically ModSecurity, can operate in both modes.
In the negative security model, the system requires a black list of rules with the goal of blocking malicious requests. When a request arrives, a search process starts against the database that contains all known attacks; if a match is found, the request is blocked. Some of these systems work in conjunction with scoring rules, giving a score to each request and blocking those that exceed a certain threshold.

In the positive security model, a template of the Web application is created that specifies in detail the operations allowed in the application. Everything outside this template is blocked. The format of all parameters must be carefully specified, so that if an attacker makes changes by sending unauthorized values the access is blocked.
The positive model is more appropriate and provides more safety in critical environments than the negative model, since it helps to protect the systems from unknown attacks or 0-day exploits. These 0-day exploits are programs or scripts that exploit vulnerabilities for which there is no patch or fix available yet. The big potential of this approach makes a greater effort necessary to create a scenario of this type.
One tool that tries to facilitate the construction of rules with a white list approach is REMO [5] (Rule Editor for ModSecurity), which offers a graphical interface that makes the process of writing rules in the positive model easier, but it does not support automation.

To help configure an application firewall following the guidelines of the positive model, there is a tool called ModProfiler [3], which analyzes the traffic passing through it, observes what is valid and what is not, and can define the types and the maximum sizes of the parameters. This system operates under the premise of denying everything that is not known to be valid. By default, web applications normally allow any HTTP method and any number and type of parameters, although in most cases they work with a smaller set.


Following this model and establishing the correct configuration gives us several advantages:

- Prevent attacks that attempt to exploit HTTP methods other than those permitted, which otherwise could be available by default.
- Disable the use of exploits that rely on encodings not known or not permitted by the application.
- Prevent information leaks from files that are hosted on the server but are not part of the application and were left forgotten in the root directory of the web server.
- Prevent the use of enabled debug modes, which provide very useful information to a potential attacker, and block any operation outside of what is considered valid within the web application.

By using this approach we can specify, for each web application, the files and interfaces that will be used, and for each of them the number of parameters, their type and size limits, and other restrictions such as the encodings or HTTP methods allowed.
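As a sketch of what part of such a whitelist template could look like, the following hypothetical ModSecurity rules restrict a single interface to two parameters of a known format (the path, parameter names, rule IDs and size limits are illustrative assumptions, not taken from the paper's configuration):

```apache
<LocationMatch "^/login\.php$">
    # Allow only the HTTP methods this interface actually uses
    SecRule REQUEST_METHOD "!@rx ^(?:GET|POST)$" \
        "id:2001,phase:1,deny,status:405,msg:'Method not whitelisted'"

    # Reject any parameter name outside the template
    SecRule ARGS_NAMES "!@rx ^(?:user|password)$" \
        "id:2002,phase:2,deny,status:403,msg:'Unknown parameter'"

    # Enforce the format and maximum size of each known parameter
    SecRule ARGS:user "!@rx ^[a-zA-Z0-9]{1,32}$" \
        "id:2003,phase:2,deny,status:403,msg:'Invalid user format'"
    SecRule ARGS:password "!@rx ^.{1,64}$" \
        "id:2004,phase:2,deny,status:403,msg:'Password exceeds size limit'"
</LocationMatch>
```

Anything that does not match this template is denied, which is precisely the premise of the positive model.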

4 Reverse Proxy Topology


The IPS/IDS can operate mainly in two modes. The first mode is integrated with the web server, with the restrictions that this entails: it is not scalable (because it is embedded in the web server) and the server must be Apache.
In many environments other Web or application servers are used, such as Microsoft Internet Information Services, Lighttpd, IBM WebSphere, Oracle Application Server or BEA WebLogic. In these environments it is necessary to separate the IDS/IPS from the web or application server, i.e., to use the reverse proxy mode.

A reverse proxy is a proxy that appears to be a web server to the clients but in fact forwards the requests to one or more web servers. This topology is completely transparent to the clients, because the configuration is done on the server side, and it provides an additional layer of protection between the public Internet and the internal Web servers, thanks to the concealment of the systems architecture behind the proxy.
A reverse proxy can offer more advantages besides security, such as acceleration of SSL encryption with dedicated hardware, saving the web servers the computational cost of SSL, load balancing across multiple servers, reducing the workload of the Web servers, and optimizing bandwidth by caching static content such as images or other graphical content.
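A minimal sketch of this topology with Apache's mod_proxy follows; the host name and the internal address are placeholders, not taken from the test setup described later:

```apache
<VirtualHost *:80>
    ServerName www.example.com

    # Act as a reverse proxy only, never as an open forward proxy
    ProxyRequests Off
    ProxyPreserveHost On

    # Forward all requests to the internal web server and rewrite the
    # Location headers of its responses back to the public name
    ProxyPass        / http://10.10.10.10:80/
    ProxyPassReverse / http://10.10.10.10:80/
</VirtualHost>
```

The client only ever sees www.example.com; the internal server behind the proxy stays concealed.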

5 High Availability Environment


To have a fail-safe environment we will use the CARP (Common Address Redundancy Protocol) technology, with at least two servers configured as "Master" and "Slave". We can also set up multiple servers in slave mode, moving from an initial scheme of two servers (one master and one slave) to a scenario with more than one slave server. This configuration is achieved through a system of weights that sets the priority with which the next slave should take over as master.


We will need three network cards in order to use the pfsync functionality, which keeps the states of all active connections synchronized in the high availability setup, mitigating the loss of connections if the Master server falls, since all states are recovered on the Slave.

The operation of the CARP protocol is very simple: it acts as a virtual interface with a corresponding virtual IP and MAC address, i.e., the operating system creates this interface for managing the data, with its respective counterparts on the other nodes. With the pfsync functionality we can share the states of pf in real time with all the nodes.

Fig. 2. Architecture of CARP and pfsync

To configure CARP, the external virtual interface will use carp0 and the internal virtual interface will use carp1.
root@master:~# cat /etc/hostname.carp0
inet 172.26.0.1 255.255.255.0 172.26.0.255 vhid 1 advskew 0 pass secretkey
root@master:~# cat /etc/hostname.carp1
inet 10.10.10.1 255.255.255.0 10.10.10.255 vhid 2 advskew 0 pass secretkey

For the slave computer the configuration will be similar, but we will need to change the "advskew" value to 100, which acts as the weight value.
root@slave:~# cat /etc/hostname.carp0
inet 172.26.0.1 255.255.255.0 172.26.0.255 vhid 1 advskew 100 pass secretkey
root@slave:~# cat /etc/hostname.carp1
inet 10.10.10.1 255.255.255.0 10.10.10.255 vhid 2 advskew 100 pass secretkey


Finally, we need to check the "net.inet.carp.preempt" value on both computers for the proper functioning of the failover. This option is defined in the file "/etc/sysctl.conf" and is applied whenever the computer boots.
root@master:~# cat /etc/sysctl.conf | grep net.inet.carp.preempt
net.inet.carp.preempt = 1
root@slave:~# cat /etc/sysctl.conf | grep net.inet.carp.preempt
net.inet.carp.preempt = 1

We can start the "packet filter" network firewall from the command line, or modify the file "/etc/rc.conf" so that it starts automatically every time the system boots.
root@master:~# pfctl -d
root@master:~# pfctl -e
root@master:~# cat /etc/rc.conf | grep pf=YES
pf=YES
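For the failover to work, pf itself must also allow the CARP advertisements and the pfsync synchronization traffic; a minimal, hypothetical excerpt of "/etc/pf.conf" could look as follows (the interface names are assumptions, vic2 being the dedicated synchronization link):

```pf
# Pass CARP announcements on the external and internal interfaces
pass quick on { vic0, vic1 } proto carp keep state

# Pass pfsync state synchronization traffic on the dedicated link
pass quick on vic2 proto pfsync
```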

Once our firewall is up, we need to configure the synchronization network interface on each computer; in our case there are two computers, so we will use a crossover cable. Our dedicated synchronization interface will be the physical interface vic2, which is specified as the sync device of the pfsync interface, pointing to the address of the other computer.
root@master:~# cat /etc/hostname.vic2
inet 192.168.0.1 255.255.255.0 NONE
root@master:~# cat /etc/hostname.pfsync0
up syncdev vic2 syncpeer 192.168.0.2
root@slave:~# cat /etc/hostname.vic2
inet 192.168.0.2 255.255.255.0 NONE
root@slave:~# cat /etc/hostname.pfsync0
up syncdev vic2 syncpeer 192.168.0.1

6 Performance Charts
The following chart shows the time it took to serve 1, 100 and 400 requests: on the Web Server without protection, through an intermediate computer without applying any filtering, and with the ModSecurity filters enabled. The tests were performed in a Gigabit LAN, and the size of the page served is 20,000 bytes.

The setup used in the performance testing is detailed below. The Apache Web Server has been configured in reverse proxy mode, using the ProxyPass and ProxyPassReverse directives:
<Location />
  <IfModule security2_module>
    Include /path/www.example.com/modsecurity2.conf
  </IfModule>
  <IfModule mod_proxy.c>
    ProxyRequests off
    ProxyPass http://www.example.com:80/
    ProxyPassReverse http://www.example.com:80/
  </IfModule>
</Location>

Fig. 3. Time per request (in seconds) against the number of hits (1, 100 and 400) for the Web Server alone, the Reverse Proxy without filtering, and the Reverse Proxy with ModSecurity 2 enabled. The tests were conducted with the ApacheBenchmark (ab) tool.

7 Conclusions
Security in Web applications and Web services requires more than just a Layer 3 firewall. The number of attacks in these environments has increased so dramatically that we need a firewall at Layer 7 that understands the HTTP protocol and is able to protect us against these threats.

The web application firewall described in this article meets the expectations and solves the problems presented; among other features, it is able to analyze SSL/TLS traffic in both modes: black list and white list.

The design and implementation have been developed for a high availability environment, where it is very important to have the service always available and to avoid denial of service to legitimate users.

Security is very important throughout the software development life cycle, and also in the network filtering systems at each layer.

References
1. ModSecurity Open Source Web Application Firewall,
http://www.modsecurity.org
2. GNU GPLv2 License, http://www.gnu.org/licenses/gpl-2.0.html
3. ModProfiler, http://www.modsecurity.org/projects/modprofiler/
4. OpenBSD Operating System, http://www.openbsd.org
5. OWASP, http://www.owasp.org/
6. CARP and pfsync guide, http://www.kernel-panic.it

7. Hansteen, P.N.M.: The Book of PF. No Starch Press, San Francisco (2008)


8. Ristic, I.: Apache Security. O'Reilly Media, Inc., Sebastopol (2005)
9. Barnett, R.: Preventing Web Attacks with Apache. Addison-Wesley Professional, Reading
(2006)
10. Mobily, T.: Hardening Apache. Apress, USA (2004)
11. Adelstein, T., Lubanovic, B.: Linux System Administration. O'Reilly Media, Inc., Sebastopol (2007)
12. Bowen, R., Coar, K.: Apache Cookbook. O'Reilly Media, Inc., Sebastopol (2009)
13. Zwicky, E.D., Cooper, S., Chapman, D.B.: Building Internet Firewalls. O'Reilly & Associates, Sebastopol (2000)
14. Shah, S.: Hacking Web Services. Charles River Media, Boston (2007)
15. Grossman, J., Hansen, R., Petkov, P.D., Rager, A., Fogie, S.: XSS Attacks: Cross Site Scripting Exploits and Defense. Syngress Publishing, Inc., Burlington (2007)
16. Andreu, A.: Professional Pen Testing for Web Applications. Wiley Publishing, Inc., Indianapolis (2006)
17. Andrews, M., Whittaker, J.A.: How to Break Web Software. Addison-Wesley, Boston (2006)
18. Stuttard, D., Pinto, M.: The Web Application Hacker's Handbook: Discovering and Exploiting Security Flaws. Wiley Publishing, Inc., Indianapolis (2008)
19. Paxson, V.: Bro: A System for Detecting Network Intruders in Real-Time. In: Proceedings of the 7th USENIX Security Symposium (January 1998)

Author Index

Almeida, Miguel 15
Alonso, Chema 51
Alvarez, Gonzalo 39
Catteddu, Daniele 17
Cerullo, Fabio E. 19, 21
Chisinevski, Marc 1
Clarke, Justin 3
Corrons, Luis 7
Cruz, Dinis 5
de Frutos, Elena 27
Fernandez, Manuel 51
Fernández-Sanguino, Javier
Gracia, Ángel Puigventós 75
Guzmán, Antonio 51
Harper, Dave 11
Holgado, Pilar 27
Knobloch, Martin 25
Lara, Juan Galiana 75
López de Vergara, Jorge E. 27
Martín, Alejandro 51
Perez-Villegas, Alejandro 39
Roses, Simon 23
Sanz, Iván 27
Serrão, Carlos 63
Siles, Raul 13
Teodoro, Nuno 63
Torrano-Gimenez, Carmen 39
Villagrá, Víctor A. 27