
e-PG Pathshala

Subject : Computer Science

Paper: Computer Networks


Module: TCP – State diagram & Flow control
Module No: CS/CN/9

Quadrant 1 – e-text

Continuing from the previous module, we are on a journey to understand the working
principles of TCP. From the overview presented there, you would have gained an idea of
what is expected of TCP. In this module, we focus on two aspects of TCP – the state
diagram that depicts connection establishment and termination, and flow control. The
learning objectives for this module are given below.
Learning Objectives

 To understand the connection set-up of TCP
o State diagram
 To understand the adaptive flow control mechanism in TCP
o Silly window syndrome and its solutions

9.1 Functions of TCP – a review


In order to provide reliable service, TCP incorporates the following:

 Connection-oriented mechanism
o To ensure the presence of the receiver before transmission
 Sequencing & Acknowledgements
o To take care of out-of-order & duplicate packets

 Flow control
o To handle the slow-receiver problem

 Retransmission
o To take care of lost or erroneous packets

 Congestion control
o To take care of the bottleneck at the network

Of these functions, we first go into the details of connection management.


9.2 TCP Connection Establishment
We have already seen that TCP uses a three-way handshake for connection
establishment, and a four-step process for connection termination. Requests for
connection setup and teardown are indicated to the other side by means of the SYN and
FIN flags (Figure 9.1).

Figure 9.1 TCP connection setup and teardown


These steps show the exchange of requests and responses between the two sides. But
what exactly happens at the two ends during this process? That is what we will see now.
When we talk of TCP opening a connection, we must state the type of open operation it
performs, because TCP has several options for the open operation: it can perform
a passive open or an active open.
A passive open is one in which TCP waits (listens) at a specified port number for a
connection request to arrive from a remote host. Here again, there are two options: a
fully specified passive open and an unspecified passive open.
In an unspecified passive open, TCP listens for a connection coming from any remote
port on any remote host. In a fully specified passive open, it listens for a connection from a
specified remote host and port.
The passive open is typically performed by a server waiting for clients to contact it.
An active open is one in which TCP connects to a given remote (destination) host and
port, from a specified local port. A client typically performs an active open asking TCP to
connect to a remote server program running at a particular (often well-known) port.
9.2.1 TCP State Transition diagram
The state transition diagram during connection establishment and termination is shown in
Figure 9.2.

Figure 9.2 TCP State transition diagram

To understand this state transition diagram, let us look at a typical TCP connection
establishment scenario in a client-server setting. We will alternately examine what
happens on the client side and the server side as this process goes. Note that events that
cause transitions may be due to the application process issuing a command or due to TCP
segments received from the other side.
9.2.1.1 Connection Establishment
Normally, it is the server which first opens shop, waiting for clients to request a connection.
So, a server program does the first operation – it performs a passive open. This causes
TCP to move from the CLOSED state to the LISTEN state. It waits in this state until it
receives some input from the client side. The connection establishment part of the state
diagram is given in Figure 9.3, with the server-side and client-side activities in different
colours (black for server-side events, violet for client-side, and red for the special scenarios).
Figure 9.3 Connection Establishment
Now, let us say a client wants to connect to this particular server program. The client
does an active open, specifying the details of the server it wants to connect to.
This causes the client-side TCP to start the 3-way handshake to open a connection with
the server. It sends a SYN segment to the server, and its TCP transits from the CLOSED
state to the SYN_SENT state.
The server, which is in the LISTEN state, receives the SYN segment. This event causes it
to respond with a SYN+ACK segment, and the server now moves to the SYN_RCVD
state.
The SYN+ACK segment reaches the client, which is waiting in the SYN_SENT state. On
receiving the SYN+ACK, it sends an ACK (the third step of the 3-way handshake) and
moves into the ESTABLISHED state, where it can send and receive data.
On receiving the ACK segment, the server, which is waiting in the SYN_RCVD state,
moves to the ESTABLISHED state and begins to send and receive data.
In addition to this normal sequence of operation, a server-side TCP in the LISTEN state
may be asked by its application to "send" data to a specific client (shown in red in
Figure 9.3). In that case, the server-side TCP sends a SYN segment to that client and
moves to the SYN_SENT state. In that state, when it gets a SYN+ACK from the other
end, it moves to the ESTABLISHED state after sending an ACK.
Now it is possible that when TCP is in the SYN_SENT state, it receives a SYN from the
other end (a case where both ends are trying to connect to each other simultaneously –
shown in red in Figure 9.3). In that case, it responds by sending a SYN+ACK and moves
to the SYN_RCVD state, from which it moves to the ESTABLISHED state after receiving
an ACK.
9.2.1.2 Connection termination
When it comes to terminating the connection, remember that there are three different
scenarios: (1) One side (either client or server), say side A, closes the connection first,
and the other side, say side B, closes it later. A sends a FIN segment and gets an
ACK from B; then B sends a FIN and gets an ACK from A.
(2) Both sides close the connection almost simultaneously, i.e., both sides send FINs to
each other – which cross each other. The two sides then respond by sending the
respective ACKs.
(3) Side A sends a FIN. Just when it receives the FIN and is about to send an ACK, side B
also decides to close the connection. So it sends a FIN+ACK segment to A, and A
responds with an ACK.
We will look at each of these 3 scenarios in detail.
Scenario 1: The state transitions for scenario 1 are given in Figure 9.4. Actions of side A
are shown in black, and those of B in violet.

Figure 9.4 TCP Connection Termination Scenario 1


The application on side A issues a close command to TCP. TCP, which is in the
ESTABLISHED state, sends a FIN segment to side B and moves to the FIN_WAIT_1
state. It has sent its FIN and is waiting for the other side to send its FIN. That is, it has
stopped sending data, but is still receiving data from B.
The other end, side B, which is in the ESTABLISHED state, receives the FIN. It
responds with an ACK and moves to the CLOSE_WAIT state. This indicates that it is not
going to receive data from side A anymore, but can still send data to A if its application so
wishes. It is waiting for its application to close the connection.
Side A receives the ACK and goes to the FIN_WAIT_2 state, where it waits for side B to
send its FIN. The difference between the two FIN_WAIT states is that in FIN_WAIT_1 it
has not yet got the ACK for its FIN, while in FIN_WAIT_2 it has.
Now, the application on side B asks TCP to close the connection. The TCP of side B
sends a FIN segment and moves to the LAST_ACK state, where it waits for the ACK
for the FIN it has just sent.
When side A, which is in the FIN_WAIT_2 state, receives the FIN, it responds by sending
an ACK, and is almost ready to close the connection. Almost ready – because it waits for
a duration of about two MSLs (Maximum Segment Lifetimes) in a state called
TIME_WAIT before closing the connection. The purpose of this delay is to prevent
certain undesired events. Suppose there were no delay, and the connection were closed
as soon as the ACK was sent, but that ACK is delayed or lost. Since the ACK is lost, side B
will retransmit its FIN. There is no TCP on side A to receive that FIN; by itself, that is not a
problem. But suppose, in the meanwhile, another TCP connection is established between
the same two endpoints; the old FIN could arrive and terminate the new connection, which
is clearly not desired. Hence a delay is introduced to take care of delayed or lost ACKs and
retransmitted FINs before closing the connection.
Side B, on receiving the ACK, closes the connection. Note that side B does not need any
delay, because the only event it is waiting for is this last ACK; the other side has
already closed its connection!
Scenario 2: The state transitions for scenario 2 are given in Figure 9.5. Actions of side A
are shown in blue, and those of B in red. As we will see, both sides go through the same
set of states.

Figure 9.5 TCP Connection Termination – Scenario 2


This scenario represents simultaneous closure of the connection. Both sides receive a
close command from their applications. They respond by sending a FIN segment to the
other side and move to the FIN_WAIT_1 state.
In this state, each receives the FIN from the other side. On receiving the FIN, each
sends an ACK to the other side and moves into the CLOSING state, where it waits for the
ACK in response to the FIN it had sent.
On receiving the ACK, each moves to the CLOSED state through the TIME_WAIT state.
Scenario 3: The state transitions for scenario 3 are given in Figure 9.6. Actions of side A
are shown in black and green; actions of side B are not shown explicitly.

Figure 9.6 TCP Connection Termination – Scenario 3


The application on side A gives a close command, and TCP responds by sending a FIN
segment and moving to the FIN_WAIT_1 state.
Side B now receives this FIN, having also received a close command from its
application. It sends a FIN+ACK and moves to the CLOSING state (this transition is
not shown in the diagram).
Side A, which is in the FIN_WAIT_1 state, receives this FIN+ACK, responds by
sending an ACK, and goes to the CLOSED state through the TIME_WAIT state.
Side B, which is in CLOSING, receives this ACK and moves to the CLOSED state through
the TIME_WAIT state.
All three of these scenarios – with the events on both sides – are depicted in a single state
diagram, as TCP is a full-duplex connection.
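The transitions narrated above can also be summarized programmatically. The following
is a minimal sketch in Python; the event and action names are informal labels chosen for
this module's narrative, not protocol field names:

# (state, event) -> (action, next state), covering the establishment and
# termination paths discussed in this module.
TRANSITIONS = {
    ("CLOSED",      "passive open"):  (None,           "LISTEN"),
    ("CLOSED",      "active open"):   ("send SYN",     "SYN_SENT"),
    ("LISTEN",      "rcv SYN"):       ("send SYN+ACK", "SYN_RCVD"),
    ("LISTEN",      "send"):          ("send SYN",     "SYN_SENT"),
    ("SYN_SENT",    "rcv SYN+ACK"):   ("send ACK",     "ESTABLISHED"),
    ("SYN_SENT",    "rcv SYN"):       ("send SYN+ACK", "SYN_RCVD"),
    ("SYN_RCVD",    "rcv ACK"):       (None,           "ESTABLISHED"),
    ("ESTABLISHED", "close"):         ("send FIN",     "FIN_WAIT_1"),
    ("ESTABLISHED", "rcv FIN"):       ("send ACK",     "CLOSE_WAIT"),
    ("FIN_WAIT_1",  "rcv ACK"):       (None,           "FIN_WAIT_2"),
    ("FIN_WAIT_1",  "rcv FIN"):       ("send ACK",     "CLOSING"),
    ("FIN_WAIT_1",  "rcv FIN+ACK"):   ("send ACK",     "TIME_WAIT"),
    ("FIN_WAIT_2",  "rcv FIN"):       ("send ACK",     "TIME_WAIT"),
    ("CLOSING",     "rcv ACK"):       (None,           "TIME_WAIT"),
    ("CLOSE_WAIT",  "close"):         ("send FIN",     "LAST_ACK"),
    ("LAST_ACK",    "rcv ACK"):       (None,           "CLOSED"),
    ("TIME_WAIT",   "2 MSL timeout"): (None,           "CLOSED"),
}

# Replaying termination scenario 1 from side A's point of view:
state = "ESTABLISHED"
for event in ("close", "rcv ACK", "rcv FIN", "2 MSL timeout"):
    action, state = TRANSITIONS[(state, event)]
    print(event, "->", state, f"({action})" if action else "")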
9.3 TCP Flow Control
Flow control, as we know, takes care of the fast-sender slow-receiver problem. The receiver
may be slow either because the receiving process is slow or because the buffer on the
receiver side is small. We also know that there are two well-known techniques for flow
control, namely stop-and-wait and sliding window. Of these, stop-and-wait is too slow and
inefficient, so most communication systems use some form of sliding window flow
control. TCP is no exception: it too uses a variant of sliding window flow control, which it
calls adaptive flow control.
So the question that comes up is – what is adaptive about it? Let us look at what normally
happens in a sliding window protocol used at the data link layer. There, the window size is
fixed based on the round-trip time (RTT) of the link, to make efficient use of the
bandwidth, and on the amount of buffer available at the receiver. Both these factors tend
to be fixed for a given link. But if we consider applying the same idea to TCP's flow
control, we realize that it is difficult to fix either of these factors for a TCP end-to-end
connection. The end-to-end RTT is likely to vary due to various factors in the network.
Similarly, the amount of buffer available at the receiving end, and the speed of the
receiving process, are likely to vary. Hence the window size cannot be fixed: it varies,
and the flow control mechanism has to adapt to this variation. Adapting to the varying size
of the buffer as a connection progresses is where the "adaptive" part comes in.
Thus, instead of fixing a window size before a data transfer and sticking to that fixed
value, TCP allows the receiver to advertise the window size with every acknowledgement
or segment that it sends. The sender cannot exceed this window size, which is specified
in bytes in the "Advertised Window Size" field of the TCP header.
In addition to this window size, the sequence number and acknowledgment number fields
of the TCP header help to manage the overall flow control. The sender- and receiver-side
buffers are managed subject to this flow control mechanism. Each side specifies the
sliding window size in every TCP segment it sends, using the AdvertisedWindow field.
Using this information, and the size of the buffer on its side, each side determines how
much data it can send. That is, the effective window size that the sender can use is
determined by the advertised window and a few other parameters.
Let us look at how buffer management takes place in this adaptive flow control
technique. Both the sender and receiver sides maintain certain pointers to track their
state, as shown in Figure 9.7.
Figure 9.7 (a) Sending and (b) receiving buffer management. (The sending side tracks
LastByteWritten, LastByteSent and LastByteAcked; the receiving side tracks
LastByteRead, NextByteExpected and LastByteRcvd.)

9.3.1 Sender Side


The sender side has the following parameters (pointers):
MaxSendBuffer : This gives the (maximum) size of the sender-side buffer.
LastByteWritten : This points to the last byte of data written by the sender application into
the TCP buffer.
LastByteSent : This points to the last byte of data sent to the other side from the buffer.
LastByteAcked : This points to the last byte of data that has been acknowledged by the
receiver.
It is easy to see that the following relations must hold on the sending side:
LastByteAcked <= LastByteSent (you can't receive an ACK for data that has not been
sent yet!)
LastByteSent <= LastByteWritten (you can't send more data than the application has
written!)
LastByteWritten <= MaxSendBuffer (you can't write more data than the buffer can hold!)
The bytes between LastByteAcked and LastByteWritten are buffered. This includes
data that has been sent to the receiver but not yet acknowledged (LastByteSent -
LastByteAcked), as well as data that has been written by the application but not yet
sent to the receiver (LastByteWritten - LastByteSent).
When AdvertisedWindow information is received from the receiver, the sender determines
its effective window (the amount of data it can actually send) from the following
relations:
LastByteSent - LastByteAcked <= AdvertisedWindow (a condition that should always
hold, as we never send more data than the advertised window)
and the effective window is calculated as
EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
If the advertised window is greater than the unacknowledged data already sent, more
data can be sent; the amount is determined by how much bigger the advertised window
is than the unacked data.
But this does not prevent the sender application process from writing data into the buffer;
it may write data subject to the following condition. The sender-side buffer is a circular
buffer (you wrap around to the beginning once you reach the maximum buffer size), but
you cannot overwrite data that has not been acknowledged. Hence:
LastByteWritten - LastByteAcked <= MaxSendBuffer
So, if the application attempts to write y bytes of data, but this would end up overwriting
unacked data, the application should not be allowed to write; it should be blocked. This is
expressed as follows:
Block sender if (LastByteWritten - LastByteAcked) + y > MaxSendBuffer
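A minimal sketch of this sender-side bookkeeping in Python is given below, using the
numbers from the example in Section 9.3.3 (the class and method names are ours,
purely illustrative):

class SendBuffer:
    def __init__(self, max_send_buffer):
        self.max_send_buffer = max_send_buffer
        self.last_byte_written = 0
        self.last_byte_sent = 0
        self.last_byte_acked = 0

    def effective_window(self, advertised_window):
        # EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
        return advertised_window - (self.last_byte_sent - self.last_byte_acked)

    def can_write(self, y):
        # Block the sender if (LastByteWritten - LastByteAcked) + y > MaxSendBuffer
        return (self.last_byte_written - self.last_byte_acked) + y <= self.max_send_buffer

sb = SendBuffer(2500)
sb.last_byte_written, sb.last_byte_sent, sb.last_byte_acked = 2500, 2000, 500
print(sb.effective_window(1500))  # 0 -> no more data may be sent right now
print(sb.can_write(1000))         # False -> the application would be blocked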
9.3.2 Receiving side
On the receiving side, the following parameters are used:
MaxRcvBuffer : Maximum size of the receive buffer.
LastByteRcvd : This points to the last byte received in the buffer.
LastByteRead : This points to the last byte that was read by the receiving application.
NextByteExpected : This points to the next byte of data expected in sequence. Note that,
since data may be received out of sequence but still buffered, TCP keeps track of the latest
byte received (LastByteRcvd), which may not be in sequence, as well as
NextByteExpected. It is NextByteExpected that corresponds to the acknowledgment
number.
Given these parameters, it is easy to see that the following relations should hold:
LastByteRcvd - LastByteRead <= MaxRcvBuffer
Again, remember that the receive buffer is a circular buffer: data written into the
buffer wraps around when the maximum buffer size is reached, but you cannot
overwrite data that has not been read (consumed) by the application.
NextByteExpected <= LastByteRcvd + 1
If data is received in order, NextByteExpected will be equal to LastByteRcvd + 1;
otherwise it will be less than or equal to LastByteRcvd.
Now, the advertised window is calculated based on how much free buffer space is
available, given that the application has read only up to LastByteRead, and data up to
NextByteExpected - 1 has been received in sequence and buffered. Thus it is
calculated as follows:
AdvertisedWindow = MaxRcvBuffer - (NextByteExpected - 1 - LastByteRead)
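As a quick sanity check, this small function (an illustrative sketch, not real TCP code)
computes the advertised window from the receiver-side pointers:

def advertised_window(max_rcv_buffer, next_byte_expected, last_byte_read):
    # AdvertisedWindow = MaxRcvBuffer - (NextByteExpected - 1 - LastByteRead)
    return max_rcv_buffer - (next_byte_expected - 1 - last_byte_read)

print(advertised_window(2000, 501, 0))    # 1500, as in the example below
print(advertised_window(2000, 1001, 200)) # 1200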
9.3.3 A flow control example
All this jargon and these equations can be a little confusing, so let us look at an example
to see how simple the whole thing is. A sends data to B.
Let us assume that we have a 2500-byte send buffer and a 2000-byte receive buffer; the
receiver B has less capacity than the sender A. Thus MaxSendBuffer = 2500 and
MaxRcvBuffer = 2000.
Let us assume that each TCP segment can carry a maximum of 500 bytes (i.e., Maximum
Segment Size, MSS = 500 bytes). To keep things simple, let the initial sequence number
be 1. The actions and calculations at A and B are given below, alternating between the
two sides.

At B: During the 3-way handshake, receiver B specifies its AdvertisedWindow size (AWS).
Since the entire buffer is free, AWS = 2000, the size of the buffer (LastByteRead =
LastByteRcvd = 0).

At A: Assume the sender application has written 2500 bytes into the send buffer. So
LastByteSent = LastByteAcked = 0 and LastByteWritten = 2500. Since the received AWS
= 2000 and no data has been sent yet, EffectiveWindow = 2000, and A can send four
segments of data with sequence numbers 1, 501, 1001 and 1501 respectively. Now
LastByteSent = 2000 and LastByteAcked = 0.

At B: When the first segment is received, it occupies the first 500 bytes of the buffer:
LastByteRcvd = 500, NextByteExpected = 501, LastByteRead = 0 (no data read yet). The
ACK for this segment carries AWS = 2000 - (501 - 1 - 0) = 1500.

At A: When A receives this ACK, it updates LastByteAcked to 500 and calculates
EffectiveWindow = 1500 - (2000 - 500) = 0. Thus it cannot send any more data.

At B: Meanwhile, the second segment reaches B. Assume the receiving application has
read 200 bytes of data, so LastByteRead = 200, LastByteRcvd = 1000 and
NextByteExpected = 1001. The ACK is sent for byte 1001, with AWS =
2000 - (1001 - 1 - 200) = 1200.

At A: When this ACK reaches A, it updates LastByteAcked to 1000. EffectiveWindow =
1200 - (2000 - 1000) = 200, so A can send 200 bytes of data. Assume it sends these 200
bytes as a fifth segment with sequence number 2001. Now LastByteSent = 2200.

At B: Meanwhile, the third segment reaches B, but no further data has been read by the
application: LastByteRcvd = 1500, NextByteExpected = 1501. It sends an ACK with
AWS = 2000 - (1501 - 1 - 200) = 700.

At A: When this ACK reaches A, LastByteAcked = 1500 and EffectiveWindow =
700 - (2200 - 1500) = 0. Thus it cannot send any more data.

At B: In this manner, when the fourth segment reaches B, if 1000 bytes of data have been
read by B's application, then LastByteRead = 1000, LastByteRcvd = 2000 and
NextByteExpected = 2001. The ACK is sent with AWS = 2000 - (2001 - 1 - 1000) = 1000.

At A: When this ACK reaches A, EffectiveWindow = 1000 - (2200 - 2000) = 800. It can
send 800 bytes of data, but since only 300 more bytes are available to be sent, it sends a
sixth segment of 300 bytes with sequence number 2201.

At B: When the fifth segment reaches B, LastByteRcvd = 2200, or effectively 200
(because of wrap-around after 2000), NextByteExpected = 2201 (or 201) and
LastByteRead = 1000. AWS = 2000 - (2201 - 1 - 1000) = 800.

At B: When the sixth segment reaches B, LastByteRcvd = 2500 (or 500),
NextByteExpected = 2501 (or 501) and LastByteRead = 1000, giving
AWS = 2000 - (2501 - 1 - 1000) = 500.

And so on.

At this point, side A has sent all of the 2500 bytes of data written by its application. If it has
more data to send, it could write another 2500 bytes into the buffer, as all transmitted data
has been acknowledged, but it will not be able to send more than 500 bytes at once.
(Why? Look at the AWS. The short check below recomputes each step.)
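The walkthrough's arithmetic can be checked mechanically. The script below is an
illustrative aid that plugs each narrated step into the two window formulas; it recomputes
values, it does not simulate segment timing:

def aws(next_expected, last_read, max_rcv=2000):
    return max_rcv - (next_expected - 1 - last_read)

def eff_win(adv, last_sent, last_acked):
    return adv - (last_sent - last_acked)

steps = [
    ("ACK for segment 1", aws(501, 0),              1500),
    ("A after that ACK",  eff_win(1500, 2000, 500),    0),
    ("ACK for segment 2", aws(1001, 200),           1200),
    ("A after that ACK",  eff_win(1200, 2000, 1000), 200),
    ("ACK for segment 3", aws(1501, 200),            700),
    ("A after that ACK",  eff_win(700, 2200, 1500),    0),
    ("ACK for segment 4", aws(2001, 1000),          1000),
    ("A after that ACK",  eff_win(1000, 2200, 2000), 800),
    ("ACK for segment 5", aws(2201, 1000),           800),
    ("ACK for segment 6", aws(2501, 1000),           500),
]
for label, value, expected in steps:
    assert value == expected
    print(f"{label}: window = {value}")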
You can work out different scenarios to figure out how the data flow is controlled
depending on the buffer availability at both ends. For instance, suppose the application at
A wants to write another 1000 bytes into the buffer at this point. Will it be able to do so?
(Is there space in the buffer to hold 1000 new bytes?)
If we explore these different scenarios, we will see that certain very peculiar situations can
arise, which badly affect the performance of the network. We will look at one such
scenario that TCP faced as it evolved, and at some solutions to it. The phenomenon we
are referring to is the Silly Window Syndrome (SWS), explained below.
9.4 Silly Window Syndrome (SWS)
Consider this situation. A wants to send 2500 bytes of data to B; B is not interested in
sending any data to A, only ACKs. B has a MaxRcvBuffer of 2000 bytes. It advertises an
AWS of 2000, and A sends 2000 bytes (in 4 segments), but the application at B, being
very slow, does not read any data. Now, when the ACK is sent for the 2000 bytes, the
advertised window size will be 0: there is no place for new data. Look at the situation now.
A cannot send data until it gets a non-zero AWS from B, and B is not going to send any
ACK (and with it an AWS) unless it receives data from A. So a deadlock occurs: A cannot
send data until it gets a non-zero AWS from B, and B will not send an AWS until it
receives some data from A. Both get stuck.
A simple solution to this deadlock is for A to maintain a small timer in such a situation (i.e.,
after it gets an AWS = 0). Once this timer expires, A sends a small – one-byte – "probe"
segment to B. If by this time B has read some data, the window can be opened: there will
be space in the buffer for new data, B can advertise a non-zero AWS, and A can proceed
with sending data. We use a 1-byte probe segment to ensure that we do not lose much
data if the buffer is still full. It is like a feeler – send a little data and see how B responds.
If B has not read any data, this one byte will be dropped, and A has to send a probe
segment again after some time.
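A simplified sketch of this probing logic is given below. The helpers send_probe() and
current_aws() are hypothetical stand-ins for the real TCP machinery, and the fixed interval
is an arbitrary illustrative value (real TCPs use a persist timer, typically with exponential
backoff):

import time

PROBE_INTERVAL = 5.0  # seconds; illustrative value only

def wait_out_zero_window(send_probe, current_aws):
    # While the peer advertises a zero window, periodically send a
    # 1-byte probe; resume normal transmission once the window opens.
    while current_aws() == 0:
        time.sleep(PROBE_INTERVAL)
        send_probe()  # 1-byte segment; dropped if the buffer is still full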
Now this is not the end of the problem. Suppose that when B receives the one-byte probe
segment, it has read exactly one byte of data from its buffer. The byte from A will then be
stored in that one byte of free space, and an ACK will be sent with AWS = 0. A will again
send a probe segment after some time, and the same scenario can repeat: B receives
the one byte and sends an ACK with AWS = 0, and so on. What is happening is that data
is flowing from A to B, but in one-byte segments sent at intervals of the probe timer. The
window-based flow control has degenerated into a one-byte stop-and-wait flow control,
and one that triggers only periodically at that. This degeneration of the window-based
scheme into sending small-sized segments is called the "Silly Window Syndrome" (SWS).
Performance suffers because the overhead for every byte of data becomes very high
(a minimum of 20 header bytes per segment).
This is a case of the receiver causing SWS. Of course, we gave an extreme scenario of
sending one byte at a time, but advertising small windows (after an AWS of zero) will also
lead to SWS.
SWS can also be caused by the sender. This happens when the sending application writes
one byte at a time, and slowly at that. Its TCP will then begin to transmit small-sized
segments. This is referred to as sender-induced SWS.
Clearly, SWS needs a solution as it affects performance.
9.4.1 SWS solutions
The solution to this problem could come from either the receiver side or the sender side,
depending on who caused it. From the receiver side, one straightforward solution is to not
advertise small windows after advertising a zero window. The receiver could wait for a
considerable amount of space, say equal to an MSS (maximum segment size), to become
free before advertising a non-zero window.
Alternately, the receiver could delay sending its ACKs until some space is freed in its
buffer. But this could cause other issues. For instance, if the ACK is delayed too long, the
sender may time out and retransmit the packet – an unnecessary retransmission! So the
delay should not be too high, and it is difficult to determine the right value.
On the sender side, we have a solution proposed by Nagle, called Nagle’s algorithm.
Nagle’s Algorithm
This algorithm tries to reduce the transmission of small-sized segments, even when data is
being written very slowly by the sending application, by delaying the sending of data. The
trick is to determine the right amount of time for which sending can be delayed. If the
delay is too long, it hurts interactive applications, which send small amounts of data and
wait for a response. If the delay is too short, it does not solve the small-segment problem.
So Nagle's algorithm uses a self-clocking scheme to determine when data is to be
transmitted. It works like this:
When the application generates additional data:
if it fills a maximum segment (data >= 1 MSS) and the window is open, send it;
else, if there is unacknowledged data in transit (some data has been sent, but its ACK
has not been received), buffer the new data and send it when the ACK arrives;
else (there are no pending ACKs), send it even if it is < 1 MSS.
The idea of checking for unACKed data, and using the returning ACK as the trigger to
transmit new data, is where the "self-clocking" comes in. If there is an ACK yet to be
received, then there is activity in progress, so you can afford to wait for some time (up to
one RTT, round-trip time) before transmitting the newly generated data. This helps reduce
the sending of small-sized segments.
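The decision rule can be written compactly. Here is a minimal sketch in Python; the
function and parameter names are ours, and a real TCP would derive these values from
its connection state:

def nagle_should_send_now(pending_bytes, mss, window_open, unacked_in_transit):
    if pending_bytes >= mss and window_open:
        return True    # a full segment is ready: send it
    if unacked_in_transit:
        return False   # hold the data; the returning ACK will clock it out
    return True        # nothing in flight: send even a small segment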
Let us look at an example to better understand this.
Consider sender A generating data at the rate of 10 bytes/sec. Assume MSS = 250 bytes
and RTT = 3.5 sec.
Assume that 10 bytes are written into the buffer every second, starting at t = 1 sec. At
t = 1 sec, there are no pending ACKs; hence a segment consisting of these 10 bytes is
sent. The ACK for this segment will arrive at t = 4.5 sec. You can guess what will happen
now: up to t = 4.5 sec, all data written will be buffered, as per Nagle's algorithm.
At t = 2 sec, 10 bytes arrive in the buffer. They are not sent but held in the buffer, as an
ACK is pending for the first segment transmitted. The same happens at t = 3 sec and
t = 4 sec. We now have 30 bytes buffered (still less than an MSS). At t = 4.5 sec, when the
ACK for the first segment arrives, a segment with these 30 bytes is sent. We have avoided
sending three 10-byte segments, at the cost of some delay!
TCP actually allows the application to turn this algorithm on or off, depending on whether
it can tolerate this delay.
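In the widely used BSD socket API, this switch is exposed as the TCP_NODELAY option;
for example, in Python:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm for delay-sensitive, interactive traffic:
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)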
That brings us to the end of the discussion on TCP’s flow control technique.
9.5 Summary
To summarize, we have discussed two major components of TCP – the TCP state
diagram with respect to connection establishment and termination, and TCP's flow
control mechanism, including buffer management, calculation of window sizes, the silly
window syndrome, and its solutions, including Nagle's algorithm. We will look at other
components of TCP in subsequent modules.
Thank you!

