
Transport layer primitives

These primitives are part of the transport service interface, which varies depending on the
specific transport service being used.

1. LISTEN: This primitive is used by the server to await incoming connections from clients.
When the server executes the LISTEN primitive, it typically blocks until a client initiates a
connection.

2. CONNECT: The CONNECT primitive is executed by a client to establish a connection with
the server. When a client wants to communicate with the server, it executes the CONNECT
primitive, which involves sending a packet to the server encapsulating a transport layer
message.

3. SEND: This primitive allows application programs to send data over an established
connection. Once a connection is established between the client and server, application
programs can use the SEND primitive to transmit data.

4. RECEIVE: The RECEIVE primitive enables application programs to receive incoming data
from the other party in the connection. It allows application programs to retrieve data sent
by the other party.

5. DISCONNECT: Finally, the DISCONNECT primitive is used to terminate an established
connection. After the communication is complete or no longer needed, application programs
can execute the DISCONNECT primitive to close the connection.
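The five primitives above map directly onto Berkeley socket calls. The following loopback sketch (a toy echo exchange, not tied to any particular transport service) shows the sequence on both sides:

```python
import socket
import threading

# Minimal loopback sketch of the primitives: LISTEN -> listen/accept,
# CONNECT -> connect, SEND -> sendall, RECEIVE -> recv,
# DISCONNECT -> close.

def run_server(server_sock):
    conn, _ = server_sock.accept()      # LISTEN: block for a client
    data = conn.recv(1024)              # RECEIVE
    conn.sendall(data.upper())          # SEND (echo back, uppercased)
    conn.close()                        # DISCONNECT

def demo():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    t = threading.Thread(target=run_server, args=(server,))
    t.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))  # CONNECT
    client.sendall(b"hello")             # SEND
    reply = client.recv(1024)            # RECEIVE
    client.close()                       # DISCONNECT
    t.join()
    server.close()
    return reply
```

Running `demo()` performs one full connect/send/receive/disconnect cycle over the loopback interface.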

Berkeley Sockets

Berkeley Sockets, introduced with the Berkeley UNIX 4.2BSD software distribution in 1983,
are a set of transport primitives used for TCP and have become widely adopted for Internet
programming across various operating systems. The socket primitives offer a model that
provides more features and flexibility compared to earlier examples.

The SOCKET primitive is the initial step for servers, creating a new endpoint and allocating
table space within the transport entity. It specifies the addressing format, service type, and
protocol, returning a file descriptor similar to an OPEN call on a file.

BIND is used to assign network addresses to newly created sockets, allowing remote clients
to connect. This separation of socket creation and address assignment allows processes that
are indifferent to their addresses and those that require specific, well-known addresses to
coexist.

LISTEN prepares the server to queue incoming connection requests, facilitating the handling
of multiple simultaneous connection attempts. Unlike previous models, LISTEN is
non-blocking.

ACCEPT is a blocking call that waits for an incoming connection request. Upon arrival, it
creates a new socket with the same properties as the original and returns a file descriptor for
the new connection, allowing the server to handle the connection on the new socket while
continuing to wait for additional connections on the original socket.

On the client side, SOCKET is also used to create a new socket, but BIND is optional since
the client’s address is typically not important to the server. CONNECT initiates the
connection process and blocks until completion, establishing a full-duplex connection that
allows both sides to use SEND and RECEIVE for data transmission. Standard UNIX READ
and WRITE system calls can be used if the special options of SEND and RECEIVE are not
needed.

Connection release is symmetric, with both sides executing a CLOSE primitive to terminate
the connection.

The popularity of sockets stems from their role as the de facto standard for abstracting
transport services to applications. While commonly used with TCP to provide a reliable byte
stream, the socket API is versatile and can be employed with other protocols and for different
transport services, such as connectionless transport services where CONNECT sets the
remote peer’s address and SEND and RECEIVE handle datagram communication.
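The connectionless case described above can be sketched with Python datagram sockets: CONNECT merely records the remote peer's address, and SEND/RECEIVE move individual datagrams.

```python
import socket

# Receiving side: a datagram socket bound to an OS-chosen loopback port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

# Sending side: connect() performs no handshake, it only sets the peer.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.connect(("127.0.0.1", port))
send_sock.send(b"datagram")              # SEND a single datagram

data, addr = recv_sock.recvfrom(1024)    # RECEIVE it on the bound socket
send_sock.close()
recv_sock.close()
```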

Addressing

Addressing in the context of transport protocols involves specifying endpoints for
communication between application processes. Here’s an explanation using the terms and
concepts from the text:

Transport Service Access Points (TSAPs):

• TSAPs are specific endpoints in the transport layer to which application processes can
attach themselves for communication.
• They are necessary because a computer might have a single Network Service Access
Point (NSAP), and multiple transport endpoints need to be distinguished.

Network Service Access Points (NSAPs):

• NSAPs are the network layer addresses, like IP addresses, through which the TSAP
connections run.

Establishing a Transport Connection:

1. A server process attaches to a TSAP to wait for incoming connections.
2. An application process specifies both source and destination TSAPs to establish a
connection.
3. The application sends its message, and the server responds.
4. After the communication, the transport connection is released.

Discovering TSAPs:

• Services may have stable TSAP addresses listed in well-known places, like the
/etc/services file on UNIX systems.
• For dynamic or temporary services, a portmapper can be used to discover the TSAP
address by sending a service name to the portmapper, which then replies with the
TSAP address.

Portmapper:

• A special process that helps in finding the TSAP address for a given service name.
• New services must register with the portmapper, providing their service name and
TSAP.
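The register/lookup cycle above can be sketched as a toy portmapper (the service name and TSAP number here are invented for illustration):

```python
# Toy portmapper sketch: services register a (name -> TSAP) mapping,
# and clients look up a TSAP by service name before connecting.

class Portmapper:
    def __init__(self):
        self._registry = {}

    def register(self, service_name, tsap):
        # New services must register their name and TSAP.
        self._registry[service_name] = tsap

    def lookup(self, service_name):
        # A client sends a service name; the reply is the TSAP address.
        return self._registry.get(service_name)

pm = Portmapper()
pm.register("time-of-day", 1305)     # hypothetical TSAP number
```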

In essence, addressing in transport protocols is about defining and discovering the endpoints
(TSAPs) for establishing connections between application processes over the network,
facilitated by NSAPs and, when necessary, by a portmapper.

Establishing a connection

Establishing a connection in the transport layer is a complex process due to the unpredictable
nature of network behavior. Here’s an explanation using the concepts from the text:

Challenges in Connection Establishment:

• The network can lose, delay, corrupt, and duplicate packets, which complicates the
simple idea of sending a CONNECTION REQUEST and waiting for a
CONNECTION ACCEPTED reply.
• Networks that use datagrams can cause packets to take different routes, leading to
delays and potential duplication of packets.

The Problem of Delayed Duplicates:

• Delayed duplicates can cause serious issues, such as executing the same transaction
multiple times if the duplicates are mistaken for new requests.
• The goal is to establish connections reliably, ensuring that delayed duplicates are
recognized and rejected.

Solutions to Address Delayed Duplicates:

• Throwaway Transport Addresses: Generate a new transport address for each
connection, discarding it after the connection is released. This prevents delayed
duplicates from reaching a transport process but makes initial connections more
difficult.
• Unique Connection Identifiers: Assign a unique identifier (sequence number) to
each connection. Maintain a table of obsolete connections to check incoming requests
against it. This helps identify duplicates but requires indefinite storage of history
information.

Flaws in the Solutions:

• Both methods have drawbacks. The throwaway address method complicates initial
connections, while the unique identifier method requires persistent history, which is
problematic if a machine crashes and loses its memory.

In essence, connection establishment in the transport layer involves managing the
complexities of network communication, ensuring that connections are made reliably, and
handling the potential issues caused by delayed and duplicated packets. The text emphasizes
the need for algorithms that can handle these challenges effectively.

The text discusses the three-way handshake protocol introduced by Tomlinson in
1975 to solve a specific problem related to connection establishment in networking.
This protocol involves one peer verifying with the other that a connection request is
current.

In the typical setup procedure, when Host 1 initiates a connection, it selects a
sequence number (x) and sends a CONNECTION REQUEST segment to Host 2. Host 2
responds with an ACK segment acknowledging x and announcing its own initial
sequence number (y). Finally, Host 1 acknowledges Host 2's choice of an initial
sequence number in the first data segment it sends.

The text explains how the three-way handshake works in the presence of delayed
duplicate control segments. If a delayed duplicate CONNECTION REQUEST arrives at
Host 2, Host 2 sends an ACK segment to Host 1, seeking verification. If Host 1 rejects
the attempt to establish a connection, Host 2 realizes it was tricked and abandons
the connection, preventing any damage.

In the worst case scenario, where both a delayed CONNECTION REQUEST and an
ACK are present, Host 2 proposes using y as the initial sequence number for traffic to
Host 1. When the second delayed segment arrives, Host 2 recognizes it as an old
duplicate, preventing accidental connection setup.

TCP utilizes the three-way handshake to establish connections, employing a
timestamp to extend the sequence number to prevent wrapping. Additionally, for
security reasons, TCP uses pseudorandom initial sequence numbers to avoid
vulnerabilities associated with predictable sequences.
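The rejection of delayed duplicates can be modeled as a toy exchange (made-up segment tuples, not real TCP): Host 1 only completes connections whose acknowledged sequence number matches a request it actually has outstanding.

```python
# Toy model of the verification step in the three-way handshake.

def handshake(host1_open_requests, incoming_request_x):
    # Host 2 receives a CONNECTION REQUEST carrying sequence number x
    # (possibly a delayed duplicate) and answers with ACK(x) plus its
    # own initial sequence number y.
    x, y = incoming_request_x, 9000     # y: Host 2's chosen number
    ack = ("ACK", x, y)
    # Host 1 verifies: is x one of its currently outstanding requests?
    if ack[1] in host1_open_requests:
        return "ESTABLISHED"
    # Host 1 rejects; Host 2 learns it was tricked and abandons
    # the half-open connection, so no damage is done.
    return "REJECTED"
```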

Connection release

The text discusses the process of releasing a connection, which is simpler than
establishing one but still presents potential pitfalls. There are two styles of
terminating a connection: asymmetric release and symmetric release.

1. Asymmetric Release: In asymmetric release, when one party hangs up, the
connection is broken abruptly. This method may result in data loss, as illustrated in
the scenario where Host 1 sends a segment, but Host 2 issues a DISCONNECT
before receiving it, leading to data loss.

2. Symmetric Release: Symmetric release treats the connection as two separate
unidirectional connections, requiring each direction to be released independently.
This method allows a host to continue receiving data even after sending a
DISCONNECT segment.

In scenarios where each process has a fixed amount of data to send and knows
when it has sent it, symmetric release works well. However, in other situations where
determining the completion of data transmission is less obvious, a more
sophisticated release protocol may be needed. This could involve a protocol where
one host signals completion, and the other host confirms before safely releasing the
connection.

UDP (User Datagram Protocol)

1. Introduction to UDP:
- UDP, or User Datagram Protocol, is a connectionless transport protocol within the
Internet protocol suite.
- It allows applications to send encapsulated IP datagrams without the need to
establish a connection.
- UDP segments consist of an 8-byte header followed by the payload.

2. UDP Header:
- The UDP header includes two ports to identify endpoints within the source and
destination machines.
- Ports function like mailboxes rented by applications to receive packets.
- The source port is important for specifying where a reply should be sent back to.

3. UDP Length and Checksum:
- The UDP length field covers the header and data, with a minimum length of 8
bytes.
- The maximum length is 65,515 bytes due to the size limit on IP packets.
- An optional checksum provides extra reliability by checksumming the header,
data, and a pseudoheader.

4. Pseudoheader for IPv4:
- The pseudoheader contains the IPv4 addresses of the source and destination
machines, the UDP protocol number (17), and the byte count for the UDP segment.
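The checksum over the pseudoheader can be sketched as follows: the Internet checksum (one's complement of the one's-complement 16-bit sum) is computed over the IPv4 pseudoheader followed by the UDP segment.

```python
import struct

def internet_checksum(data: bytes) -> int:
    # One's complement of the one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"                       # pad to a 16-bit boundary
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    # Pseudoheader: source IPv4, destination IPv4, a zero byte,
    # the UDP protocol number (17), and the UDP segment length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return internet_checksum(pseudo + udp_segment)
```

A standard property of this checksum: if the computed value is written into the segment's checksum field, recomputing over the whole segment yields zero, which is how the receiver verifies it.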

5. Functionality of UDP:
- UDP does not handle flow control, congestion control, or retransmission of bad
segments; these tasks are left to user processes.
- It provides an interface to the IP protocol and demultiplexes multiple processes
using ports, along with optional end-to-end error detection.

6. Use Cases and Advantages:
- UDP is useful for applications requiring precise control over packet flow, error
control, or timing.
- It's particularly beneficial in client-server scenarios where short requests and
replies are exchanged, as it requires simpler code and fewer messages compared to
protocols like TCP.
- An example application utilizing UDP is DNS (Domain Name System), which
efficiently handles host name lookups with minimal overhead.

In summary, UDP offers a lightweight, connectionless communication mechanism
suitable for applications where simplicity and efficiency are paramount, leaving more
complex tasks to user processes.

RTP (Real-time Transport Protocol)

RTP’s Basic Function:

• RTP combines multiple real-time data streams into a single stream of UDP packets.
• These packets can be sent to one destination (unicasting) or many (multicasting).
• RTP doesn’t offer special delivery guarantees; packets may be lost or delayed.

Features for Multimedia:

• Sequential Numbering: Each RTP packet is numbered sequentially to identify
missing packets.
• Handling Loss: The application decides what to do if a packet is missing, like
skipping a video frame or interpolating audio.
• No Retransmissions: RTP doesn’t acknowledge packets or support retransmission
requests.

Payload and Encoding:

• Multiple Samples: An RTP payload can contain various samples with different
encodings.
• Encoding Flexibility: RTP allows for different encoding methods and specifies them
in a header field.

Timestamping:

• Synchronization: Timestamps help synchronize playback of samples, regardless of
network delays.
• Multiple Streams: Allows for the synchronization of separate streams, like video and
audio, for coherent playback.

RTP Header:

• Version Field: Indicates the version of RTP, which is currently 2.
• Padding (P bit): Shows if the packet is padded to a multiple of 4 bytes.
• Extension (X bit): Indicates the presence of an extension header, the details of which
are not defined but include the length.
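The fields above sit in the first byte of the RTP header (version in the top two bits, then the P and X bits), which a few bit operations can extract:

```python
# Pull version, P, and X out of the first RTP header byte.

def parse_first_rtp_byte(b: int):
    version = (b >> 6) & 0x3          # top two bits; currently 2
    padding = bool((b >> 5) & 0x1)    # P bit
    extension = bool((b >> 4) & 0x1)  # X bit
    return version, padding, extension
```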

RTP is essential for real-time applications, providing a way to handle multimedia data over
networks without strict delivery guarantees but with mechanisms to manage packet loss and
synchronization.

RTCP, or Real-time Transport Control Protocol, works alongside RTP (Real-time Transport
Protocol) and is detailed in RFC 3550. It’s responsible for managing feedback,
synchronization, and user interface aspects of media streaming, but it doesn’t carry the media
samples itself.

Functions of RTCP:

• Feedback: It provides feedback on network conditions like delay, jitter, bandwidth,
and congestion to the sources.
• Adaptive Quality: This feedback helps adjust the encoding process to optimize data
rates and quality based on current network performance.
• Payload Type Field: Indicates the encoding algorithm used for each packet, allowing
for dynamic adjustments.

Bandwidth Management:

• Report Scaling: To avoid excessive bandwidth usage, especially in large multicast
groups, RTCP limits its reporting rate to a small percentage of the media bandwidth.

Synchronization:

• Interstream Sync: RTCP synchronizes different streams that may have varying
clocks and drift rates.

Identification:

• Source Naming: It provides a way to identify the sources of streams, such as who is
speaking at a given time.

RTCP plays a crucial role in maintaining the quality and coordination of real-time media
streaming over networks.

TCP, or Transmission Control Protocol, is a fundamental protocol designed to deliver a
reliable byte stream between computers over an internetwork, which is a network made up of
different kinds of networks with varying parameters. Here’s an explanation based on the
provided text:

• Purpose: TCP ensures reliable transmission over networks that are inherently
unreliable, adapting dynamically to the varying conditions of the internetwork.
• Evolution: Since its initial definition in RFC 793, TCP has undergone numerous
improvements and fixes, leading to additional RFCs that enhance performance,
provide congestion control, and more.
• Functionality: A TCP transport entity, which could be part of the operating system
kernel, a library, or a process, manages the TCP streams. It breaks down data from
applications into manageable packets, often sized to fit within a single Ethernet frame,
and sends them over the network using IP.
• Delivery Management: TCP is responsible for ensuring data packets are sent at an
appropriate rate to utilize network capacity without causing congestion. It also
handles the retransmission of lost packets and reorders any out-of-sequence packets to
reconstruct the original data stream.

In essence, TCP provides the reliability and order that IP does not, ensuring that data sent
across an internetwork reaches its destination correctly and in sequence, which is crucial for
many applications.

The TCP model is based on the concept of creating reliable connections between endpoints,
known as sockets. Here’s an explanation using the text provided:

• Sockets and Ports: Each endpoint in a TCP connection is a socket, identified by the
host’s IP address and a 16-bit number called a port. A port is essentially a TCP
service access point (TSAP).
• Establishing Connections: A TCP connection is explicitly established between two
sockets on different machines.
• Multiple Connections: A single socket can handle multiple connections
simultaneously, identified by the pair of socket numbers at both ends.
• Port Numbers: Ports below 1024 are reserved for standard services and require
privileged access. Ports from 1024 to 49151 can be registered for general use, and
applications often choose their own ports.
• Daemon Management: Instead of having multiple daemons running and waiting for
connections, a single daemon like inetd in UNIX systems listens on multiple ports and
activates the appropriate daemon when needed.
• Full Duplex and Point-to-Point: TCP connections allow for bidirectional traffic and
are point-to-point, meaning there are exactly two endpoints for each connection.
• Byte Stream: TCP connections are treated as continuous streams of bytes without
preserving message boundaries. Data sent in chunks can be received in a different
arrangement than it was sent.

This model ensures that TCP can provide reliable, ordered, and error-checked delivery of a
stream of bytes between applications running on hosts communicating over an IP network.
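The point that a connection is identified by the pair of socket numbers at both ends can be sketched as a toy demultiplexing table keyed by the four-tuple (the addresses and handler names below are invented):

```python
# Toy demultiplexer: a server tells apart connections sharing its own
# (IP, port) by the client half of the socket pair.

connections = {}

def register(client_ip, client_port, server_ip, server_port, handler):
    connections[(client_ip, client_port, server_ip, server_port)] = handler

def demultiplex(client_ip, client_port, server_ip, server_port):
    return connections.get((client_ip, client_port, server_ip, server_port))

# Two clients, same server socket (10.0.0.9:80), kept apart by their
# own (IP, port) half of the pair.
register("10.0.0.1", 40000, "10.0.0.9", 80, "conn-A")
register("10.0.0.2", 40000, "10.0.0.9", 80, "conn-B")
```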

TCP, or Transmission Control Protocol, is a core network protocol that ensures reliable data
transmission between devices over an internetwork. Here’s an overview based on the
provided text:

• Sequence Numbers: Each byte in a TCP connection is assigned a unique 32-bit
sequence number, which is crucial for tracking data packets.
• Segments: Data is exchanged in segments, each with a 20-byte header and a variable
amount of data. The size of segments is limited by the IP payload capacity and the
Maximum Transfer Unit (MTU) of the network.
• MTU and Fragmentation: To prevent fragmentation, TCP uses path MTU discovery
to adjust segment sizes according to the smallest MTU on the path.
• Sliding Window Protocol: TCP uses this protocol with dynamic window sizes to
manage data flow. Senders start timers with each segment sent, and receivers send
back acknowledgements with the expected next sequence number.
• Handling Delays and Loss: If acknowledgements are delayed or lost, TCP may
retransmit segments. It keeps track of which bytes have been received to ensure data
integrity.
• Optimization: TCP has been optimized to handle various network issues efficiently,
using algorithms to maintain performance despite problems like out-of-order
segments and retransmissions.

TCP’s design allows for robust and adaptable communication, capable of handling the
complexities of varying network conditions.
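Because sequence numbers are 32 bits, they wrap around, so "next byte" and before/after comparisons must be done modulo 2^32 (serial-number arithmetic). A minimal sketch:

```python
# 32-bit sequence-number arithmetic: numbers wrap modulo 2**32.

MOD = 2 ** 32

def next_seq(seq, nbytes):
    # Sequence number of the byte following nbytes sent from seq.
    return (seq + nbytes) % MOD

def seq_lt(a, b):
    # "a comes before b" in wrapped sequence space: the distance from
    # a to b is positive and less than half the space.
    return 0 < (b - a) % MOD < MOD // 2
```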

TCP connection establishment is a critical process that involves a three-way handshake
mechanism. Here’s how it works based on the provided text:

1. Listening for Connections:
o The server side waits for an incoming connection by executing the LISTEN
and ACCEPT primitives, which could be open to any source or a specific one.
2. Initiating a Connection:
o The client side initiates the connection using the CONNECT primitive,
specifying the destination IP address, port, and other details like the maximum
segment size it can handle.
3. SYN Segment:
o The CONNECT primitive sends out a TCP segment with the SYN bit set to 1
(on) and the ACK bit set to 0 (off), indicating a request to synchronize
sequence numbers for a new connection.
4. Response from Server:
o Upon receiving the SYN segment, the server checks if there is a process
listening on the specified port. If not, it rejects the connection with a segment
having the RST bit on.
o If a process is listening, it can accept the connection by sending back an
acknowledgment segment.
5. Sequence Number Usage:
o A SYN segment uses up one byte of sequence space to ensure it can be
acknowledged distinctly.

6. Simultaneous Connection Attempts:
o If two hosts try to establish a connection at the same time, the protocol ensures
that only one connection is established, identified by the socket pair (x, y).
7. Protection Against Delays:
o Initial sequence numbers are chosen to cycle slowly to protect against delayed
duplicate packets.
8. SYN Flood Attack:
o A vulnerability in the handshake process is the potential for a SYN flood
attack, where a malicious sender can overwhelm a server with SYN segments
without completing the connection.
9. SYN Cookies Defense:
o To defend against SYN floods, SYN cookies can be used. The server
generates a cryptographically secure sequence number for the SYN segment
and doesn’t need to remember it. If the handshake completes, the server can
regenerate the correct sequence number using the same cryptographic
function.
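A simplified SYN-cookie sketch (real TCP cookies also fold in a time counter and MSS bits, omitted here): the server derives its initial sequence number from a keyed hash of the connection's addresses, so it stores nothing while the handshake is half-open and can regenerate the value when the final ACK arrives.

```python
import hashlib
import hmac

SECRET = b"server-secret-key"   # hypothetical per-server secret

def syn_cookie(src_ip, src_port, dst_ip, dst_port):
    # Cryptographically derived 32-bit initial sequence number.
    msg = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def ack_is_valid(acked_seq, src_ip, src_port, dst_ip, dst_port):
    # On the handshake's final ACK, regenerate the cookie and check
    # that the client acknowledged cookie + 1 (the SYN consumed one
    # sequence number).
    expected = (syn_cookie(src_ip, src_port, dst_ip, dst_port) + 1) % 2**32
    return acked_seq == expected
```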

This process ensures that TCP connections are established reliably and securely, with
mechanisms in place to handle potential attacks and network issues.

TCP connection release is a process that can be understood by considering a TCP connection
as two simplex connections, each being terminated independently. Here’s how the release
process works:

• Initiating Connection Release:
o Either party can initiate the release of a connection by sending a TCP segment
with the FIN (finish) bit set, indicating no more data will be sent.
• Acknowledging the FIN:
o The FIN segment is acknowledged by the receiving end. Once acknowledged,
that direction of the connection is closed to new data, but data may still be sent
in the opposite direction.
• Complete Connection Termination:
o The connection is fully terminated when both directions have been closed
down after exchanging FIN and ACK segments.
• Optimizing Segment Exchange:
o Normally, four segments are needed to close a connection (a FIN and an ACK
for each direction). However, the process can be optimized to three segments
if the first ACK and the second FIN are combined into one segment.
• Simultaneous Closure:
o Both ends of a TCP connection may send FIN segments at the same time,
similar to both parties saying goodbye on a phone call. The connection is shut
down after these are acknowledged.
• Handling Non-Response:
o To address the issue of non-response to a FIN (the two-army problem), timers
are used. If no response is received within a specified time frame, the sender
of the FIN will release the connection. The other side will also time out
eventually if it detects no response.
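The two-simplex-connection view above can be modeled as a toy state check (the segment labels are invented): the connection is fully closed only once each side's FIN has been acknowledged.

```python
# Toy model of symmetric TCP release.

def release(segments):
    # segments: hypothetical trace like ["FIN-A", "ACK-A", "FIN-B", "ACK-B"],
    # where "ACK-A" acknowledges side A's FIN.
    closed = {"A": False, "B": False}
    for seg in segments:
        kind, side = seg.split("-")
        if kind == "ACK":
            closed[side] = True    # that direction's FIN was acknowledged
    return "CLOSED" if all(closed.values()) else "HALF-OPEN"
```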

This mechanism ensures that TCP connections are terminated cleanly and resources are freed
up. Even though the solution is not theoretically perfect, it works well in practice.

Non-persistent connections in HTTP are characterized by the establishment of a separate
TCP connection for each request-response transaction. Here’s an explanation using the text
provided:

1. Initiation: A TCP connection is initiated by the HTTP client to the server (e.g.,
www.someSchool.edu) on the default port number 80.
2. Request: The client sends an HTTP request message into the socket associated with
the TCP connection. This message requests the object identified by the URL or path
name.
3. Response: The server receives the request, retrieves the requested object from
storage, encapsulates it in an HTTP response message, and sends it back through the
TCP connection.
4. Connection Closure: After sending the response, the server instructs TCP to close
the connection. However, the connection remains open until the client fully receives
the response.
5. Termination: Once the client receives the response, the TCP connection is
terminated. The client then processes the HTML file and identifies any additional
objects needed.
6. Repetition for Additional Objects: If there are additional objects (like JPEG
images), steps 1-5 are repeated for each object, resulting in multiple TCP connections.

In the context of a web page with a base HTML file and ten JPEG images, this process
would result in 11 separate TCP connections, one for each object. The use of non-persistent
connections means that each connection is closed after the server sends the object, and it does
not persist for other objects. Modern browsers may open multiple parallel TCP connections
to handle these transactions, which can reduce response times by minimizing round-trip time
(RTT) and slow-start delays.

Persistent connections in HTTP are designed to overcome the limitations of non-persistent
connections. Here’s an explanation using the text provided:

• Connection Maintenance: With persistent connections, the server does not close the
TCP connection after sending a response. Instead, it keeps the connection open for
subsequent requests and responses between the same client and server.
• Efficient Use of Resources: This approach allows an entire web page, including all
its objects like images, to be sent over a single TCP connection. It also enables
multiple web pages from the same server to be sent over one connection, reducing the
need for multiple connections.
• Timeout Interval: The server may close the connection if it is not used for a certain
configurable time, known as the timeout interval.
• Pipelining: Persistent connections can be with or without pipelining. Without
pipelining, the client waits for a response before issuing a new request. With
pipelining, the client issues requests back-to-back, and the server sends responses
back-to-back, reducing the round-trip time (RTT) delay.
• Reduced Delays: Using persistent connections with pipelining, only one RTT is
needed for all referenced objects, as opposed to one RTT per object. This also
minimizes the time the connection is idle, thus conserving server resources.
• Slow-Start Phase: Persistent connections also reduce the slow-start delay. After
sending the first object, the server can continue sending subsequent objects at a rate
closer to the last object’s rate, rather than starting slow again.

In summary, persistent connections enhance the efficiency of HTTP by allowing multiple
requests and responses to be sent over a single connection, reducing the number of RTTs and
the impact of TCP’s slow-start phase.
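The RTT savings can be put in back-of-the-envelope numbers, ignoring transmission time and slow start (a deliberate simplification of the comparison above):

```python
# Rough RTT counts for fetching 1 base page plus n referenced objects.

def non_persistent_rtts(n_objects):
    # Each object (including the base page) costs one RTT for the TCP
    # handshake plus one RTT for the request/response, on its own
    # connection.
    return 2 * (1 + n_objects)

def persistent_pipelined_rtts(n_objects):
    # One RTT for the handshake, one for the base page, and one more
    # for all pipelined object requests together.
    return 2 + (1 if n_objects else 0)
```

For a page with ten images this gives 22 RTTs versus 3, which is why browsers either open parallel connections or keep one persistent connection.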

An HTTP request message is a structured text message sent from a client to a server to
request a resource. It is composed of ASCII text, making it readable by humans. The
structure of the message includes:

• Request Line: This is the first line, containing three fields: the method (e.g., GET,
POST, HEAD), the URL of the requested object (e.g., /somedir/page.html), and the
HTTP version (e.g., HTTP/1.1).
• Header Lines: Following the request line are header lines that provide additional
information to the server:
o Connection: Indicates whether the connection will be closed after the
transaction or kept open for further transactions.
o User-agent: Identifies the client’s browser type to the server.
o Accept: Specifies the types of media the client can process.
o Accept-language: Indicates the preferred language for the response.

The request message may also include an “Entity Body” for methods like POST, where data
is sent to the server, such as form submissions. The message ends with an extra carriage
return and line feed, signaling the end of the message. Each line in the message is also
terminated with a carriage return and line feed.
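The structure above can be made concrete by parsing a sample request (the host name and path below are illustrative, not from any real server):

```python
# A sample HTTP request message: request line, header lines, and a
# blank line (CRLF CRLF) ending the header section.
raw = (
    "GET /somedir/page.html HTTP/1.1\r\n"
    "Host: www.example.org\r\n"
    "Connection: close\r\n"
    "User-agent: Mozilla/5.0\r\n"
    "Accept-language: fr\r\n"
    "\r\n"
)

def parse_request(text):
    head, _, body = text.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, url, version = lines[0].split(" ")   # the request line
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return method, url, version, headers, body
```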

An HTTP response message is what a server sends back to the client after receiving and
processing an HTTP request. It contains the following components:

• Status Line: Includes the HTTP version, a status code, and a phrase that describes the
status of the response. Common status codes are:
o 200 OK: The request was successful.
o 301 Moved Permanently: The requested object has been moved to a new
URL, which is provided in the response.
o 400 Bad Request: The server could not understand the request.
o 404 Not Found: The requested document does not exist on the server.
o 505 HTTP Version Not Supported: The server does not support the HTTP
protocol version used in the request.
• Header Lines: Provide metadata about the response, such as:

o Connection: Indicates if the server will close the TCP connection or keep it
open.
o Date: The time and date the response was created and sent by the server.
o Server: Identifies the server software that generated the message.
o Last-Modified: The time and date when the object was last created or
modified, important for caching.
o Content-Length: The size of the object being sent.
o Content-Type: The media type of the object, such as HTML text.

The response message is sent from the server to the client’s TCP connection and is used to
convey the result of the client’s request, along with any requested content or further
instructions. If the server receives an HTTP/1.0 request, it will close the TCP connection after
sending the response, even if it is capable of HTTP/1.1, to comply with the client’s
expectations.
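A response message has the mirror-image structure, which a few string splits make visible (the header values here are illustrative):

```python
# A sample HTTP response: status line, then header lines.
sample = (
    "HTTP/1.1 200 OK\r\n"
    "Connection: close\r\n"
    "Content-Length: 6821\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
)

head = sample.split("\r\n\r\n")[0].split("\r\n")
version, code, phrase = head[0].split(" ", 2)    # the status line
headers = dict(line.split(": ", 1) for line in head[1:])
```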

Authentication and cookies are mechanisms used in HTTP to manage access to resources and
track user sessions.

Authentication is the process where users must provide credentials, typically a username and
password, to access protected documents on a server. The steps involved in HTTP
authentication are:

1. A client sends a request without special headers to a server requiring authorization.
2. The server responds with a 401 Authorization Required status code and a
WWW-Authenticate header, detailing how to authenticate, usually by requesting a
username and password.
3. The client then prompts the user for these credentials and resends the request with an
Authorization header containing them. Once authenticated, the client includes the
credentials in subsequent requests, which are cached until the browser is closed.

Cookies are used by websites to remember users. They work as follows:

1. A client visits a website that uses cookies, and the server’s response includes a
Set-cookie header with an identification number.
2. The client’s browser stores this number in a cookie file on the user’s machine, along
with the server’s hostname.
3. In future requests to the same server, the client sends a Cookie header with the
identification number, allowing the server to recognize the user.

Cookies enable websites to maintain user sessions and preferences across multiple visits
without requiring re-authentication each time. They are defined in RFC 2109 and can vary in
implementation across different websites. Some sites use them, while others do not.
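The three cookie steps can be sketched as a toy round trip (the hostnames and identification number are invented):

```python
# Toy cookie round trip: Set-cookie on the way in, Cookie on the way out.

cookie_jar = {}   # the browser's cookie file: hostname -> id

def server_response(user_id):
    # Step 1: the server's reply carries a Set-cookie header.
    return {"Set-cookie": str(user_id)}

def browser_store(host, response):
    # Step 2: the browser files the number under the server's hostname.
    cookie_jar[host] = response["Set-cookie"]

def browser_request(host):
    # Step 3: later requests to the same host include a Cookie header,
    # letting the server recognize the user.
    headers = {}
    if host in cookie_jar:
        headers["Cookie"] = cookie_jar[host]
    return headers

browser_store("shop.example", server_response(1678))
```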

SMTP, which stands for Simple Mail Transfer Protocol, is a protocol used for sending emails across the Internet. It operates by transferring messages from the sender’s mail server to the recipient’s mail server. Its key characteristics are:

• Legacy Technology: SMTP dates back to 1982 and was designed to work with
simple seven-bit ASCII text, which can be limiting in today’s multimedia-rich
environment.
• Basic Operation: When a user, like Alice, wants to send an email to Bob, she uses
her email client to compose and send a message. This message is then queued in
Alice’s mail server.
• Client-Server Interaction: The SMTP client on Alice’s mail server initiates a TCP connection to the SMTP server on Bob’s mail server at port 25. After some initial handshaking, in which the sender and recipient email addresses are exchanged, Alice’s message is sent over the TCP connection.
• Delivery: Bob’s mail server receives the message and places it in Bob’s mailbox,
where he can access it at his convenience.
• Direct Connection: SMTP does not use intermediate mail servers; the connection is
direct between the sender’s and recipient’s mail servers, even if they are
geographically distant.
• Reliability: SMTP relies on TCP’s reliable data transfer service to ensure the
message is delivered without errors.

In summary, SMTP is a fundamental protocol for email transmission on the Internet, enabling
direct and reliable communication between mail servers. Despite its age and some outdated
characteristics, it remains a core component of email infrastructure.
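The handshaking and transfer described above follow a fixed command sequence. A sketch of the commands Alice’s SMTP client would send over the port-25 connection (all hostnames and addresses invented):

```python
def smtp_commands(client_host, sender, recipient, message):
    # The client-side half of the SMTP dialogue; the server answers each
    # command with a numeric reply code (e.g. 250 OK, 354 start mail input).
    return [
        f"HELO {client_host}",      # identify the sending mail server
        f"MAIL FROM:<{sender}>",    # envelope sender (Alice)
        f"RCPT TO:<{recipient}>",   # envelope recipient (Bob)
        "DATA",                     # announce the message body
        message,
        ".",                        # a line with a lone dot ends the body
        "QUIT",                     # close the session
    ]
```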

DNS works as a distributed system of name servers rather than a single centralized server. Here is a breakdown:

Problems with a centralized design:


• Single point of failure: If the server crashes, the entire internet would be
down.
• Traffic overload: A single server wouldn't handle the vast amount of queries
from millions of users.
• Distance and Delays: Users far from the server would experience slow
queries due to long distances and potentially congested connections.
• Maintenance challenges: Updating a massive centralized database for every
new host would be difficult and require complex permission controls.
The Distributed DNS Solution:
• The solution is a hierarchical and distributed system of multiple name servers
around the world.
• No single server has all the information.
Types of Name Servers:

• Local Name Servers (DNS Client):
o Every ISP (University, Company, etc.) has a local name server.
o Users' queries are first directed here. (Configured manually on user's
machine)
o Local server can answer queries for hosts within the same ISP
efficiently.
• Root Name Servers:
o There are a dozen or so root name servers, mostly in North America.
o Local servers contact a root server if they can't answer a query
themselves.
o Root servers either have the answer or know the authoritative name
server for that specific host.
• Authoritative Name Servers:
o Every host is registered with an authoritative name server, typically
within their local ISP. (For redundancy, every host should have at least
two).
o Authoritative name servers hold the mapping between a hostname and
its IP address.
o Root servers or local servers query authoritative name servers for
information they lack.
How a DNS lookup works (to a first approximation):
1. User requests a website (host).
2. The query goes to the local name server.
3. If the local server has the answer (for a host within the same ISP), it provides
the IP address.
4. If not, the local server acts as a client and queries a root name server.
5. The root server either provides the IP address or directs the local server to the
authoritative name server for that host.
6. The local server queries the authoritative name server.
7. The authoritative name server provides the IP address to the local server.
8. The local server replies to the user's machine with the IP address.

This distributed system avoids the limitations of a centralized server and efficiently
handles DNS queries across the vast internet.
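The eight lookup steps can be modelled with three dictionaries standing in for the three tiers of name servers; every hostname and address below is invented:

```python
local_ns = {"host1.myisp.example": "10.0.0.5"}      # hosts within the same ISP
root_ns = {"other.example": "ns.other.example"}     # domain -> authoritative server
auth_ns = {"ns.other.example": {"www.other.example": "192.0.2.7"}}

def resolve(hostname):
    # Steps 2-3: the local name server answers directly when it can.
    if hostname in local_ns:
        return local_ns[hostname]
    # Steps 4-5: otherwise it asks a root server, which names the
    # authoritative server for the host's domain.
    domain = hostname.split(".", 1)[1]
    authoritative = root_ns[domain]
    # Steps 6-8: it queries the authoritative server and relays the answer.
    return auth_ns[authoritative][hostname]
```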

Broadcast routing can be broken down as follows:

1. **Definition and Purpose:**

- Broadcast routing involves sending messages from one host to many or all other hosts.

- It's useful for distributing information like weather reports, stock updates, or live
radio programs to interested hosts.

2. **Methods of Broadcasting:**

- One method involves sending a distinct packet to each destination, which is inefficient and requires a complete list of all destinations.

- An improvement is multidestination routing, where packets contain a list of destinations or a bitmap indicating the desired destinations, allowing routers to route packets to multiple destinations efficiently.

- Flooding, another method, sends packets to all links and is efficient for
broadcasting but may cause redundancy.

3. **Reverse Path Forwarding:**

- Reverse path forwarding is an elegant method in which routers forward broadcast packets onto all links except the one they arrived on.

- Routers check if the packet arrived on the link usually used to send packets to the
source and forward it accordingly.

4. **Illustrative Example:**

- An example of reverse path forwarding shows how packets are forwarded based on whether they arrived on the preferred path to the source.

5. **Advantages of Reverse Path Forwarding:**

- Reverse path forwarding is efficient and easy to implement: routers only need to know how to reach all destinations, with no sequence numbers and no listing of all destinations in the packet.

6. **Improved Broadcasting Algorithm:**

- The last broadcast algorithm improves on reverse path forwarding by explicitly using the sink tree, a spanning tree that includes all routers without loops.

- Each router copies an incoming broadcast packet onto all spanning tree lines
except the one it arrived on, minimizing the number of packets needed.

In summary, broadcast routing efficiently distributes messages to multiple hosts, with methods such as multidestination routing, flooding, reverse path forwarding, and spanning trees used to optimize packet transmission.

Hierarchical routing divides routers into regions and organizes them into a hierarchical structure to manage the complexity of large networks efficiently:

1. **Motivation for Hierarchical Routing:**

- As networks grow in size, router routing tables also grow, consuming more
memory and CPU time.

- To address this issue, hierarchical routing is introduced to manage large networks more effectively.

- Hierarchical routing becomes necessary when it's no longer feasible for every
router to have an entry for every other router.

2. **Concept of Regions:**

- Routers are divided into regions, where each router knows how to route packets
to destinations within its own region.

- Routers in one region have no knowledge of the internal structure of other regions, simplifying routing table management.

3. **Levels of Hierarchy:**

- In addition to regions, hierarchical routing can involve multiple levels of hierarchy, such as clusters, zones, or groups.

- Each level aggregates routers into larger entities, reducing the complexity of
routing tables.

4. **Example of Multilevel Hierarchy:**

- An example illustrates routing from Berkeley, California, to Malindi, Kenya, using
hierarchical routing.

- Routers at different levels of the hierarchy forward packets based on their knowledge of local and remote destinations, ultimately reaching the destination.

5. **Quantitative Example:**

- A quantitative example demonstrates routing in a two-level hierarchy with five regions, showing how hierarchical routing reduces the routing table size while increasing path length.

6. **Optimization Considerations:**

- Although hierarchical routing reduces routing table size, it may increase path
length, as traffic may not always take the shortest route.

- Researchers have studied the optimal number of hierarchy levels for large
networks, suggesting that the number of levels should be approximately the natural
logarithm of the number of routers.

In summary, hierarchical routing organizes routers into regions, and potentially multiple levels of hierarchy, to manage large networks efficiently, reducing routing table size while balancing path length.
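The table-size saving can be worked out directly. A common textbook version of the five-region example has 17 routers in total, 3 of them in the first region; treating those numbers as illustrative:

```python
def hierarchical_entries(local_routers, regions):
    # Two-level hierarchy: one entry per router in the local region,
    # plus a single condensed entry for each remote region.
    return local_routers + (regions - 1)

flat_entries = 17                        # one entry per router, no hierarchy
two_level = hierarchical_entries(3, 5)   # 3 local + 4 remote regions = 7
```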

Multicasting refers to the transmission of messages to multiple receivers who are part of a well-defined group:

1. **Need for Multicasting:**

- In scenarios like multiplayer games or live video streaming, sending distinct packets to each receiver is expensive, while broadcasting is wasteful if most receivers are not interested.

- Multicasting addresses this need by efficiently sending messages to large but specific groups within the network.

2. **Group Management:**

- Multicasting requires mechanisms to create, destroy, and identify groups. Each group is typically identified by a multicast address, and routers learn which groups have members on their attached networks.

3. **Multicast Routing:**

- Multicast routing schemes leverage broadcast routing schemes but aim to deliver packets to group members efficiently while optimizing bandwidth usage.

- Efficient multicast spanning trees are constructed based on whether the group is dense (members spread across most of the network) or sparse (most of the network has no members).

4. **Pruning of Spanning Trees:**

- In dense multicast groups, broadcast spanning trees are pruned to remove links
that do not lead to group members, resulting in efficient multicast spanning trees.

- Pruning strategies differ based on the routing protocol used:

- With link-state routing, routers construct pruned spanning trees based on
complete topology knowledge, removing links that don't connect group members.

- With distance-vector routing, routers use reverse path forwarding and PRUNE
messages to recursively prune spanning trees based on actual group membership.

5. **Efficiency and Challenges:**

- Pruned spanning trees efficiently use only necessary links to reach group
members, reducing unnecessary traffic.

- However, the process of pruning can be resource-intensive for routers, especially in large networks.
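The pruning idea can be sketched for a broadcast spanning tree represented as a parent-to-children map: a link is kept only if its subtree contains a group member (router names are arbitrary):

```python
def prune(tree, node, members):
    # Return the set of spanning-tree links worth keeping below `node`.
    kept = set()
    for child in tree.get(node, ()):
        below = prune(tree, child, members)
        if below or child in members:
            kept.add((node, child))  # this link leads to at least one member
            kept |= below
    return kept
```

For a tree where A feeds B and C, B feeds D, C feeds E, and only D is a member, pruning keeps just the links A-B and B-D.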

In summary, multicasting enables efficient transmission of messages to specific groups within a network, using optimized routing schemes to minimize bandwidth consumption for applications like multiplayer games and live video streaming.

TCP Segment Structure:

• Source Port and Destination Port: These fields, each 16 bits long, identify the local
endpoints of the connection. A combination of a TCP port and its host’s IP address
forms a unique 48-bit endpoint.
• Sequence Number: A 32-bit field. If the SYN flag is set, it carries the initial sequence number (the first data byte is numbered one higher); otherwise, it is the sequence number of the first data byte of this segment.
• Acknowledgement Number: Also a 32-bit field, it indicates the next in-order byte
that the sender of the segment is expecting from the other end.
• TCP Header Length: This 4-bit field specifies the length of the TCP header in 32-bit
words, accounting for the variable length of the Options field.
• Unused Field: A 4-bit field that remains unused; that it has stayed unused for decades is a testament to how well TCP was designed.
• Flags: There are eight 1-bit flags in the TCP header:

o CWR (Congestion Window Reduced): Used to indicate that the sender has
reduced its congestion window size.
o ECE (ECN-Echo): Signals that the network is experiencing congestion and
the sender should slow down.
o URG (Urgent): Indicates that the Urgent pointer field is significant and
there’s data that should be prioritized.
o ACK (Acknowledgement): Set to 1 to indicate that the Acknowledgement
number is valid and should be processed.
o PSH (Push Function): Requests that the receiver pass the received data to the
application immediately.
o RST (Reset): Used to reset the connection in case of errors or problems.
o SYN (Synchronize): Used during the initial handshake of the connection
establishment. A SYN with ACK=0 is a connection request, and with ACK=1
is the connection acceptance.
o FIN (Finish): Used to release a connection; it signals that the sender has no more data to transmit.
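The fixed 20-byte part of the header can be decoded with Python’s struct module; this sketch keeps the fields named above and treats the low 8 bits after the header length as the flag byte:

```python
import struct

def parse_tcp_header(segment):
    # Unpack the fixed header in network byte order: two 16-bit ports,
    # two 32-bit numbers, then 16-bit offset/flags, window, checksum,
    # and urgent pointer.
    (src, dst, seq, ack, off_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # 4-bit length in 32-bit words
        "flags": off_flags & 0xFF,            # CWR..FIN as one byte
        "window": window,
        "urgent": urgent,
    }
```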

Connection Identifier (5-tuple):

• This consists of five pieces of information: the protocol (TCP), source IP, source port,
destination IP, and destination port.

Acknowledgements:

• The acknowledgement mechanism in TCP is cumulative: it acknowledges all received and processed data up to a certain point, without going beyond any lost data.
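Cumulative acknowledgement can be sketched over byte sequence numbers: the ACK advances only across contiguously received bytes and stops at the first gap:

```python
def cumulative_ack(received_bytes, first_unacked=0):
    # Advance past every contiguously received byte; anything beyond the
    # first missing byte (lost or out of order) is not acknowledged yet.
    ack = first_unacked
    while ack in received_bytes:
        ack += 1
    return ack
```

If bytes 0-3 and 5-6 have arrived, the ACK number is 4: byte 4 is missing, so bytes 5 and 6 are not yet acknowledged.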

Options Field:

• This field can vary in length and contains various optional parameters that can be
used for features like maximum segment size, timestamps, or other extensions.
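As one concrete example, the maximum segment size option is encoded as an option kind of 2 and a length of 4, followed by a 16-bit MSS value:

```python
import struct

def mss_option(mss):
    # Maximum segment size option: kind=2, total length=4 bytes,
    # then the MSS itself, all in network byte order.
    return struct.pack("!BBH", 2, 4, mss)
```

A common value on Ethernet is 1460, which encodes as the bytes 02 04 05 b4.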

Urgent Pointer:

• When the URG flag is set, this 16-bit field is used to indicate a byte offset from the
current sequence number where urgent data is located.

The TCP header is designed to provide reliable, ordered, and error-checked delivery of a
stream of bytes between applications running on hosts communicating via an IP network.
Major internet applications like the World Wide Web, email, remote administration, and file
transfer rely on TCP. The robustness and flexibility of TCP’s header structure have
contributed significantly to its widespread adoption and longevity as a core protocol of the
Internet protocol suite.
