Embedded Networks
Introduction:
The Internet Protocol (IP) is the backbone of modern communication, serving as the fundamental
framework for data transmission across the vast expanse of the digital landscape. It is a set of
rules and conventions that govern how data packets are sent, received, and routed through
networks. Understanding the inner workings of the Internet Protocol is crucial for
comprehending the functioning of the internet and its role in our interconnected world.
IP Addressing:
At the heart of the Internet Protocol lies the concept of IP addressing. Every device connected to
the internet is assigned a unique numerical label known as an IP address. These addresses are
essential for routing data packets from the source to the destination. There are two versions of IP
addresses currently in use: IPv4 and IPv6. While IPv4 addresses are 32-bit and becoming scarce,
IPv6 addresses are 128-bit, providing a virtually limitless pool of unique identifiers.
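As a concrete sketch, Python's standard ipaddress module makes the 32-bit vs 128-bit contrast easy to inspect (the addresses below come from the reserved documentation ranges, not real hosts):

```python
# Inspecting the two IP address families with the stdlib ipaddress module.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")    # 32-bit IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")  # 128-bit IPv6 address

print(v4.version, v4.max_prefixlen)  # 4 32
print(v6.version, v6.max_prefixlen)  # 6 128

# The size gap between the two address pools:
print(2 ** 32)   # 4294967296 possible IPv4 addresses
print(2 ** 128)  # about 3.4e38 possible IPv6 addresses
```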
Data transmitted over the internet is broken down into smaller units called packets. Each packet
contains a portion of the data, along with header information that includes the source and
destination IP addresses. The Internet Protocol is responsible for guiding these packets through
the network to reach their intended destination. Routers play a pivotal role in this process,
making decisions based on the destination IP address contained in the packet header.
The Internet Protocol works in tandem with higher-layer protocols, such as TCP and UDP, to
ensure reliable and efficient data transmission. TCP is connection-oriented and provides features
like error checking and data retransmission, making it suitable for applications where data
integrity is crucial, such as web browsing and file transfers. On the other hand, UDP is
connectionless and is often used for real-time applications like video streaming and online
gaming, where a slight delay is more acceptable than potential data loss.
The scarcity of IPv4 addresses led to the development of Network Address Translation (NAT),
allowing multiple devices within a private network to share a single public IP address. This
practice helps mitigate the exhaustion of available IP addresses and enhances the security of
internal networks. Private IP addresses are reserved for use within private networks and are not
routable on the public internet.
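A quick sketch with Python's standard ipaddress module shows which addresses fall in the reserved private ranges that NAT devices typically hide behind a single public address:

```python
# Checking whether addresses fall in the RFC 1918 private ranges.
import ipaddress

for addr in ["10.0.0.5", "172.16.4.1", "192.168.1.20", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
# The first three are private (not routable on the public internet);
# 8.8.8.8 is a public address.
```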
Conclusion:
In conclusion, the Internet Protocol serves as the underlying framework that enables
communication in the digital age. From IP addressing to packetization, routing, and security
measures, the intricacies of the Internet Protocol are woven into the fabric of our connected
world. As we continue to rely on the internet for various aspects of our lives, a deeper
understanding of the workings of the Internet Protocol becomes increasingly important. It is
through this understanding that we can appreciate the resilience and complexity of the digital
realm that defines our modern era.
The Internet Protocol (IP) is a protocol, or set of rules, for routing and addressing packets of data
so that they can travel across networks and arrive at the correct destination. Data traversing the
Internet is divided into smaller pieces, called packets. IP information is attached to each packet,
and this information helps routers to send packets to the right place. Every device or domain that
connects to the Internet is assigned an IP address, and as packets are directed to the IP address
attached to them, data arrives where it is needed.
Once the packets arrive at their destination, they are handled differently depending on which
transport protocol is used in combination with IP. The most common transport protocols are TCP
and UDP.
To understand why protocols are necessary, consider the process of mailing a letter. On the
envelope, addresses are written in the following order: name, street address, city, state, and zip
code. If an envelope is dropped into a mailbox with the zip code written first, followed by the
street address, followed by the state, and so on, the post office won't deliver it. There is an
agreed-upon protocol for writing addresses in order for the postal system to work. In the same
way, all IP data packets must present certain information in a certain order, and all IP addresses
follow a standardized format.
What is an IP packet?
IP packets are created by adding an IP header to each packet of data before it is sent on its way.
An IP header is just a series of bits (ones and zeros), and it records several pieces of information
about the packet, including the sending and receiving IP address. IP headers also report:
Header length
Packet length
Time to live (TTL), or the number of network hops a packet can make before it is
discarded
In total there are 14 fields for information in IPv4 headers, although one of them is optional.
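A sketch of what lives in those header fields, using Python's standard struct and socket modules to build and parse the fixed 20-byte portion of an IPv4 header (the addresses are illustrative values from the 192.0.2.0/24 documentation range):

```python
# Unpacking the fixed 20-byte portion of an IPv4 header.
import struct
import socket

def parse_ipv4_header(raw: bytes) -> dict:
    # !BBHHHBBH4s4s = version/IHL, TOS, total length, ID, flags/fragment,
    # TTL, protocol, checksum, source IP, destination IP
    fields = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": fields[0] >> 4,
        "header_len_bytes": (fields[0] & 0x0F) * 4,
        "total_len": fields[2],
        "ttl": fields[5],
        "src": socket.inet_ntoa(fields[8]),
        "dst": socket.inet_ntoa(fields[9]),
    }

# A hand-built sample header: version 4, header length 5 words, TTL 64.
sample = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5, 0, 40, 1, 0, 64, 6, 0,
    socket.inet_aton("192.0.2.1"), socket.inet_aton("192.0.2.2"),
)
print(parse_ipv4_header(sample))
```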
How does IP routing work?
The Internet is made up of interconnected large networks that are each responsible for certain
blocks of IP addresses; these large networks are known as autonomous systems (AS). A variety
of routing protocols, including BGP, help route packets across ASes based on their destination IP
addresses. Routers have routing tables that indicate which ASes the packets should travel
through in order to reach the desired destination as quickly as possible. Packets travel from AS to
AS until they reach one that claims responsibility for the targeted IP address. That AS then
internally routes the packets to the destination.
Packets can take different routes to the same place if necessary, just as a group of people driving
to an agreed-upon destination can take different roads to get there.
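A routing-table lookup can be sketched in Python with the standard ipaddress module. This toy longest-prefix match (with invented networks and AS names) is an illustration of the idea, not real BGP:

```python
# Toy longest-prefix match: the most specific matching route wins.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "AS-A",
    ipaddress.ip_network("10.1.0.0/16"): "AS-B",
    ipaddress.ip_network("0.0.0.0/0"): "default",
}

def next_hop(dst: str) -> str:
    ip = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific wins
    return routing_table[best]

print(next_hop("10.1.2.3"))  # AS-B (the /16 beats the /8)
print(next_hop("10.2.0.1"))  # AS-A
print(next_hop("8.8.8.8"))   # default
```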
What is TCP/IP?
The Transmission Control Protocol (TCP) is a transport protocol, meaning it dictates the way
data is sent and received. A TCP header is included in the data portion of each packet that
uses TCP/IP. Before transmitting data, TCP opens a connection with the recipient. TCP ensures
that all packets arrive in order once transmission begins. Via TCP, the recipient will
acknowledge receiving each packet that arrives. Missing packets will be sent again if receipt is
not acknowledged.
TCP is designed for reliability, not speed. Because TCP has to make sure all packets arrive in
order, loading data via TCP/IP can take longer if some packets are missing.
TCP and IP were originally designed to be used together, and these are often referred to as the
TCP/IP suite. However, other transport protocols can be used with IP.
What is UDP/IP?
The User Datagram Protocol, or UDP, is another widely used transport protocol. It is faster than
TCP, but it is also less reliable. UDP does not make sure all packets are delivered and in order,
and it does not establish a connection before beginning or receiving transmissions.
Internet Protocol (IP) is the method or protocol by which data is sent from one computer to
another on the internet. Each computer -- known as a host -- on the internet has at least one IP
address that uniquely identifies it from all other computers on the internet.
IP is the defining set of protocols that enable the modern internet. It was initially defined in May
1974 in a paper titled, "A Protocol for Packet Network Intercommunication," published by the
Institute of Electrical and Electronics Engineers and authored by Vinton Cerf and Robert Kahn.
At the core of what is commonly referred to as IP are additional transport protocols that enable
the actual communication between different hosts. One of the core protocols that runs on top of
IP is the Transmission Control Protocol (TCP), which is often why IP is also referred to
as TCP/IP. However, TCP isn't the only protocol that is part of IP.
When data is received or sent -- such as an email or a webpage -- the message is divided into
chunks called packets. Each packet contains both the sender's internet address and the receiver's
address. Any packet is sent first to a gateway computer that understands a small part of the
internet. The gateway computer reads the destination address and forwards the packet to an
adjacent gateway that in turn reads the destination address and so forth until one gateway
recognizes the packet as belonging to a computer within its immediate neighborhood --
or domain. That gateway then forwards the packet directly to the computer whose address is
specified.
Because a message is divided into a number of packets, each packet can, if necessary, be sent by
a different route across the internet. Packets can arrive in a different order than the order they
were sent. The Internet Protocol just delivers them. It's up to another protocol -- the
Transmission Control Protocol -- to put them back in the right order.
IP packets
While IP defines the protocol by which data moves around the internet, the unit that does the
actual moving is the IP packet.
An IP packet is like a physical parcel or a letter with an envelope indicating address information
and the data contained within.
An IP packet's envelope is called the header. The packet header provides the information needed
to route the packet to its destination. An IPv4 packet header is between 20 and 60 bytes long and includes the
source IP address, the destination IP address and information about the size of the whole packet.
The other key part of an IP packet is the data component, which can vary in size. Data inside an
IP packet is the content that is being transmitted.
What is an IP address?
IP provides mechanisms that enable different systems to connect to each other to transfer data.
Identifying each machine in an IP network is enabled with an IP address.
Similar to the way a street address identifies the location of a home or business, an IP address
provides an address that identifies a specific system so data can be sent to it or received from it.
An IP address is typically assigned via the DHCP (Dynamic Host Configuration Protocol).
DHCP can be run at an internet service provider, which will assign a public IP address to a
particular device. A public IP address is one that is accessible via the public internet.
A local IP address can be generated via DHCP running on a local network router, providing an
address that can only be accessed by users on the same local area network.
The most widely used version of IP for most of the internet's existence has been Internet Protocol
Version 4 (IPv4).
IPv4 provides a 32-bit IP addressing system that has four sections. For example, a sample IPv4
address might look like 192.168.0.1, which coincidentally is also commonly the default IPv4
address for a consumer router. IPv4 supports a total of 4,294,967,296 addresses.
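The four dotted sections are simply a 32-bit integer written one byte at a time, which a few lines of Python make concrete:

```python
# Converting between dotted-quad notation and the underlying 32-bit integer.
def ipv4_to_int(addr: str) -> int:
    a, b, c, d = (int(part) for part in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_ipv4(n: int) -> str:
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(ipv4_to_int("192.168.0.1"))      # 3232235521
print(int_to_ipv4(3232235521))         # 192.168.0.1
print(ipv4_to_int("255.255.255.255"))  # 4294967295, the largest address
```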
A key benefit of IPv4 is its ease of deployment and its ubiquity, so it is the default protocol. A
drawback of IPv4 is the limited address space and a problem commonly referred to as IPv4
address exhaustion. There aren't enough IPv4 addresses available for all IP use cases. Since
2011, IANA (Internet Assigned Numbers Authority) hasn't had any new IPv4 address blocks to
allocate. As such, Regional Internet Registries (RIRs) have had limited ability to provide new
public IPv4 addresses.
In contrast, IPv6 defines a 128-bit address space, which provides substantially more space than
IPv4, with about 340 undecillion (3.4 x 10^38) possible addresses. An IPv6 address has eight sections. The text form of the
IPv6 address is xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where each x is a hexadecimal
digit, representing 4 bits.
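Python's standard ipaddress module shows the eight-section text form and the common compressed notation side by side (the address is from the documentation range):

```python
# Full (exploded) and compressed text forms of the same IPv6 address.
import ipaddress

addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)     # 2001:db8::1 (a run of zero sections collapses to ::)
print(addr.exploded)       # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.max_prefixlen)  # 128 bits in total, 16 per section
```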
The massive availability of address space is the primary benefit of IPv6 and its most obvious
impact. The challenge of IPv6, however, is that its large address space makes it complex and
often difficult for network administrators to monitor and manage.
IP network protocols
IP is a connectionless protocol, which means that there is no continuing connection between the
end points that are communicating. Each packet that travels through the internet is treated as an
independent unit of data without any relation to any other unit of data. The reason the packets are
reassembled in the right order is because of TCP, the connection-oriented protocol that keeps
track of the packet sequence in a message.
In the OSI model (Open Systems Interconnection), IP is in Layer 3, the network layer.
There are several commonly used network protocols that run on top of IP, including:
1. TCP. Transmission Control Protocol enables the flow of data across IP address connections.
2. UDP. User Datagram Protocol provides low-latency, connectionless communication and is
widely used on the internet for DNS lookups and voice over Internet Protocol (VoIP).
3. FTP. File Transfer Protocol is a specification that is purpose-built for accessing, managing,
loading, copying and deleting files across connected IP hosts.
4. HTTP. Hypertext Transfer Protocol is the specification that enables the modern web. HTTP
enables websites and web browsers to view content. It typically runs over port 80.
5. HTTPS. Hypertext Transfer Protocol Secure is HTTP that runs with encryption via Secure
Sockets Layer or Transport Layer Security. HTTPS typically is served over port 443.
Ethernet network controllers typically support 10 Mbit/s, 100 Mbit/s, and 1000 Mbit/s Ethernet
varieties. They usually have an 8P8C socket for connecting the network cable. Older NICs may
also have BNC or AUI connections.
Ethernet controllers designed for the PC ISA bus usually present the full 20 address pins to
the outside world. However, only four to five address lines are used by the chip, with the rest
hardwired to an internal address-enable decoder.
File Transfer Protocol (FTP)
File Transfer Protocol (FTP) is an Internet tool provided by TCP/IP. FTP was first developed
by Abhay Bhushan in 1971. It helps to transfer files from one computer to another
by providing access to directories or folders on remote computers, and allows software, data,
and text files to be transferred between different kinds of computers. The end-user in the connection
is known as the local host, and the server which provides data is known as the remote host.
The goals of FTP are:
It encourages the direct use of remote computers.
It shields users from system variations (operating system, directory structures, file
structures, etc.)
It promotes sharing of files and other types of data.
Why FTP?
FTP is a standard communication protocol. There are various other protocols, such as HTTP, which
can be used to transfer files between computers, but they lack the clarity and focus of FTP.
Moreover, the systems involved in the connection are often heterogeneous, i.e. they differ
in operating systems, directory structures, character sets, etc. FTP shields the user from
these differences and transfers data efficiently and reliably. FTP can transfer ASCII, EBCDIC,
or image files. ASCII is the default file-sharing format, in which each character is encoded in
NVT ASCII. In ASCII or EBCDIC mode, the destination must be ready to accept files in that mode.
The image format is the default format for transferring binary files.
FTP Clients
FTP works on a client-server model. The FTP client is a program that runs on the user’s
computer to enable the user to talk to and get files from remote computers. It is a set of
commands that establishes the connection between two hosts, helps to transfer the files, and
then closes the connection. Some of the commands are: get filename (retrieve a file from the
server), mget filename (retrieve multiple files from the server), and ls (list the files available
in the current directory of the server). There are also built-in FTP programs, which make it easier
to transfer files without requiring the user to remember the commands.
Some sites enable anonymous FTP, whose files are available for public access, so users
can access those files without any username or password. Instead, the username is set to
anonymous and the password to guest by default. Such access is very limited; for example,
the user may copy files but is not allowed to navigate through directories.
An FTP connection is established between two systems that communicate with each other over a
network. To connect, the user can either provide credentials to the FTP server or use
anonymous FTP.
When an FTP connection is established, two communication channels are also established,
known as the command channel and the data channel. The command channel is used to transfer
commands and responses between client and server. FTP uses the same approach as TELNET or
SMTP to communicate across the control connection: it uses the NVT ASCII character set and
port number 21. The data channel is used to actually transfer the data between client and
server; it uses port number 20.
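The well-known control port can be seen in Python's standard ftplib, which implements the client side of the protocol. The host name in the sketch below is a placeholder, and no connection is made when the module is loaded:

```python
# ftplib encodes the well-known FTP control port described above.
import ftplib

print(ftplib.FTP_PORT)  # 21

def fetch_listing(host: str, user: str = "anonymous", passwd: str = "guest"):
    """Open the control connection, log in, and list the current directory;
    ftplib opens the data connection behind the scenes."""
    with ftplib.FTP(host) as ftp:  # control connection on port 21
        ftp.login(user, passwd)    # credentials or anonymous login
        return ftp.nlst()          # NLST runs over a data connection

# Usage (against a real, reachable server):
#   fetch_listing("ftp.example.com")
```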
The FTP client gives the FTP command along with the FTP server's address via a URL. As
soon as the server and the client are connected over the network, the user logs in using a user ID
and password. A user who is not registered with the server can still access files by using
anonymous login, where the password is the client's email address. The server verifies the login
and allows the client to access the files. The client transfers the desired files and closes the
connection. The figure below shows the working of FTP.
The FTP client contacts the FTP server at port 21, specifying TCP as the transport protocol.
The client obtains authorization over the control connection.
The client browses the remote directory by sending commands over the control connection.
When the server receives a command for a file transfer, the server opens a TCP data connection
to the client.
After transferring one file, the server closes the data connection.
The server opens a second TCP data connection to transfer another file.
The FTP server maintains state, i.e. the current directory and earlier authentication.
Advantages
Multiple transfers: FTP helps to transfer multiple large files in between the systems.
Efficiency: FTP helps to organize files in an efficient manner and transfer them efficiently
over the network.
Security: FTP provides access to any user only through user ID and password. Moreover,
the server can create multiple levels of access.
Continuous transfer: If the transfer of the file is interrupted by any means, then the user
can resume the file transfer whenever the connection is established.
Simple: FTP is very simple to implement and use, thus it is a widely used connection.
Speed: It is one of the fastest ways to transfer files from one computer to another.
Disadvantages
Less security: FTP does not provide an encryption facility when transferring files.
Moreover, usernames and passwords are sent in plain text, which makes them easy for
attackers to intercept.
Old technology: FTP is one of the oldest protocols and thus it uses multiple TCP/IP
connections to transfer files. These connections are hindered by firewalls.
Virus: FTP transfers are difficult to scan for viruses, which again increases the
risk of vulnerability.
Limited: The FTP provides very limited user permission and mobile device access.
Memory and programming: FTP requires more memory and programming effort, as it is
very difficult to find errors without knowing the commands.
Introduction:
Communication over computer networks involves the exchange of messages between devices,
and two widely used protocols for this purpose are UDP (User Datagram Protocol) and TCP
(Transmission Control Protocol). While both serve the same fundamental purpose, they differ
significantly in their approach to data transmission, reliability, and connection management. In
this essay, we will explore the characteristics of UDP and TCP, and the scenarios where each
protocol is best suited for exchanging messages.
Exchanging messages using TCP involves a three-step process: connection establishment, data
transfer, and connection termination. During the connection establishment phase, a handshake
occurs between the communicating devices to establish a reliable connection. Once the
connection is established, data can be sent, and acknowledgments are exchanged to ensure data
integrity. Finally, the connection is terminated when the data exchange is complete.
TCP is suitable for applications that require high reliability and accurate data delivery, such as
web browsing, file transfers, and email communication. However, the overhead associated with
connection management and error checking may introduce latency, making it less suitable for
real-time applications.
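The three-step exchange described above can be sketched with Python's standard socket module, using a tiny echo server on the loopback interface (the port is chosen by the OS and the message is illustrative):

```python
# Minimal TCP exchange: establish, transfer, terminate.
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    conn, _ = sock.accept()    # completes the connection handshake
    with conn:
        data = conn.recv(1024) # data arrives reliably and in order
        conn.sendall(data)     # acknowledge by echoing it back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))  # 1. establish
client.sendall(b"hello")                                # 2. transfer
reply = client.recv(1024)
client.close()                                          # 3. terminate
server.close()
print(reply)  # b'hello'
```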
UDP, in contrast, is a connectionless protocol that prioritizes simplicity and minimal overhead. It
does not establish a connection before transmitting data and does not guarantee reliable or
ordered delivery. UDP packets are sent independently of each other, and the protocol does not
track whether they reach their destination.
Exchanging messages using UDP is a more straightforward process. Data is encapsulated into
UDP packets and sent to the destination. Since there is no connection establishment or
acknowledgment process, UDP is faster than TCP but lacks the reliability features of TCP.
Applications that can tolerate some degree of data loss, such as live streaming, online gaming,
and real-time communication, often leverage UDP for its lower latency.
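The same exchange sketched with UDP via the socket module: no handshake and no acknowledgments, just an independent datagram on the loopback interface (the payload is illustrative):

```python
# Minimal UDP exchange: fire-and-forget datagrams, no connection setup.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame-1", ("127.0.0.1", port))  # no connection, no ACK

data, addr = receiver.recvfrom(1024)
print(data)  # b'frame-1'
sender.close()
receiver.close()
```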
Comparative Analysis:
The choice between UDP and TCP depends on the specific requirements of the application. TCP
is favored when data integrity and reliability are critical, and the application can tolerate some
latency introduced by connection management. UDP is preferred when low latency is essential,
and the application can handle occasional data loss without severe consequences.
In scenarios where real-time communication is paramount, such as VoIP (Voice over Internet
Protocol) or online gaming, UDP is often the protocol of choice due to its lower overhead and
faster transmission. However, for applications like file transfers or web browsing, where accurate
data delivery is crucial, TCP remains the more suitable option.
Conclusion:
In conclusion, the choice between UDP and TCP for exchanging messages depends on the
specific needs of the application. TCP provides reliability and ordered data delivery through
connection-oriented communication, while UDP prioritizes speed and simplicity with a
connectionless approach. The trade-offs between these protocols underscore the importance of
selecting the right one based on the requirements of the communication task at hand.
What are dynamic web pages?
Dynamic web pages are websites that change their content or layout with each request to the
webserver. They are an essential part of web-based commerce, as every kind of interaction and
personalization requires dynamic content.
Dynamic content servers generate content on the fly, in response to requests from clients. They
typically use server-side scripting languages such as PHP, Python, Ruby, or JavaScript to
generate dynamic content.
To serve dynamic content with a traditional server-based web app, a server script or application
fetches the results from a database and renders the page.
Dynamic content can be generated and delivered from a cache by running scripts in a CDN cache
instead of in a distant origin server. This reduces the response time to client requests and speeds
up dynamic webpages.
Examples of dynamic websites include Facebook and Twitter, which generate unique,
personalized content for their users.
The modern web is characterized by dynamic and interactive content, enabling users to
experience personalized and up-to-date information. Serving web pages with dynamic data
involves the use of server-side technologies to generate content on-the-fly based on user
requests. In this essay, we'll explore the key concepts and technologies involved in serving
dynamic web pages.
1. Server-Side Programming:
Server-side programming languages, such as PHP, Python, Ruby, Java, and Node.js, enable
developers to create dynamic web pages by processing data on the server before sending it to the
client's browser. These languages allow you to interact with databases, handle user input, and
perform complex calculations, producing dynamic content tailored to user requests.
2. Database Integration:
Dynamic web pages often rely on databases to store and retrieve information. Popular database
management systems include MySQL, PostgreSQL, MongoDB, and SQLite. Server-side scripts
use these databases to query, insert, update, and delete data, providing the necessary information
to generate dynamic content.
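As a minimal sketch of the query step, here is Python's built-in sqlite3 with an invented articles table; a real application would typically use one of the systems named above:

```python
# A server-side script querying a database to feed a dynamic page.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE articles (title TEXT, views INTEGER)")
conn.executemany(
    "INSERT INTO articles VALUES (?, ?)",
    [("Intro to IP", 120), ("FTP basics", 45)],
)

# Per request, a script would run a query like this and hand the rows
# to the page template:
rows = conn.execute(
    "SELECT title FROM articles WHERE views > ? ORDER BY views DESC", (50,)
).fetchall()
print(rows)  # [('Intro to IP',)]
conn.close()
```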
3. Server-Side Frameworks:
Frameworks like Django (Python), Ruby on Rails (Ruby), Laravel (PHP), and Express.js
(Node.js) provide structured environments for building web applications. These frameworks
streamline common tasks, such as routing, templating, and database interactions, allowing
developers to focus on the business logic of their applications.
4. AJAX (Asynchronous JavaScript and XML):
To enhance user experience, web developers often employ AJAX for asynchronous
communication between the client and server. This enables the browser to request and receive
data from the server in the background, updating specific parts of the page without requiring a
full page reload. JavaScript frameworks like jQuery, React, and Angular facilitate AJAX
implementation.
5. RESTful APIs:
REST (Representational State Transfer) is an architectural style that uses HTTP requests for
communication. RESTful APIs (Application Programming Interfaces) allow web applications to
interact with each other, enabling the exchange of data in a standardized manner. Developers can
leverage RESTful APIs to integrate third-party services or enable communication between
different components of their applications.
6. Templating Engines:
Server-side templating engines, such as Jinja2 (Python), Twig (PHP), and EJS (JavaScript), help
streamline the process of embedding dynamic data into HTML templates. These engines enable
developers to create reusable templates and inject data dynamically, enhancing code organization
and maintainability.
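The engines named above are third-party packages; the underlying idea can be sketched with Python's built-in string.Template, filling placeholders in a reusable HTML fragment with dynamic values (the name and count are invented):

```python
# Templating in miniature: a reusable template plus dynamic data.
from string import Template

page = Template("<h1>Hello, $name</h1><p>You have $count new messages.</p>")

# Values that would normally come from a database or user session:
html = page.substitute(name="Ada", count=3)
print(html)  # <h1>Hello, Ada</h1><p>You have 3 new messages.</p>
```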
7. Caching Mechanisms:
To optimize performance and reduce server load, caching mechanisms can be implemented.
Caching stores previously generated content and serves it directly to users without re-executing
the entire server-side process. Popular caching strategies include browser caching, server-side
caching, and content delivery network (CDN) integration.
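A minimal sketch of server-side caching, using Python's functools.lru_cache to stand in for a real cache layer (the render function is hypothetical):

```python
# The expensive render runs once per distinct request; repeats hit the cache.
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=128)
def render_page(page_id: int) -> str:
    calls["count"] += 1  # stands in for the expensive server-side work
    return f"<html>page {page_id}</html>"

render_page(1)
render_page(1)  # served from the cache, no re-render
render_page(2)
print(calls["count"])  # 2: page 1 was only rendered once
```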
Conclusion:
Serving web pages with dynamic data involves a combination of server-side programming,
database integration, client-side scripting, and other technologies. As web development
continues to evolve, staying informed about the latest tools and best practices is crucial for
creating dynamic, responsive, and engaging web applications. The synergy of these technologies
empowers developers to deliver personalized and real-time content to users, enhancing the
overall web browsing experience.
In a computer network, there are various ways in which different components are
connected to one another. Network topology defines the structure of the network and how
these components are connected to each other.
Types of Network Topology
The arrangement of a network that comprises nodes and connecting lines via sender and
receiver is referred to as Network Topology. The various network topologies are:
Point to Point Topology
Mesh Topology
Star Topology
Bus Topology
Ring Topology
Tree Topology
Hybrid Topology
Point-to-Point Topology is a type of topology that works on the functionality of the sender and
receiver. It is the simplest form of communication between two nodes, in which one is the sender
and the other is the receiver. Point-to-point links provide high bandwidth.
Point to Point Topology
Mesh Topology
In a mesh topology, every device is connected to another device via a particular channel.
In Mesh Topology, the protocols used are AHCP (Ad Hoc Configuration Protocols), DHCP
(Dynamic Host Configuration Protocol), etc.
Mesh Topology
Figure 1: Every device is connected to another via dedicated channels. These channels are
known as links.
Suppose N devices are connected with each other in a mesh topology; then the
number of ports required by each device is N-1. In Figure 1, there are 5
devices connected to each other, hence the number of ports required by each device is
4. The total number of ports required = N * (N-1).
Suppose N devices are connected with each other in a mesh topology; then the
total number of dedicated links required to connect them is NC2, i.e. N(N-1)/2. In Figure 1,
there are 5 devices connected to each other, hence the total number of links required is
5*4/2 = 10.
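The two formulas above can be checked with a few lines of Python:

```python
# Mesh topology counts: total ports N*(N-1) and dedicated links N*(N-1)/2.
def mesh_ports(n: int) -> int:
    return n * (n - 1)       # each of the n devices needs n-1 ports

def mesh_links(n: int) -> int:
    return n * (n - 1) // 2  # each link is shared by two devices

print(mesh_ports(5))  # 20 ports in total (4 per device)
print(mesh_links(5))  # 10 dedicated links, matching Figure 1
```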
Advantages of Mesh Topology
Communication is very fast between the nodes.
Mesh Topology is robust.
The fault is diagnosed easily. Data is reliable because data is transferred among the devices
through dedicated channels or links.
Provides security and privacy.
Drawbacks of Mesh Topology
Installation and configuration are difficult.
The cost of cables is high as bulk wiring is required, hence suitable for less number of
devices.
The cost of maintenance is high.
A common example of mesh topology is the internet backbone, where various internet service
providers are connected to each other via dedicated channels. This topology is also used in
military communication systems and aircraft navigation systems.
Star Topology
In Star Topology, all the devices are connected to a single hub through a cable. This hub is the
central node, and all other nodes are connected to it. The hub can be passive in nature, i.e. a
non-intelligent broadcasting device, or it can be intelligent, in which case it is known as an
active hub. Active hubs have repeaters in them. Coaxial cables or RJ-45 cables are used to
connect the computers. In Star Topology, many popular Ethernet LAN protocols are used, such as
CSMA/CD (Carrier Sense Multiple Access with Collision Detection).
Star Topology
Figure 2: A star topology having four systems connected to a single point of connection i.e.
hub.
Advantages of Star Topology
If N devices are connected to each other in a star topology, then the number of cables
required to connect them is N. So, it is easy to set up.
Each device requires only 1 port i.e. to connect to the hub, therefore the total number of
ports required is N.
It is robust. If one link fails, only that link is affected and not the rest of the network.
Fault identification and fault isolation are easy.
Star topology is cost-effective as it uses inexpensive cable.
Drawbacks of Star Topology
If the concentrator (hub) on which the whole topology relies fails, the whole system will
crash down.
The cost of installation is high.
Performance is based on the single concentrator i.e. hub.
A common example of star topology is a local area network (LAN) in an office where all
computers are connected to a central hub. This topology is also used in wireless networks
where all devices are connected to a wireless access point.
Bus Topology
Bus Topology is a network type in which every computer and network device is connected to a
single cable. It is bi-directional. It is a multi-point connection and a non-robust topology,
because if the backbone fails the whole topology crashes. In Bus Topology, various MAC (Media
Access Control) protocols are used by LAN Ethernet connections, such as TDMA, Pure Aloha,
CDMA, and Slotted Aloha.
Bus Topology
Figure 3: A bus topology with shared backbone cable. The nodes are connected to the channel
via drop lines.
Advantages of Bus Topology
If N devices are connected in a bus topology, only one cable, known as the backbone
cable, is required to connect them, along with N drop lines.
Coaxial or twisted-pair cables are mainly used in bus-based networks; these support speeds
of up to 10 Mbps.
The cost of the cable is lower than in other topologies, but it is mainly used to build
small networks.
Bus topology is a familiar technology, so installation and troubleshooting techniques are
well known.
CSMA is the most common access method for this type of topology.
Drawbacks of Bus Topology
A bus topology is quite simple, but it still requires a lot of cabling.
If the common cable fails, the whole system goes down.
Heavy network traffic increases collisions. To avoid this, various MAC-layer protocols
such as Pure ALOHA, Slotted ALOHA, and CSMA/CD are used.
Adding new devices to the network slows it down.
Security is very low, since every device sees all traffic on the shared cable.
A common example of bus topology is the Ethernet LAN, where all devices are connected to a
single coaxial cable or twisted pair cable. This topology is also used in cable television
networks.
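The collision behavior that Slotted ALOHA and CSMA try to manage on a shared bus can be illustrated with a small Monte Carlo sketch (the function name and parameters are illustrative): in slotted ALOHA, a slot carries a frame successfully only when exactly one node transmits in it.

```python
import random

def slotted_aloha_throughput(n_nodes, p, slots=100_000, seed=1):
    """Fraction of slots in which exactly one node transmits (a success)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(slots):
        transmitters = sum(rng.random() < p for _ in range(n_nodes))
        if transmitters == 1:   # two or more would collide; zero is idle
            successes += 1
    return successes / slots

# Theory predicts n*p*(1-p)**(n-1) = 10*0.1*0.9**9 ≈ 0.387 for these values.
print(round(slotted_aloha_throughput(10, 0.1), 3))
```

Raising the per-node transmit probability past this point only increases collisions, which is why heavier traffic degrades a shared bus.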
Ring Topology
A Ring Topology forms a ring in which each device is connected to exactly two neighboring
devices. Repeaters are used in ring topologies with a large number of nodes: if a node wants
to send data to the last node in a 100-node ring, the data must pass through 99 intermediate
nodes, so repeaters are placed in the network to prevent data loss.
The data flows in one direction, i.e. it is unidirectional, but it can be made bidirectional
by providing two connections between each network node; this is called a Dual Ring Topology.
In a Ring Topology, the token-passing protocol is used by the workstations to transmit data.
Figure 4: A ring topology comprising four stations, each connected to its neighbors to form a ring.
The most common access method of ring topology is token passing.
Token passing: It is a network access method in which a token is passed from one node to
another node.
Token: It is a frame that circulates around the network.
Operations of Ring Topology
1. One station, known as the monitor station, is responsible for performing ring-maintenance
operations.
2. To transmit data, a station must hold the token. After the transmission is done, it
releases the token so other stations can use it.
3. When no station is transmitting data, the token simply circulates around the ring.
4. There are two token release techniques: early token release, in which the token is
released just after transmitting the data, and delayed token release, in which the token is
released only after an acknowledgment is received from the receiver.
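The numbered steps above can be sketched as a toy token-passing loop (station names and the function are hypothetical; early token release is assumed):

```python
# Toy sketch of token passing: a station transmits only while holding
# the token, then releases it immediately (early token release).
def token_ring(stations, wants_to_send, rounds=1):
    log = []
    for _ in range(rounds):
        for station in stations:   # the token visits each station in ring order
            if station in wants_to_send:
                log.append(f"{station} transmits")
                wants_to_send.discard(station)  # done sending; pass the token on
            else:
                log.append(f"{station} passes token")
    return log

print(token_ring(["A", "B", "C", "D"], {"B", "D"}))
# → ['A passes token', 'B transmits', 'C passes token', 'D transmits']
```

Because only the token holder may transmit, collisions cannot occur, which is why their possibility is minimal in this topology.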
Advantages of Ring Topology
The data transmission is high-speed.
The possibility of collision is minimum in this type of topology.
Cheap to install and expand.
It is less costly than a star topology.
Drawbacks of Ring Topology
The failure of a single node in the network can cause the entire network to fail.
Troubleshooting is difficult in this topology.
The addition of stations in between or the removal of stations can disturb the whole
topology.
Less secure.
Tree Topology
This topology is a variation of the Star topology and has a hierarchical flow of data. In
Tree Topology, protocols like DHCP and SAC (Standard Automatic Configuration) are used.
Figure 5: The various secondary hubs are connected to the central hub, which contains the
repeater. Data flows from top to bottom, i.e. from the central hub to the secondary hubs and
then to the devices, or from bottom to top, i.e. from the devices to the secondary hubs and
then to the central hub. It is a multi-point connection and a non-robust topology, because
if the backbone fails the topology crashes.
Advantages of Tree Topology
It allows more devices to be attached to a single central hub, thus decreasing the distance
the signal must travel to reach the devices.
It allows parts of the network to be isolated and given different priorities.
We can add new devices to the existing network.
Error detection and error correction are very easy in a tree topology.
Drawbacks of Tree Topology
If the central hub fails, the entire system fails.
The cost is high because of the cabling.
If new devices are added, it becomes difficult to reconfigure.
A common example of a tree topology is the hierarchy in a large organization. At the top of the
tree is the CEO, who is connected to the different departments or divisions (child nodes) of the
company. Each department has its own hierarchy, with managers overseeing different teams
(grandchild nodes). The team members (leaf nodes) are at the bottom of the hierarchy,
connected to their respective managers and departments.
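The hierarchical data flow can be modeled with parent pointers: a packet between two devices climbs toward the central hub until the two paths meet, then descends (all node names below are hypothetical):

```python
# Hypothetical tree: a central hub ("root"), two secondary hubs, three devices.
parent = {"hub1": "root", "hub2": "root",
          "pc1": "hub1", "pc2": "hub1", "pc3": "hub2"}

def path_to_root(node):
    path = [node]
    while node in parent:         # climb until the central hub is reached
        node = parent[node]
        path.append(node)
    return path

def route(a, b):
    up, down = path_to_root(a), path_to_root(b)
    meet = next(x for x in up if x in down)   # lowest common ancestor
    return up[:up.index(meet) + 1] + down[:down.index(meet)][::-1]

print(route("pc1", "pc3"))   # → ['pc1', 'hub1', 'root', 'hub2', 'pc3']
print(route("pc1", "pc2"))   # → ['pc1', 'hub1', 'pc2']
```

Two devices under the same secondary hub never burden the central hub, while cross-branch traffic must pass through it, which is why its failure takes down the whole system.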
Hybrid Topology
This topology is a combination of the various types of topologies studied above. A Hybrid
Topology is used when the nodes are free to take any form: the sub-networks can be
individual topologies, such as Ring or Star, or a combination of several of the types seen
above. Each individual topology uses the protocols that have been discussed earlier.
Figure 6: The above figure shows the structure of the Hybrid topology. As seen it contains a
combination of all different types of networks.
Advantages of Hybrid Topology
This topology is very flexible.
The size of the network can be easily expanded by adding new devices.
Drawbacks of Hybrid Topology
It is challenging to design the architecture of the Hybrid Network.
Hubs used in this topology are very expensive.
The infrastructure cost is very high as a hybrid network requires a lot of cabling and
network devices.
A common example of a hybrid topology is a university campus network. The network may
have a backbone of a star topology, with each building connected to the backbone through a
switch or router. Within each building, there may be a bus or ring topology connecting the
different rooms and offices. The wireless access points also create a mesh topology for
wireless devices. This hybrid topology allows for efficient communication between different
buildings while providing flexibility and redundancy within each building.
S-MAC (Sensor MAC) is a low-power, duty-cycled MAC (medium access
control) protocol designed for wireless sensor networks. It saves energy
by reducing the time a node spends in the active (listening and transmitting) state and
lengthening the time it spends in the low-power sleep state. S-MAC achieves
this with a schedule-based duty-cycling mechanism: nodes coordinate their
sleeping and waking times with their neighbors and exchange data only during
predetermined time slots. As a result, there are fewer collisions and less idle
listening, which leads to low energy usage.
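The energy benefit of duty cycling can be estimated with a back-of-the-envelope calculation (the power figures below are assumed for illustration, not taken from any radio datasheet):

```python
# Assumed radio power draw, in milliwatts (illustrative values only).
ACTIVE_MW = 60.0   # listening or transmitting
SLEEP_MW = 0.1     # low-power sleep

def avg_power_mw(duty_cycle):
    """Average power for a node that is awake duty_cycle of the time."""
    return duty_cycle * ACTIVE_MW + (1 - duty_cycle) * SLEEP_MW

always_on = avg_power_mw(1.0)    # 60.0 mW: radio never sleeps
smac_like = avg_power_mw(0.10)   # 6.09 mW at a 10% duty cycle
print(f"{always_on:.1f} mW vs {smac_like:.2f} mW "
      f"(roughly {always_on / smac_like:.0f}x less energy)")
```

The point of the sketch is that average power scales almost linearly with the duty cycle, so shortening the active window dominates every other saving.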
Sensor-MAC (S-MAC) is a MAC protocol created specifically for wireless sensor
networks. Although reducing energy consumption is the main objective, the
protocol also has good scaling and collision-avoidance capabilities, which it
achieves by applying a hybrid of scheduling and a contention-based approach. To
achieve the primary goal of energy efficiency, the main causes of inefficient
energy usage (collisions, overhearing, control-packet overhead, and idle
listening) must be identified, along with the trade-offs that can be made to
lower energy usage.
S-MAC saves energy mostly by preventing overhearing and by sending long
messages efficiently. Periodic sleep is crucial for energy conservation when
idle listening accounts for the majority of total energy usage, and S-MAC's
energy usage is largely unaffected by the volume of traffic. To reduce the
number of transmissions and the amount of data sent over the network, S-MAC
also transmits a long message as a burst of small fragments (message passing).
This improves the network's scalability and helps to reduce overhead.
Because it offers low-power, energy-efficient communication in wireless sensor
networks, S-MAC is widely employed in a variety of applications, including
environmental monitoring, industrial automation, and military sensing.
Limitation of S-MAC
Despite its many advantages, there are some limitations to S-MAC that should
be considered when evaluating its suitability for a particular application:
Complexity: It is relatively complex, requiring a good understanding and a
higher level of technical knowledge to implement and operate, which also makes
it costly to deploy.
Scalability: When embedded in large-scale networks, its performance degrades
for high-speed communication, and it becomes expensive to build.
Latency: Its duty-cycling mechanism saves energy at the cost of increased
latency and reduced per-hop fairness, so real-time applications that require
low latency are affected.
Interference: Although it has a mechanism to avoid interference, it can fail
to do so under high levels of external interference around the sensing nodes.
Overhead: Due to its communication mechanism, it has increased overhead
compared with other MAC protocols.
Overhearing: A node can still receive packets destined for another node,
wasting energy listening to traffic that is not meant for it.
Security: It has no built-in security mechanism, so it is vulnerable to attack.
Control overhead: the RTS/CTS/ACK handshake adds per-packet overhead.
Challenges in S-MAC