INTERNET TECHNOLOGY
Unit – I
Introduction to Internet:
1969 • ARPANET launches, creating the core of what will become the internet. The
project will grow from an initial four-node network to gradually connect an increasing number of
computer science projects at universities and government institutions.
1972 • Electrical engineer Robert Kahn gives the first public demonstration of the
ARPANET at the International Computer Communication Conference.
1974 • Kahn enlists computer scientist Vinton Cerf to help expand ARPANET by
integrating other packet-switching networks into a unified “internetwork.”
1978 • Kahn and Cerf develop the internet’s core communications protocol suite,
TCP/IP (Transmission Control Protocol/Internet Protocol), providing standards for error-free
data transmission (TCP) and identifying networked destinations (IP).
1988 • A Cornell researcher’s network malware — the first computer worm — shows
the inherent vulnerability of the internet.
1989 • America Online (AOL), CompuServe, and Prodigy begin to emerge as the Big
Three online service providers. They introduce millions of users to dial-up messaging, email, and
web portals over the coming years, but try to confine them within proprietary, closed platforms.
1990 • Tim Berners-Lee, a scientist at the European physics lab CERN, develops a
prototype for the World Wide Web, an application layer running on top of the internet that’s
based on hypertext.
1993 • CERN officially puts the World Wide Web in the public domain. The Mosaic
web browser, created at the National Center for Supercomputing Applications at the University
of Illinois, brings the web to a general audience.
1995 • The consumer web takes shape as the digital land grab begins. Amazon, Yahoo,
and eBay launch. Microsoft introduces Internet Explorer, sparking the first browser war.
1997 • The phrase “the Great Firewall of China” first appears in a Wired article in
reference to the Chinese government’s desire to control internet access.
1998 • Netscape releases the source code of its browser suite, creating the Mozilla
project and inspiring the open-source software movement. Later that year, Google is founded.
1999 • Napster, a peer-to-peer file-sharing service, sparks debate and lawsuits about
intellectual property and digital property rights.
2000 • The dot-com bubble, inflated in part by Netscape’s wildly successful 1995 IPO,
begins to pop.
2001 • BitTorrent, a decentralized communication protocol for peer-to-peer file sharing,
is released.
2004 • Facebook is created, signaling a new era of social media on the internet.
2006 • Amazon Web Services begins marketing IT infrastructure to businesses, and the
term “cloud computing” gains traction, referring to the storage and processing of data and
applications on remote (and typically proprietary) servers.
2007 • Apple introduces the iPhone, which will quickly evolve into a dominant platform
of the mobile web.
2009 • Satoshi Nakamoto launches the Bitcoin network, a digital cash system on a
decentralized, cryptographically-secure peer-to-peer protocol — the first blockchain.
2010 • The Federal Communications Commission asserts the principles of net neutrality
and an open internet, holding that internet service providers (ISPs) should treat all lawful
traffic equally.
2013 • NSA contractor Edward Snowden leaks classified files that show how the
National Security Agency is monitoring the communications of US citizens with help from
telecom firms, as well as surveilling online communications by tapping into the servers of
internet companies like Google, Facebook, Microsoft, and Yahoo.
2015 • The Ethereum Virtual Machine, an open-source, blockchain computing platform,
launches.
2016 • The DFINITY Foundation is founded in Zurich to build the Internet Computer.
By 1985, some 100 networks, both public-domain and commercial, were using the TCP/IP
protocol suite. By 1987, the number had grown to 200.
In 1989, it exceeded five hundred and by the end of 1991, the Internet had grown to
include some 5,000 networks in over 36 countries, serving over 700,000 host computers used by
over 4,000,000 people.
Over the years, there have been waves of commercialization of the Internet. Originally,
commercial efforts mainly comprised vendors providing the basic networking products and
service providers offering connectivity and basic Internet services.
The Internet has now become almost a "commodity" service, and much of the latest
attention has been on the use of this global information infrastructure for support of other
commercial services.
This has been tremendously accelerated by the widespread and rapid adoption of
browsers and the World Wide Web technology, allowing users easy access to information linked
throughout the globe.
New product developments are readily accessible as downloads, providing increasingly
sophisticated information services on top of the basic Internet data communications.
Traffic and capacity of the public Internet grew at rates of about 100% per year in the
early 1990s. There was then a brief period of explosive growth in 1995 and 1996. During those
two years, traffic grew by a factor of about 100, roughly tenfold (about 1,000%) per year.
Who owns the internet? The answer is both no one and everyone.
The internet is more of a concept than an actual tangible entity, and it relies on a physical
infrastructure that connects networks to other networks.
The internet is essentially a system that allows different computer networks to
communicate with one another using a standardized set of rules. No single entity owns these
rules; they exist to facilitate and standardize communication.
The internet is a global collection of inter-networked systems that depend on sets of rules
known as protocols. These protocols allow computers to communicate across networks. It relies
on an expansive infrastructure of routers, Network Access Points, and computer systems.
It’s one giant system made up of many much smaller systems. While the smaller systems
can be owned, the all-encompassing giant system cannot.
The physical networks that carry internet traffic between different systems are known as
the internet backbone. In the early days of the internet, ARPANET made up this backbone. Today,
several large corporations provide the routers and cable that make it up. Some of these
corporations include:
UUNET
Level 3
Verizon
AT&T
Lumen Technologies
Sprint
IBM
These companies are Internet Service Providers (ISPs), which means that anyone wanting to
access the internet must ultimately work with these companies.
There are also smaller ISPs, such as cable and DSL companies. These companies are not
part of the internet’s backbone; rather, they negotiate with the larger ISP companies
mentioned above for internet access.
Every ISP has its own network. Many companies have Local Area Networks that link to the
internet. Each of these networks is both a part of the internet and its own separate entity. If you
own a device that connects to the internet, that means your device is part of the enormous inter-
network system, making you part-owner of the internet.
Web Services
Web services allow the exchange of information between applications on the web. Using
web services, applications can easily interact with each other. Web services are offered using the
concept of utility computing.
Video Conferencing
Video conferencing, or video teleconferencing, is a method of communicating by
two-way video and audio transmission with the help of telecommunication technologies.
It began in the 1960s as a US-army-funded research project, then evolved into a public
infrastructure in the 1980s with the support of many public universities and private companies.
The various technologies that support the internet have evolved over time, but the way it
works hasn't changed that much: the internet is a way to connect computers together and ensure
that, whatever happens, they find a way to stay connected.
Point-to-Point:
Point-to-point networks contain exactly two hosts, such as computers, switches, routers,
or servers, connected back to back using a single piece of cable. Often, the receiving end of one
host is connected to the sending end of the other, and vice versa.
If the hosts are connected point-to-point logically, there may be multiple intermediate
devices between them. But the end hosts are unaware of the underlying network and see each
other as if they were connected directly.
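The back-to-back wiring described above, with each host's sending end tied to the other's receiving end, can be sketched with a connected socket pair from Python's standard library, which behaves like a minimal two-host point-to-point link:

```python
import socket

# Two directly connected endpoints: a minimal point-to-point "link".
a, b = socket.socketpair()

# The sending end of one host feeds the receiving end of the other.
a.sendall(b"ping")
reply_at_b = b.recv(4)

# ...and vice versa, in the other direction.
b.sendall(b"pong")
reply_at_a = a.recv(4)

print(reply_at_b.decode(), reply_at_a.decode())   # ping pong
a.close()
b.close()
```

Logically connected end hosts behave the same way even when switches or routers sit between them, which is why the socket abstraction hides the intermediate devices.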
Bus Topology
In a bus topology, all devices share a single communication line or cable. A bus
topology can run into problems when multiple hosts send data at the same time; therefore, a bus
topology either uses CSMA/CD technology or designates one host as Bus Master to resolve the
issue. It is one of the simplest forms of networking, in which a failure of one device does not
affect the other devices. However, failure of the shared communication line stops all the other
devices from functioning.
Both ends of the shared channel have a line terminator. Data is sent in only one
direction, and as soon as it reaches the extreme end, the terminator removes the data from the line.
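The CSMA/CD scheme mentioned above recovers from collisions on the shared line with binary exponential backoff. A minimal sketch of the backoff rule, assuming the classic 10 Mbps Ethernet slot time of 51.2 µs:

```python
import random

SLOT_TIME = 51.2e-6  # classic 10 Mbps Ethernet slot time, in seconds

def backoff_delay(collisions: int) -> float:
    """After the n-th collision, CSMA/CD waits a random number of
    slot times drawn from [0, 2**min(n, 10) - 1] before retrying."""
    k = min(collisions, 10)
    slots = random.randrange(2 ** k)
    return slots * SLOT_TIME

# After 3 collisions, a host waits between 0 and 7 slot times.
print(backoff_delay(3) <= 7 * SLOT_TIME)   # True
```

Randomizing the wait makes it unlikely that the same two hosts collide again on their retransmissions.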
Star Topology
All hosts in a star topology are connected to a central device, known as the hub device,
using a point-to-point connection; that is, there exists a point-to-point link between each host and
the hub. The hub device can be a Layer-1 device such as a hub or repeater, a Layer-2 device such
as a switch or bridge, or a Layer-3 device such as a router or gateway.
Ring Topology
In a ring topology, each host machine connects to exactly two other machines, creating a
circular network structure. When one host wants to communicate with a host that is not adjacent
to it, the data travels through all the intermediate hosts. To connect one more host to the existing
structure, the administrator may need only one extra cable.
Failure of any host breaks the whole ring; thus, every connection in the ring is a point of
failure. Some designs therefore employ a second, backup ring.
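The cost of non-adjacent communication in a ring can be made concrete with a small hop-count helper. This sketch assumes traffic may travel in either direction (as when a backup ring is present), so the shorter arc is taken:

```python
def ring_hops(n: int, src: int, dst: int) -> int:
    """Hops between two hosts on an n-host ring, numbered 0..n-1,
    taking the shorter of the two directions around the circle."""
    d = (dst - src) % n
    return min(d, n - d)

# On an 8-host ring, host 0 reaches host 5 in 3 hops by going the other way.
print(ring_hops(8, 0, 5))   # 3
```

With a strictly one-directional ring the hop count is simply `(dst - src) % n`, which is why traffic to a "nearby" host can still traverse almost the entire ring.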
Mesh Topology
In this type of topology, a host is connected to one or more other hosts. A full-mesh
topology has every host in a point-to-point connection with every other host; a partial mesh has
some hosts in point-to-point connections with only a few of the others.
Hosts in a mesh topology also work as relays for other hosts that do not have direct
point-to-point links.
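The wiring cost that makes a full mesh expensive follows from simple counting: n hosts fully meshed need one link per pair, i.e. n(n-1)/2 links. A quick sketch:

```python
def full_mesh_links(n: int) -> int:
    """Number of point-to-point links in a full mesh of n hosts:
    one link per unordered pair of hosts."""
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(n, "hosts need", full_mesh_links(n), "links")
```

The quadratic growth (4 hosts need 6 links, 50 hosts need 1,225) is why full meshes are rare and partial meshes common.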
Tree Topology
Also known as hierarchical topology, this is the most common form of network
topology in use today. This topology is an extension of the star topology and inherits
properties of the bus topology.
This topology divides the network into multiple levels or layers. In LANs especially, a
network is typically divided into three tiers of network devices: access, distribution, and core.
Hybrid Topology
A network structure whose design contains more than one topology is said to be a hybrid
topology. A hybrid topology inherits the merits and demerits of all the topologies it incorporates.
The combined topologies may contain attributes of star, ring, bus, and daisy-chain
topologies. Most WANs are connected by means of a dual-ring topology, and the networks
connected to them are mostly star topology networks.
1. Twisted wire: two insulated copper wires twisted into pairs, used for ordinary telephone
communications, with four twisted pairs of copper cabling for Internet networks. Transmission
speeds range from 2 Mbps to 100 Mbps.
2. Coaxial cables: copper or aluminum wire wrapped with an insulating and flexible material,
widely used for cable television systems, office buildings, and local area networks generally.
Transmission speeds range from 200 Mbps to over 500 Mbps.
3. Optical fiber cable: one or more filaments of glass fiber wrapped in protective layers: not
affected by electromagnetic radiation. Transmission speeds may exceed 1000 Gbps.
1. Terrestrial microwave transmitters and receivers placed on 'line of sight' locations on tops of
buildings and elevated ground, usually assisted by relay stations spaced approximately 30 miles
apart.
2. Cellular and PCS systems using radio communications technologies, which are often specific
to individual countries. Each area or cell employs a low-power transmitter or radio relay antenna
device to relay calls from one cell to the next.
3. Wireless LANs using both high- and low-frequency technologies to enable communication
between several devices in a limited area (e.g. Wi-Fi, Bluetooth, WiMAX, UWB and ZigBee).
Networks are commonly designated as LAN (local area network), WAN (wide area
network), MAN (metropolitan area network), PAN (personal area network), VPN (virtual
private network), CAN (campus area network), and SAN (storage area network).
Wireless communication spans the electromagnetic spectrum from 9 kHz to 300 GHz.
Satellite signals travel at the speed of light, but the distances involved induce a time delay called
'latency'. A 71,000 km signal path between transmitter and receiver, for example, induces a
one-way latency of about 237 ms, or roughly 473 ms for a round trip, often noticeable on
international calls.
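The latency arithmetic can be checked directly: propagation delay is distance divided by the speed of light, so a 71,000 km path gives about 237 ms one way, and the quoted 473 ms corresponds to the round trip. A minimal sketch:

```python
SPEED_OF_LIGHT_KM_S = 299_792  # speed of light in vacuum, km/s

def latency_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds over a given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S * 1000

one_way = latency_ms(71_000)
print(round(one_way))        # 237  (one-way delay in ms)
print(round(2 * one_way))    # 474  (round-trip delay in ms)
```

Note that 71,000 km is roughly twice the altitude of a geostationary satellite (about 35,786 km), i.e. the up-and-down path of a single satellite hop.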
- Physical addressing (MAC), which identifies a hardware interface, specifying the
manufacturer and the product serial number. It is unique and can theoretically be used to
communicate. But as it does not include any networking information, communication based only
on MAC addresses would scale very poorly (huge routing tables). This is why another kind of
addressing (logical) was chosen.
- Logical addressing (IP), which identifies a logical host (not always the same hardware)
on a given network. Every address is built through the concatenation of a netid (network
identifier) and a hostid (host identifier). Consequently, routing tables can be greatly reduced (one
entry per network instead of one entry per host).
- Friendly addressing (DNS), introduced to help users remember (or even guess)
the name of a host. Whereas the previous addressing schemes are numeric, DNS is
alphanumeric; in other words, DNS addresses look like understandable names. They are built
through the hierarchical concatenation of different domains and a hostname.
Note that the first two addressing schemes are mandatory for Internet communications. IP
addresses are used to communicate between networks. Inside local networks, they are converted
to MAC addresses (thanks to ARP for IPv4).
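The netid/hostid split described above can be explored with Python's standard `ipaddress` module. The address below is illustrative only (192.0.2.0/24 is a reserved documentation range); the /24 prefix marks the first 24 bits as the netid and the remaining 8 bits as the hostid:

```python
import ipaddress

# A hypothetical interface address with a 24-bit network prefix.
iface = ipaddress.ip_interface("192.0.2.42/24")

print(iface.network)                 # 192.0.2.0/24  (the netid routers match on)
print(iface.ip)                      # 192.0.2.42    (the full logical address)
print(iface.network.num_addresses)   # 256 hosts covered by one routing entry
```

This is exactly the routing-table reduction mentioned above: one entry for the /24 network covers all 256 host addresses within it.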
Intranet:
An intranet, on the other hand, is a local or restricted network that enables people to
store, organize, and share information within an organization. An intranet applies internet
technology privately: because it is a private network, not just anyone can access it. An intranet
has a limited number of users and provides a limited set of information to those users.
From there onwards, its use has diffused rapidly throughout the world, with around
7 billion users of wireless devices currently employing internet technology. With about
7.7 billion people in the world, and limited use among those under 5 years of age, it is
almost safe to say that all of humanity is now connected to the internet. There are, however,
variations in the bandwidths available and in the efficiency and cost of its use.
It’s been postulated that about 95% of all information available has been digitized and
made accessible via the internet. The internet has also led to a complete transformation in
communication, availability of knowledge, and social interaction. However, as with all
major technological changes, the internet has had both positive and negative effects on society.
It provides effective communication using email and instant messaging services to any
part of the world.
It improves business interactions and transactions, saving on vital time.
Banking and shopping online have made life less complicated.
You can access the latest news from any part of the world without depending on the TV
or newspaper.
Education has received a huge boost as uncountable books and journals are available
online from libraries across the world. This has made research easier. Students can now
opt for online courses using the internet.
Application for jobs has also become easier as most vacancies are advertised online with
online applications becoming the norm.
Professionals can now exchange information and materials online, thus enhancing
research.
How the Internet is governed has been a question of considerable debate since its earliest
days. Indeed, how diverse sets of stakeholders collaborate to manage this important global
resource has an impact on the nature of the Internet as a trusted global platform for innovation,
creativity, and freedom of expression. The Internet is a decentralized network of networks and
those who rely on it help to define its policies.
Over the years, this decentralized and community-driven management approach has
supported the tremendous growth and innovation that have defined the Internet’s success, and it
reflects the early design choices of the technical community in the adoption and implementation
of Internet standards. By 2005, what had traditionally been referred to as private, bottom-up
coordination evolved into the “multistakeholder” model of Internet governance that exists today.
Unit I: Anatomy of the Internet
The Internet is a vast collection of computers linked by cable and satellites, not controlled by
any one authority, but all operating under common network protocols. The term 'Internet'
includes both the hardware (satellites, cable, routing devices and computers) and the software
(programs and network protocols) that enable computers to communicate with each other.
Many types of hardware help the packets on their way. These are:
Routers
Routers ensure that all data gets sent to its intended destination by the most efficient
route. They open the IP packets of data to read the destination address and calculate the
best route, either to the final destination or to another router closer to that destination,
repeating this until the destination is reached.
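The "best route" calculation routers perform can be sketched as a shortest-path search; real routing protocols such as OSPF build on Dijkstra's algorithm, which this minimal sketch implements over an invented link map with made-up costs:

```python
import heapq

def best_route(links, src, dst):
    """Dijkstra's shortest-path search over a cost-weighted link map.
    Returns (total_cost, path) or (inf, []) if dst is unreachable."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in links.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# A hypothetical four-router network with per-link costs.
links = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(best_route(links, "A", "D"))   # (4, ['A', 'B', 'C', 'D'])
```

Note how the cheapest route A to D goes through two intermediate routers rather than taking the direct but expensive links, mirroring the hop-by-hop forwarding described above.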
Servers
Equally important is the server, a powerful computer (or often groups of computers)
that handle requests for web pages, email data, and an increasing variety of services.
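The request/response role of a server can be sketched with Python's standard-library HTTP server. The handler and the message it returns are illustrative only; a real web server would serve files or application output:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """Answers every GET request with a fixed plain-text body."""
    def do_GET(self):
        body = b"hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Bind to a free local port and serve requests in a background thread.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client request: the same round trip a browser performs for a web page.
url = f"http://127.0.0.1:{server.server_address[1]}/"
with urllib.request.urlopen(url) as resp:
    reply = resp.read()
server.shutdown()
print(reply.decode())   # hello from the server
```

The same listen/handle/respond loop, scaled up across groups of machines, is what production web, mail, and application servers do.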
Repeaters
Repeaters maintain the signal strength and use technologies appropriate to the
transmission medium. Even backbone fiber-optical cables may carry optical amplifier
repeaters in the form of erbium-doped amplifiers spaced several tens of kilometers
apart.
Hubs
Transmission step-downs at Internet Exchange Points are achieved by the use of hubs,
electronic devices with multiple ports. Transmission rates vary considerably across these hubs.
Gateways
Technically, a gateway is a network node designed to interface with another network
that uses a different protocol. Not only must the gateway contain protocol translators,
but also impedance-matching devices, rate converters, fault isolators, and/or signal
translators. Mutually acceptable administrative procedures also have to be agreed
between the two networks.
Bridges
A bridge connects numerous local area networks for the purpose of collaboration
and/or exchange of information. All networks have to be using the same network
protocols.
Client Computers
Client computers are those used by the general public, on which people run applications
or make requests for Internet services.
************