
 LESSON PROPER

INTRODUCTION TO INFORMATION TECHNOLOGY AND THE WEB.

PRELIM

1. Digital revolution – digital convergence

Computing Evolution
✔ 1642 Blaise Pascal – mechanical adding machine

✔ Early 1800’s Joseph Jacquard – uses punch cards to control the pattern of the weaving
loom.
✔ 1832 Charles Babbage - invents the Difference Engine

✔ 1890 Herman Hollerith – invents a machine using punch cards to tabulate info for the
Census. He starts the company that would later become IBM.
✔ 1946 – Mauchly and Eckert create the ENIAC, the first electronic computer, unveiled at the University of Pennsylvania
✔ ENIAC Computer
o Miles of wiring
o 18,000 vacuum tubes
o Thousands of resistors and switches
o No monitor
o 3,000 blinking lights
o Cost $486,000
o 100,000 additions per second
o Weighed 30 tons
o Filled a 30x50 foot room
o Lights of Philadelphia would dim when it booted up
✔ 1943
o Base codes developed by Grace Hopper while working on the Mark I programming
project.
o She popularized the term “bug” – an error in a program that causes it to
malfunction.
✔ 1950s
o Vacuum Tubes were the components for the electronic circuitry
o Punch Cards main source of input
o Speeds in milliseconds (thousandths of a second)
o 100,000 additions/sec.
o Used for scientific calculations
o New computers were the rule; cost effectiveness wasn’t
✔ 1960s

o Transistors were the components of the electronic circuitry (smaller, faster, more reliable than vacuum tubes)
o Speeds in microseconds (millionths of a second)
o 200,000 additions/sec.
o Computers In Businesses: Emphasis on marketing of computers to businesses
o Data files stored on magnetic tape
o Computer Scientists controlled operations
✔ Late 1960s – Early 1970s
o Integrated circuit boards
o New input methods such as plotters, scanners
o Software became more important
o Sophisticated operating systems
o Improved programming languages
o Storage capabilities expanded (disks)
✔ 1970’s Integrated circuits and silicon chips lead to smaller microprocessors

✔ Late 80’s to Current


o Improved circuitry – several thousand transistors placed on a tiny silicon chip.
o Pentium chip introduced by Intel
o Modems – communication along telephone wires
o Portable computers: laptops
o Increased storage capabilities: gigabytes
o Emphasis on information needed by the decision maker.

Computer Generation
First Generation (1940-1956) Vacuum Tubes
● The first computers used vacuum tubes for circuitry and magnetic
drums for memory, and were often enormous, taking up entire rooms. They were
very expensive to operate and in addition to using a great deal of electricity,
generated a lot of heat, which was often the cause of malfunctions.
● First generation computers relied on machine language, the lowest-level
programming language understood by computers, to perform operations, and they
could only solve one problem at a time. Input was based on punched cards and
paper tape, and output was displayed on printouts.
● The UNIVAC and ENIAC computers are examples of first-generation computing
devices. The UNIVAC was the first commercial computer delivered to a business
client, the U.S. Census Bureau in 1951.

Second Generation (1956-1963) Transistors


● Transistors replaced vacuum tubes and ushered in the second generation of
computers. The transistor was invented in 1947 but did not see widespread use in
computers until the late 1950s. The transistor was far superior to the vacuum tube,
allowing computers to become smaller, faster, cheaper, more energy-efficient and
more reliable than their first-generation predecessors. Though the transistor still
generated a great deal of heat that subjected the computer to damage, it was a vast
improvement over the vacuum tube. Second-generation computers still relied on
punched cards for input and printouts for output.
● Second-generation computers moved from cryptic binary machine language to
symbolic, or assembly, languages, which allowed programmers to specify
instructions in words. High-level programming languages were also being
developed at this time, such as early versions of COBOL and FORTRAN. These were
also the first computers that stored their instructions in their memory, which moved
from a magnetic drum to magnetic core technology.
● The first computers of this generation were developed for the atomic energy
industry.

Third Generation (1964-1971) Integrated Circuits


● The development of the integrated circuit was the hallmark of the third generation
of computers. Transistors were miniaturized and placed on silicon chips,
called semiconductors, which drastically increased the speed and efficiency of
computers.
● Instead of punched cards and printouts, users interacted with third generation
computers through keyboards and monitors and interfaced with an operating
system, which allowed the device to run many different applications at one time
with a central program that monitored the memory. Computers for the first time
became accessible to a mass audience because they were smaller and cheaper than
their predecessors.

Fourth Generation (1971-Present) Microprocessors


● The microprocessor brought the fourth generation of computers, as thousands of
integrated circuits were built onto a single silicon chip. What in the first generation
filled an entire room could now fit in the palm of the hand. The Intel 4004 chip,
developed in 1971, located all the components of the computer—from the central
processing unit and memory to input/output controls—on a single chip.
● In 1981 IBM introduced its first computer for the home user, and in
1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm
of desktop computers and into many areas of life as more and more everyday
products began to use microprocessors.
● As these small computers became more powerful, they could be linked together to
form networks, which eventually led to the development of the Internet. Fourth
generation computers also saw the development of GUIs, the mouse and handheld
devices.

Fifth Generation (Present and Beyond) Artificial Intelligence


● Fifth generation computing devices, based on artificial intelligence, are still in
development, though there are some applications, such as voice recognition, that are
being used today. The use of parallel processing and superconductors is helping to
make artificial intelligence a reality. Quantum computation and molecular
and nanotechnology will radically change the face of computers in years to come.
The goal of fifth-generation computing is to develop devices that respond to natural
language input and are capable of learning and self-organization.

DIGITAL DEVICES

Evolution of Computer Devices


● Abacus – a simple counting aid, may have been invented in Babylonia (Iraq) in
the fourth century B.C.
● Antikythera mechanism – used for registering and predicting the motion of the
stars and planets
● John Napier’s Bones – John Napier, Baron of Merchiston, Scotland, invents logarithms in
1614. Logarithms allow multiplication and division to be reduced to addition and
subtraction
● William Oughtred’s Slide Rule – After John Napier invented logarithms, and
Edmund Gunter created the logarithmic scales (lines, or rules) upon which slide
rules are based, it was Oughtred who first used two such scales sliding by one
another to perform direct multiplication and division, and he is credited as the
inventor of the slide rule in 1622
● Wilhelm Schickard’s Mechanical Calculator – He built the first mechanical
calculator in 1623. It can work with six digits, and carries digits across columns.
It works, but never makes it beyond the prototype stage.
● Blaise Pascal’s Mechanical Calculator – He built a mechanical calculator in 1642.
It has the capacity for eight digits, but has trouble carrying and its gears tend to
jam.
● Joseph-Marie Jacquard’s Loom – Invented an automatic loom controlled by
punch cards, which played an important role in the development of other
programmable machines, such as computers
● Charles Xavier Thomas’ Arithmometer – The Arithmometer was the first mass-
produced calculator invented by Charles Xavier Thomas de Colmar
● Charles Babbage’s Difference Engine – He invented a massive steam-powered
mechanical calculator designed to print astronomical tables
o Analytical Engine – a mechanical computer that can solve any
mathematical problem
o It uses punch-cards similar to those used by Jacquard Loom and can
perform simple conditional operations
● Augusta Ada Byron – Described the Analytical Engine as weaving “algebraic
patterns just as the Jacquard loom weaves flowers and leaves”, making her the
first programmer.
o Her published analysis of the Analytical Engine is the best record of
programming potential.
● Herman Hollerith’s Tabulating Machine
o An American statistician who developed a mechanical tabulator based on
punched cards to rapidly tabulate statistics from millions of pieces of data
for the US Census.
o He was the founder of one of the companies that later merged and
became IBM.
● Vacuum Tubes
o Lee De Forest’s 1906 “audion” was also developed as a radio detector,
and soon led to the development of the triode tube
o The electronics revolution of the 20th century arguably began with the
invention of the triode vacuum tube.
● Konrad Zuse’s Electronic Calculator
o A German engineer, Zuse completed the first general-purpose
programmable calculator in 1941. He pioneered the use of binary math and
Boolean logic in electronic calculation.
● Atanasoff-Berry Computer
o The first electronic digital computing device
o Conceived in 1937, the machine was not programmable, being designed
only to solve systems of linear equations
o The ABC pioneered important elements of modern computing, including
binary arithmetic and electronic switching elements
● Enigma
o An electro-mechanical rotor machine used for the encryption and
decryption of secret messages. Enigma was invented by German engineer
Arthur Scherbius at the end of World War I.
● Colossus
o Colossus, a British computer used for code breaking, was operational by
December 1943. It was developed with guidance from Alan Turing as a
countermeasure to German wartime ciphers.
● Harvard Mark I
o Howard Aiken and Grace Hopper designed the MARK series of computers
at Harvard University
o The MARK series of computers began with the Mark I, presented on
August 7, 1944.
● Electronic Numerical Integrator and Computer (ENIAC)
o Developed for the Ballistics Research Laboratory in Maryland to assist in
the preparation of firing tables for artillery.
o It was built at the University of Pennsylvania’s Moore School of Electrical
Engineering and completed in November 1945 by John Presper Eckert
and John W. Mauchly
● Williams’ Tube
o The Williams tube, better known as the Williams–Kilburn tube (after inventors
Freddie Williams and Tom Kilburn), developed in 1946 and 1947, was a
cathode ray tube used to electronically store binary data.
o It was the first random-access digital storage device and was used
successfully in several early computers.
● Pilot ACE
o It was built in the United Kingdom at the National Physical Laboratory
(NPL) in the early 1950s.
o It was a preliminary version of the full ACE, which had been designed by
Alan Turing.
o It had approximately 800 vacuum tubes, and used mercury delay lines for
its main memory.
● Transistors
o William Bradford Shockley Jr. was an American physicist and inventor.
o Along with John Bardeen and Walter Houser Brattain, Shockley co-invented
the transistor, for which all three were awarded the 1956 Nobel Prize
in Physics.
o Shockley’s attempts to commercialize a new transistor design in the
1950s and 1960s led to California’s “Silicon Valley” becoming a hotbed of
electronics innovation
● Universal Automatic Computer (UNIVAC)
o The first commercial computer made in the United States
o Used to predict the outcome of the presidential election in 1952.
● Electronic Discrete Variable Automatic Computer (EDVAC)
o Completed under contract for the Ordnance Department in 1952
o Unlike the ENIAC, it was binary rather than decimal
o Eckert and Mauchly and the other ENIAC designers were joined by John
von Neumann in a consulting role
● Integrated Circuit
o The first integrated circuit, or silicon chip, was produced by Jack Kilby
and Robert Noyce
● Computer Gaming
o Steve “Slug” Russell is a programmer and computer scientist most famous
for creating Spacewar!, one of the earliest video games, in 1961 together
with fellow members of the Tech Model Railroad Club at MIT
o It is a two-player game, with each player taking control of a spaceship and
attempting to destroy the other
● Mouse
o Douglas Engelbart invents and patents the first computer mouse
(nicknamed the mouse because the tail came out the end)

● Personal Computer
o The Xerox Alto was one of the first computers designed for individual use
(though not as a home computer), making it arguably what is now called a
personal computer.
o It was developed at Xerox PARC in 1973.
o It was the first computer to use the desktop metaphor and mouse-driven
graphical user interface (GUI)
● Altair 8800 Computer
o MITS releases the Altair 8800 in 1975, widely regarded as the first commercially successful personal computer
● Microsoft and Apple
o Microsoft Corporation
▪ The Microsoft Corporation was founded April 4, 1975 by Bill Gates
and Paul Allen to develop and sell BASIC interpreters for the Altair
8800
o Apple
▪ Apple Computer was founded by Steve Wozniak and Steve Jobs

▪ Apple II – the company’s early mass-market success

● Present
o Massive parallel processing
o Still in development, computer engineers are working toward developing
a functional Artificial Intelligence
o Voice activated and controlled computers
● Future
o Quantum computer
▪ Direct use of distinctively quantum mechanical phenomena, such
as superposition and entanglement, to perform operations on data
o Chemical computer
▪ An unconventional computer based on a semi-solid chemical
“soup” where data is represented by varying concentrations of
chemicals
o DNA computer
▪ A form of computing which uses DNA, biochemistry and molecular
biology, instead of the traditional silicon-based computer
technologies
o Optical computer
▪ A computer that uses light instead of electricity (i.e. photons rather
than electrons) to manipulate, store and transmit data.

Computer and Network Concepts


2. Computer and Network Fundamentals
A network is a collection of computers, servers, mainframes, network devices, peripherals, or other devices connected to one another to allow the sharing of data. An excellent example of a network is the Internet, which connects millions of people all over the world. A typical home network, for instance, has multiple computers and other network devices all connected to each other and to the Internet.
A computer network is a system in which multiple computers are connected to each other to share information and resources.
Characteristics of a Computer Network

● Share resources from one computer to another.

● Create files and store them in one computer,


access those files from the other computer(s) connected over the network.
● Connect a printer, scanner, or a fax machine to one computer within the network
and let other computers of the network use the machines available over the
network.
Following is the list of hardware required to set up a computer network.

● Network Cables

● Distributors

● Routers

● Internal Network Cards

● External Network Cards

Communication and Network Hardware


Network Cables
Network cables are used to connect computers. The most commonly used cable is Category 5 cable with RJ-45 connectors.

Distributors
A computer can be connected to another one
via a serial port but if we need to connect many
computers to produce a network, this serial
connection will not work.
The solution is to use a central body to which
other computers, printers, scanners, etc. can be
connected and then this body will manage or
distribute network traffic.
Router

A router is a type of device which acts as the central point among computers and other
devices that are part of the network. It is equipped with sockets called ports. Computers
and other devices are connected to a router using network cables. Nowadays, routers also
come in wireless models, to which computers can connect without any physical cable.

Network Card
Network card is a necessary component of a computer without which a computer
cannot be connected over a network. It is also known as the network adapter or Network
Interface Card (NIC). Most branded computers have network card pre-installed. Network
cards are of two types: Internal and External Network Cards.
Internal Network Cards
The motherboard has a slot into which an internal network card is inserted.
Internal network cards are of two types: the first type uses a Peripheral
Component Interconnect (PCI) connection, while the second type uses Industry
Standard Architecture (ISA). Network cables are required to provide network access.
External Network Cards
External network cards are of two types: wireless and USB-based. A wireless
network card needs to be inserted into the motherboard; however, no network
cable is required to connect to the network.
Universal Serial Bus (USB)
A USB network card is easy to use and connects via a USB port. Computers
automatically detect a USB card and can install the drivers required to
support it automatically.

AREA NETWORKS

LAN, MAN and WAN are the three major types of network, designed to operate over the
areas they cover. There are some similarities and dissimilarities between them. One of the
major differences is the geographical area they cover: LAN covers the smallest area, MAN
covers an area larger than a LAN, and WAN covers the largest area of all.
There are other types of computer networks as well, such as:
● PAN (Personal Area Network)

● SAN (Storage Area Network)

● EPN (Enterprise Private Network)

● VPN (Virtual Private Network)

Local Area Network (LAN) –



LAN or Local Area Network connects network devices in such a way that personal
computers and workstations can share data, tools and programs. The group of computers
and devices is connected together by a switch, or stack of switches, using a private
addressing scheme as defined by the TCP/IP protocol. Private addresses are unique in
relation to other computers on the local network. Routers are found at the boundary of a
LAN, connecting it to the larger WAN.
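As a rough illustration of the private-addressing idea, here is a minimal sketch using Python's standard ipaddress module; the sample addresses are arbitrary examples, not taken from any particular network.

```python
import ipaddress

# RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) are the
# addresses a typical LAN uses behind its router; anything else is public.
for host in ("192.168.1.25", "10.0.0.7", "8.8.8.8"):
    addr = ipaddress.ip_address(host)
    print(host, "private" if addr.is_private else "public")
```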
Metropolitan Area Network (MAN) –
MAN or Metropolitan Area Network covers a larger area than a LAN and a smaller area
than a WAN. It connects two or more computers that are apart but reside in the same or
in different cities. It covers a large geographical area and may serve as an ISP
(Internet Service Provider). MAN is designed for customers who need high-speed
connectivity. MAN speeds range in terms of Mbps. It is hard to design and maintain a
Metropolitan Area Network.
Wide Area Network (WAN) –
WAN or Wide Area Network is a computer network that extends over a large
geographical area, although it might be confined within the bounds of a state or country. A
WAN could be a connection of LANs connecting to other LANs via telephone lines and radio
waves, and may be limited to an enterprise (a corporation or an organization) or accessible
to the public. The technology is high speed and relatively expensive.
Examples of network devices
● Desktop computers, laptops, mainframes, and servers.

● Consoles and thin clients.

● Firewalls

● Bridges

● Repeaters

● Network Interface cards

● Switches, hubs, modems, and routers.

● Smartphones and tablets.

● Webcams

Network topologies and types of networks


The term network topology describes the relationship of connected devices in terms of
a geometric graph. Devices are represented as vertices, and their connections are
represented as edges on the graph. It describes how many connections each device has, in
what order, and in what sort of hierarchy.
Typical network configurations include the bus topology, mesh topology, ring topology,
star topology, tree topology and hybrid topology.

Most home networks are configured in a tree topology that is connected to the Internet.
Corporate networks often use tree topologies, but they typically incorporate star topologies
and an Intranet.
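Since a topology is just a graph, it can be sketched as an adjacency list. The short Python illustration below (device names are made up) shows how a star and a ring differ in the number of connections per device.

```python
# A topology as a graph: devices are vertices, links are edges.
star = {                      # four hosts all linked to one central switch
    "switch": ["pc1", "pc2", "pc3", "printer"],
    "pc1": ["switch"], "pc2": ["switch"],
    "pc3": ["switch"], "printer": ["switch"],
}
ring = {                      # each device linked to exactly two neighbours
    "a": ["b", "d"], "b": ["a", "c"],
    "c": ["b", "d"], "d": ["c", "a"],
}

# Degree = number of connections each device has.
print({node: len(links) for node, links in star.items()})
print({node: len(links) for node, links in ring.items()})
```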

3. INTERNET HISTORY TIMELINE: ARPANET TO THE WORLD WIDE WEB

Credit for the initial concept that developed into the World Wide Web is typically given
to Leonard Kleinrock. In 1961, he wrote about ARPANET, the predecessor of the Internet,
in a paper entitled "Information Flow in Large Communication Nets." Kleinrock, along with
other innovators such as J.C.R. Licklider, the first director of the Information Processing
Technology Office (IPTO), provided the backbone for the ubiquitous stream of emails,
media, Facebook postings and tweets that are now shared online every day. Here, then, is a
brief history of the Internet:
The precursor to the Internet was jumpstarted in the early days of computing history,
in 1969 with the U.S. Defense Department's Advanced Research Projects Agency Network
(ARPANET). ARPA-funded researchers developed many of the protocols used for Internet
communication today. This timeline offers a brief history of the Internet’s evolution:
1965: Two computers at MIT Lincoln Lab communicate with one another using packet-
switching technology.

1968: Bolt Beranek and Newman, Inc. (BBN) unveils the final version of the Interface
Message Processor (IMP) specifications. BBN wins the ARPANET contract.
1969: On Oct. 29, UCLA’s Network Measurement Center, Stanford Research Institute
(SRI), University of California-Santa Barbara and University of Utah install nodes. The first
message is "LO," which was an attempt by student Charles Kline to "LOGIN" to the SRI
computer from the university. However, the message was unable to be completed because
the SRI system crashed.
1972: BBN’s Ray Tomlinson introduces network email. The Internetworking Working
Group (INWG) forms to address the need for establishing standard protocols.
1973: Global networking becomes a reality as University College of London (England)
and the NORSAR research facility (Norway) connect to ARPANET. The term Internet is
born.
1974: The first Internet Service Provider (ISP) is born with the introduction of a
commercial version of ARPANET, known as Telenet.
1974: Vinton Cerf and Bob Kahn (the duo said by many to be the Fathers of the
Internet) publish "A Protocol for Packet Network Interconnection," which details the
design of TCP.
1976: Queen Elizabeth II hits the “send button” on her first email.
1979: USENET forms to host news and discussion groups.
1981: The National Science Foundation (NSF) provides a grant to establish the
Computer Science Network (CSNET) to provide networking services to university
computer scientists.
1982: Transmission Control Protocol (TCP) and Internet Protocol (IP), the protocol
suite commonly known as TCP/IP, emerge as the standard protocols for ARPANET. This
results in the fledgling definition of the Internet as connected TCP/IP internets. TCP/IP
remains the standard protocol for the Internet.
1983: The Domain Name System (DNS) establishes the
familiar .edu, .gov, .com, .mil, .org, .net, and .int system for naming websites. This is easier to
remember than the previous designation for websites, such as 123.456.789.10.
1984: William Gibson, author of "Neuromancer," is the first to use the term
"cyberspace."
1985: Symbolics.com, the website for Symbolics Computer Corp. in Massachusetts,
becomes the first registered domain.
1986: The National Science Foundation’s NSFNET goes online to connect
supercomputer centers at 56,000 bits per second — the speed of a typical dial-up computer
modem. Over time the network speeds up and regional research and education networks,
supported in part by NSF, are connected to the NSFNET backbone — effectively expanding
the Internet throughout the United States. The NSFNET was essentially a network of
networks that connected academic users along with the ARPANET.
1987: The number of hosts on the Internet exceeds 20,000. Cisco ships its first router.
1989: World.std.com becomes the first commercial provider of dial-up access to the
Internet.
1990: Tim Berners-Lee, a scientist at CERN, the European Organization for Nuclear
Research, develops HyperText Markup Language (HTML). This technology continues to
have a large impact on how we navigate and view the Internet today.
1991: CERN introduces the World Wide Web to the public.
1992: The first audio and video are distributed over the Internet. The phrase "surfing
the Internet" is popularized.
1993: The number of websites reaches 600 and the White House and United Nations go
online. Marc Andreessen develops the Mosaic Web browser at the University of Illinois,
Champaign-Urbana. The number of computers connected to NSFNET grows from 2,000 in
1985 to more than 2 million in 1993. The National Science Foundation leads an effort to
outline a new Internet architecture that would support the burgeoning commercial use of
the network.
1994: Netscape Communications is born. Microsoft creates a Web browser for Windows
95.
1994: Yahoo! is created by Jerry Yang and David Filo, two electrical engineering
graduate students at Stanford University. The site was originally called "Jerry and David's
Guide to the World Wide Web." The company was later incorporated in March 1995.
1995: Compuserve, America Online and Prodigy begin to provide Internet access.
Amazon.com, Craigslist and eBay go live. The original NSFNET backbone is
decommissioned as the Internet’s transformation to a commercial enterprise is largely
completed.
1995: The first online dating site, Match.com, launches.
1996: The browser war, primarily between the two major players Microsoft and
Netscape, heats up. CNET buys tv.com for $15,000.
1996: A 3D animation dubbed "The Dancing Baby" becomes one of the first viral videos.
1997: Netflix is founded by Reed Hastings and Marc Randolph as a company that sends
users DVDs by mail.
1997: PC makers can remove or hide Microsoft’s Internet software on new versions of
Windows 95, thanks to a settlement with the Justice Department. Netscape announces that
its browser will be free.
1998: The Google search engine is born, changing the way users engage with the
Internet.
1998: Internet Protocol version 6 (IPv6) is introduced, to allow for future growth of Internet
addresses. The most widely used protocol at the time is version 4. IPv4 uses 32-bit addresses
allowing for 4.3 billion unique addresses; IPv6, with 128-bit addresses, will allow about
3.4 × 10^38 unique addresses, or 340 trillion trillion trillion.
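As a quick arithmetic check of those figures, here is a short Python aside; the two sample addresses come from the reserved documentation ranges and are purely illustrative.

```python
import ipaddress

# 32-bit vs 128-bit address spaces
print(2 ** 32)     # 4294967296         -> roughly 4.3 billion IPv4 addresses
print(2 ** 128)    # about 3.4 x 10**38 -> the IPv6 address space

# One address from each family (documentation ranges, purely illustrative)
print(ipaddress.ip_address("203.0.113.10").version)   # 4
print(ipaddress.ip_address("2001:db8::a").version)    # 6
```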
1999: AOL buys Netscape. Peer-to-peer file sharing becomes a reality as Napster arrives
on the Internet, much to the displeasure of the music industry.
2000: The dot-com bubble bursts. Web sites such as Yahoo! and eBay are hit by a large-
scale denial of service attack, highlighting the vulnerability of the Internet. AOL merges
with Time Warner.
2001: A federal judge shuts down Napster, ruling that it must find a way to stop users
from sharing copyrighted material before it can go back online.
2003: The SQL Slammer worm spread worldwide in just 10 minutes. Myspace, Skype
and the Safari Web browser debut.
2003: The blog publishing platform WordPress is launched.
2004: Facebook goes online and the era of social networking begins. Mozilla unveils the
Mozilla Firefox browser.
2005: YouTube.com launches. The social news site Reddit is also founded.
2006: AOL changes its business model, offering most services for free and relying on
advertising to generate revenue. The Internet Governance Forum meets for the first time.
2006: Twitter launches. The company's founder, Jack Dorsey, sends out the very first
tweet: "just setting up my twttr."
2009: The Internet marks its 40th anniversary.
2010: Facebook reaches 400 million active users.
2010: The social media sites Pinterest and Instagram are launched.
2011: Twitter and Facebook play a large role in the Middle East revolts.
2012: President Barack Obama's administration announces its opposition to major
parts of the Stop Online Piracy Act and the Protect Intellectual Property Act, which would
have enacted broad new rules requiring internet service providers to police copyrighted
content. The successful push to stop the bill, involving technology companies such as
Google and nonprofit organizations including Wikipedia and the Electronic Frontier
Foundation, is considered a victory for sites such as YouTube that depend on user-
generated content, as well as "fair use" on the Internet.
2013: Edward Snowden, a former CIA employee and National Security Agency (NSA)
contractor, reveals that the NSA had in place a monitoring program capable of tapping the
communications of thousands of people, including U.S. citizens.
2013: Fifty-one percent of U.S. adults report that they bank online, according to a
survey conducted by the Pew Research Center.
2015: Instagram, the photo-sharing site, reaches 400 million users, outpacing Twitter,
which would go on to reach 316 million users by the middle of the same year.
2016: Google unveils Google Assistant, a voice-activated personal assistant program,
marking the entry of the Internet giant into the "smart" computerized assistant
marketplace. Google joins Amazon's Alexa, Siri from Apple, and Cortana from Microsoft.

OPERATION OF THE INTERNET


Where to Begin? Internet Addresses
Because the Internet is a global network of computers, each computer connected to
the Internet must have a unique address. Internet addresses are in the
form nnn.nnn.nnn.nnn, where nnn must be a number from 0 to 255. This address is known
as an IP address. (IP stands for Internet Protocol; more on this later.)
The picture below illustrates two computers connected to the Internet; your computer
with IP address 1.2.3.4 and another computer with IP address 5.6.7.8. The Internet is
represented as an abstract object in-between.

If you connect to the Internet through an Internet Service Provider (ISP), you are usually
assigned a temporary IP address for the duration of your dial-in session. If you connect to the
Internet from a local area network (LAN) your computer might have a permanent IP address
or it might obtain a temporary one from a DHCP (Dynamic Host Configuration Protocol)
server. In any case, if you are connected to the Internet, your computer has a unique IP
address.
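A hedged Python sketch of both ideas follows: validating the nnn.nnn.nnn.nnn form and asking the operating system which local IP address it would use. The address 192.0.2.1 is a reserved documentation address used only so the OS picks an outgoing interface; connecting a UDP socket sends no data.

```python
import ipaddress
import socket

# Validate a dotted-quad address; each nnn must be 0-255
addr = ipaddress.ip_address("1.2.3.4")    # raises ValueError for e.g. "1.2.3.456"
print(addr, addr.version)                 # 1.2.3.4  4

# Ask the OS which local IP address it would route traffic through
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("192.0.2.1", 80))          # no packets are sent for UDP connect
    print(s.getsockname()[0])             # this machine's current IP address
```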
Protocol Stacks and Packets
So your computer is connected to the Internet and has a unique address. How does it
'talk' to other computers connected to the Internet? An example should serve here: Let's
say your IP address is 1.2.3.4 and you want to send a message to the computer 5.6.7.8. The
message you want to send is "Hello computer 5.6.7.8!". Obviously, the message must be
transmitted over whatever kind of wire connects your computer to the Internet. Let's say
you've dialed into your ISP from home and the message must be transmitted over the
phone line. Therefore the message must be translated from alphabetic text into electronic
signals, transmitted over the Internet, then translated back into alphabetic text. How is this
accomplished? Through the use of a protocol stack. Every computer needs one to
communicate on the Internet and it is usually built into the computer's operating system
(i.e. Windows, Unix, etc.). The protocol stack used on the Internet is referred to as the
TCP/IP protocol stack because of the two major communication protocols used. The
TCP/IP stack looks like this:

Protocol Layer – Comments

● Application Protocols Layer – Protocols specific to applications such as WWW, e-mail, FTP, etc.
● Transmission Control Protocol Layer – TCP directs packets to a specific application on a computer using a port number.
● Internet Protocol Layer – IP directs packets to a specific computer using an IP address.
● Hardware Layer – Converts binary packet data to network signals and back (e.g. Ethernet network card, modem for phone lines, etc.)

If we were to follow the path that the message "Hello computer 5.6.7.8!" took from
our computer to the computer with IP address 5.6.7.8, it would happen something
like this:

Diagram 2

1. The message would start at the top of the protocol stack on your computer and
work its way downward.
2. If the message to be sent is long, each stack layer that the message passes through
may break the message up into smaller chunks of data. This is because data sent
over the Internet (and most computer networks) are sent in manageable chunks. On
the Internet, these chunks of data are known as packets.
3. The packets would go through the Application Layer and continue to the TCP layer.
Each packet is assigned a port number. Ports will be explained later, but suffice to
say that many programs may be using the TCP/IP stack and sending messages. We
need to know which program on the destination computer needs to receive the
message because it will be listening on a specific port.
4. After going through the TCP layer, the packets proceed to the IP layer. This is where
each packet receives its destination address, 5.6.7.8.
5. Now that our message packets have a port number and an IP address, they are ready
to be sent over the Internet. The hardware layer takes care of turning our packets
containing the alphabetic text of our message into electronic signals and
transmitting them over the phone line.
6. On the other end of the phone line your ISP has a direct connection to the Internet.
The ISP’s router examines the destination address in each packet and determines
where to send it. Often, the packet's next stop is another router. More on routers
and Internet infrastructure later.

7. Eventually, the packets reach computer 5.6.7.8. Here, the packets start at the bottom
of the destination computer's TCP/IP stack and work upwards.
8. As the packets go upwards through the stack, all routing data that the sending
computer's stack added (such as IP address and port number) is stripped from the
packets.
9. When the data reaches the top of the stack, the packets have been re-assembled into
their original form, "Hello computer 5.6.7.8!"
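The whole round trip can be sketched with Python's socket module, which simply hands data to the operating system's TCP/IP stack. This is a minimal sketch, not the exact mechanism described above, and the address 5.6.7.8 and any port number are purely illustrative.

```python
import socket

MESSAGE = b"Hello computer 5.6.7.8!"

def send(dest_ip: str, dest_port: int) -> None:
    # Hand the message to the local TCP/IP stack; the stack splits it into
    # packets, stamps each with a port number (TCP layer) and destination IP
    # (IP layer), and the hardware layer puts them on the wire.
    with socket.create_connection((dest_ip, dest_port)) as conn:
        conn.sendall(MESSAGE)

def receive(listen_port: int) -> bytes:
    # On the destination computer the packets climb back up the stack, the
    # routing headers are stripped, and the original text reappears.
    with socket.create_server(("", listen_port)) as srv:
        conn, _peer = srv.accept()
        with conn:
            return conn.recv(1024)
```

In use, the destination machine would call receive(5000) first and the sender would then call send("5.6.7.8", 5000); both the address and the port 5000 are made-up examples.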

Networking Infrastructure
So now you know how packets travel from one computer to another over the Internet.
But what's in-between? What actually makes up the Internet? Let's look at another
diagram:

Diagram 3
Here we see Diagram 1 redrawn with more detail. The physical connection through the
phone network to the Internet Service Provider might have been easy to guess, but beyond
that might bear some explanation.
The ISP maintains a pool of modems for their dial-in customers. This is managed by
some form of computer (usually a dedicated one) which controls data flow from the
modem pool to a backbone or dedicated line router. This setup may be referred to as a port
server, as it 'serves' access to the network. Billing and usage information is usually
collected here as well.
After your packets traverse the phone network and your ISP's local equipment, they are
routed onto the ISP's backbone or a backbone the ISP buys bandwidth from. From here the
packets will usually journey through several routers and over several backbones, dedicated
lines, and other networks until they find their destination, the computer with address
5.6.7.8. But wouldn’t it be nice if we knew the exact route our packets were taking
over the Internet? As it turns out, there is a way...
If you use traceroute, you’ll notice that your packets must travel through many things
to get to their destination. Most have long names such as sjc2-core1-h2-0-0.atlas.digex.net
and fddi0-0.br4.SJC.globalcenter.net. These are Internet routers that decide where to send
your packets. Several routers are shown in Diagram 3, but only a few. Diagram 3 is meant
to show a simple network structure. The Internet is much more complex.
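As a small, hedged example of trying this yourself, the snippet below shells out to the system's trace-route tool, assuming it is installed; the command is "traceroute" on most Unix-like systems ("tracert" on Windows), and the destination host is only an example.

```python
import subprocess

# Invoke the system trace-route tool; it prints each router (hop) on the path.
subprocess.run(["traceroute", "www.example.com"], check=False)
```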
4. Web 1.0, Web 2.0 and Web 3.0 and their differences

Web 1.0 –
Web 1.0 refers to the first stage of the World Wide Web’s evolution. In Web 1.0 there were
only a few content creators, and the vast majority of users were consumers of content.
Personal web pages were common, consisting mainly of static pages hosted on ISP-run
web servers or on free web hosting services.

In Web 1.0, advertisements on websites while surfing the Internet were banned. Ofoto, an
online digital photography website on which users could store, share, view and print
digital pictures, is an example of a Web 1.0 service. Web 1.0 acted as a content delivery
network (CDN) that enabled pieces of information to be showcased on websites. It could be
used for personal websites, it charged users per page viewed, and it had directories that
enabled users to retrieve particular pieces of information.

Four design essentials of a Web 1.0 site include:


● Static pages.

● Content is served from the server’s file-system.

● Pages built using Server Side Includes or Common Gateway Interface (CGI).

● Frames and Tables used to position and align the elements on a page.
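As a loose sketch of the "static pages served from the server's file-system" idea, Python's built-in http.server can stand in for a Web 1.0 host; the port number is arbitrary and the pages served are simply whatever files sit in the current directory.

```python
import http.server
import socketserver

PORT = 8000
# SimpleHTTPRequestHandler maps request paths directly to local files,
# which is essentially how a static Web 1.0 site was served.
handler = http.server.SimpleHTTPRequestHandler

with socketserver.TCPServer(("", PORT), handler) as httpd:
    print(f"Serving static pages on http://localhost:{PORT}/")
    httpd.serve_forever()
```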

Web 2.0 –
Web 2.0 refers to worldwide websites which highlight user-generated content, usability
and interoperability for end users. Web 2.0 is also called the participative social web. It does
not refer to a modification of any technical specification, but to a change in the way Web
pages are designed and used. The transition was beneficial, although it is hard to pinpoint
exactly when the changes occurred. Web 2.0 allows interaction and collaboration with each
other in a social media dialogue as creators of user-generated content in a virtual community.
Web 2.0 is an enhanced version of Web 1.0.

Web browser technologies such as AJAX and JavaScript frameworks are used in Web 2.0
development. Recently, AJAX and JavaScript frameworks have become a very popular
means of creating Web 2.0 sites.
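To make the AJAX idea concrete, here is a hedged server-side sketch in Python of the kind of small JSON endpoint an AJAX call would fetch to update part of a page without reloading it; the /api/greet path and the port are invented for the example.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    # A browser script would fetch /api/greet in the background (AJAX) and
    # update the page in place, without a full reload.
    def do_GET(self):
        if self.path.startswith("/api/greet"):
            body = json.dumps({"message": "hello from the server"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("", 8001), ApiHandler).serve_forever()
```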

Five major features of Web 2.0 –


● Free sorting of information, which permits users to retrieve and classify information
collectively.
● Dynamic content that is responsive to user input.

● Information flows between site owner and site users by means of evaluation &
online commenting.
● Developed APIs to allow self-usage, such as by a software application.

● Web access broadens, extending from the traditional Internet user base to a
wider variety of users.

Usage of Web 2.0 –


The social Web contains a number of online tools and platforms where people share
their perspectives, opinions, thoughts and experiences. Web 2.0 applications tend to
interact much more with the end user. As such, the end user is not only a user of the
application but also a participant, through the eight tools mentioned below:

● Podcasting

● Blogging

● Tagging

● Curating with RSS

● Social bookmarking

● Social networking

● Social media

● Web content voting

Web 3.0 –
It refers to the evolution of web utilization and interaction, which includes altering the
Web into a database. It enables the upgrading of the back-end of the web, after a long
period of focus on the front-end (Web 2.0 has mainly been about AJAX, tagging, and other
front-end user-experience innovations). Web 3.0 is a term used to describe many
evolutions of web usage and interaction along several paths. In Web 3.0, data isn’t owned
but instead shared, and services show different views of the same web / the same data.

The Semantic Web (3.0) promises to organize “the world’s information” in a more
reasonable way than Google can ever attain with its existing engine schema. This is
particularly true from the perspective of machine comprehension as opposed to human
understanding. The Semantic Web necessitates the use of a declarative ontological
language like OWL to produce domain-specific ontologies that machines can use to reason
about information and draw new conclusions, rather than simply matching keywords.
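To give a feel for that kind of machine reasoning, here is a toy Python sketch of a triple store with a single inference rule; the vocabulary ("isA", "subClassOf") and the names are invented for illustration and are not real OWL syntax.

```python
# A toy fact base of subject-predicate-object triples.
facts = {
    ("Ada", "isA", "Programmer"),
    ("Programmer", "subClassOf", "Person"),
}

def infer(triples):
    """If X isA C and C subClassOf D, conclude X isA D."""
    result = set(triples)
    for (x, p1, c) in triples:
        for (c2, p2, d) in triples:
            if p1 == "isA" and p2 == "subClassOf" and c == c2:
                result.add((x, "isA", d))
    return result

print(infer(facts))   # now also contains ("Ada", "isA", "Person")
```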

Below are 5 main features that can help us define Web 3.0:

1. Semantic Web
The succeeding evolution of the Web involves the Semantic Web. The Semantic Web
improves web technologies in order to create, share and connect content through search
and analysis based on the capability to comprehend the meaning of words, rather than
relying on keywords or numbers.

2. Artificial Intelligence
Combining this capability with natural language processing, Web 3.0 allows computers to
understand information the way humans do, in order to provide faster and more relevant
results. They become more intelligent and better able to fulfil the requirements of users.

3. 3D Graphics
The three-dimensional design is being used widely in websites and services in Web 3.0.
Museum guides, computer games, ecommerce, geospatial contexts, etc. are all examples
that use 3D graphics.

4. Connectivity

With Web 3.0, information is more connected thanks to semantic metadata. As a result,
the user experience evolves to another level of connectivity that leverages all the available
information.

5. Ubiquity
Content is accessible through multiple applications, every device is connected to the web,
and the services can be used everywhere.

Difference between Web 1.0, Web 2.0 and Web 3.0 –


WEB 1.0                | WEB 2.0                  | WEB 3.0
Mostly Read-Only       | Wildly Read-Write        | Portable and Personal
Company Focus          | Community Focus          | Individual Focus
Home Pages             | Blogs / Wikis            | Live-streams / Waves
Owning Content         | Sharing Content          | Consolidating Content
Web Forms              | Web Applications         | Smart Applications
Directories            | Tagging                  | User Behaviour
Page Views             | Cost Per Click           | User Engagement
Banner Advertising     | Interactive Advertising  | Behavioural Advertising
Britannica Online      | Wikipedia                | The Semantic Web
HTML / Portals         | XML / RSS                | RDF / RDFS / OWL

 ACTIVITY/ EXERCISE/ ASSIGNMENT

Activities/Assessments No. 1

Written exercise with 75% passing score.


1. What are the different networks according to area? Discuss each of them. (10 points)
2. Discuss and compare the evolution of computing. (10 points)

Activities/Assessments No. 2

Written exercise with 75% passing score.


1. Discuss and compare the similarities and differences of web 1.0 up to web 4.0. (20 points)

 EQUIPMENT OR MATERIALS TO BE USED (for Face-to-face)

Computer

 PRACTICAL EXERCISES (for Face-to-face)

 SUPPLEMENTARY LEARNING MATERIALS

 REFERENCES

3G E-learning, LLC. (2019). 3GE collection on library science: library automation. New
York: 3G E-learning, LLC.

3G E-Learning, LLC. (2018). Cloud computing for libraries. New York: 3G E-learning,
LLC.

Besueña, Jerelyn S. (2019). Introduction to information technology and computer fundamentals. Manila: Unlimited Books Library Services & Pub., Inc.
