
Chapter 7

The Internet Communications Revolution


No one could have predicted in any detail the Internet communications revolution we
have collectively witnessed over the past three decades. Some futurists, like Alvin
Toffler and Marshall McLuhan, hinted at the coming of a “global electronic age” that
would begin in the late 1980s or 1990s. Yet their hallmark works, which now read more
like prophetic poetry, offered little sense of exactly how this transformation would
occur. Even our most innovative communications technology contemporaries remain
astonished at how far and how fast we have come in diffusing the Internet as a
medium. The World Wide Web (Web), electronic mail (email), file transfer protocol
(FTP), instant messaging, telnet, Java, Flash, content management systems (CMS): all
of this new-fangled vocabulary, so soon, so quickly. It truly is enough to make the
head spin. Furthermore, what’s emerging in the pipeline of future Internet
developments promises not only to make our heads spin once again, but may arrive so
suddenly that it gives us all a case of whiplash!

So, a word to the wise concerning computers, the Internet, and their offspring
communications systems: realize and remember that you are always going to be a
student. Pure technologists accept that they must continuously learn new
technologies, update their skills, and become increasingly specialized. Indeed, the
best online communicators also know this, and try to take
in as much information about the changing medium as possible. Your path to
effective Internet communications should be the same. Make sure that you read
books (like this one), academic journals, popular Internet magazines, and quality
online sites for your intake of technology knowledge. Also know that to
thoroughly understand the Internet and how it works, you must separate its
elements. There’s the hardware, which makes the Internet function. There’s the
software, too, which makes the hardware more humanly accessible. There’s the vast
amount of information stored on that hardware. And last, but certainly not least,
there are the billions of people (actual human beings with information needs and
motives) who create and consume the data online. This last element is frequently
overlooked, and it should not be: there are real people out there, communicating,
with informational, persuasive, and entertainment needs waiting to be gratified.
Be aware, and be careful.

Before we unravel what the Internet is and how it has caused a massive
communications revolution, we must appreciate the “network of networks” by way
of a simple metaphor. Try to think of the Internet as a vast ocean of information. It
truly is immense in its reach and depth. According to a June 2005 Netcraft.com
survey, there were roughly 65,000,000 “active” Web sites online; by June 2013, the
number of sites had reached 672,985,183, with Google’s search engine indexing at
least 4.36 billion pages. Predictions held that the World Wide Web would reach
1 billion sites in 2013 and 2 billion sites in 2015, yet there were already 2.4 billion
Internet domain names registered in 2012. The growth of Internet-based information
since 1991, when the World Wide Web created by Tim Berners-Lee and Robert
Cailliau was first released to the public, is nothing short of phenomenal.

This ocean of information, obviously, cannot be taken in all at once. It must be
sampled, bit by bit, byte by byte. Once you can appreciate this metaphor, and
understand the Internet as an enormous body of information, you are ready to begin
dissecting this massive grid. To do so, this chapter is divided into two major
sections: (I) a brief history of the Internet and (II) how the Internet works.

I. A Brief History of the Internet

In Chapter 2, we considered the 1945 bombings of Hiroshima and Nagasaki as
watershed events that initiated the study of atomic-scale force and energy
throughout the world. Likewise, these two events, and the litany of research that
ensued, produced a significant change in how we determined our computer-based
communications should operate. You see, now that “Pandora’s Box” had been opened,
the other nations of the world also sought the prize of becoming an atomic power.
This made the threat of a nuclear exchange all the more real and created a
significant amount of social-psychological tension among the peoples of nations
acquiring nuclear arsenals. The threat of nuclear war was no longer merely
theoretical; it was now a real possibility. Further heightening tensions among the
superpowers was the successful launching of the Soviet Sputnik I in October 1957
and then the U.S. Explorer I in January 1958. Given the invention of artificial
satellites, scientists correctly assumed that space itself could become militarized
with nuclear payloads. This would make enemy strikes undetectable by radar, and
almost instantaneous. The proverbial “race for space” was more about understanding
the military advantages of zero gravity than it was about egalitarian exploration.

In 1958, two important research agencies were formed in the U.S.: the Advanced
Research Projects Agency (ARPA) and the National Aeronautics and Space
Administration (NASA). Both of these agencies worked together to understand the
dynamics of strategic satellite usage and the military advantages of space
exploration. Their collective mission was to understand the “satellite attack”
scenario. One of ARPA’s findings occurred in 1959, when Los Alamos Laboratory
physicists were blast-testing in the New Mexico desert. These researchers found
that nuclear explosions registered a shockwave of energy upon initial detonation
which rendered all electrical equipment — lights, automobile batteries, radios,
telephones, and computers — lifeless for a significant duration (2 to 3 days).
Scientists correctly concluded that the energy surge effectively disrupted the orderly
flow and charge of electrons, making electrical equipment useless near the
epicenter. For example, a 1.4-megaton blast detonated some 250 miles above
Johnston Island in the Pacific Ocean knocked out Hawaii’s power grid (which was
over 800 miles away) for 2 full days. This event also troubled RAND and NASA
engineers, who believed that these electromagnetic pulses (EMPs) could be used
tactically to disrupt ground communications (of all electronic forms) ahead of aerial
bombing attacks.

▪ RAND Blueprints a Galactic Network

The question on the minds of all physicists and engineers of the 1950s and 1960s
was: how do we survive the horrors of an atomic war? For the issue was not “if”
atomic war would occur, but “when.” While U.S. citizens were buying rubber suits,
oxygen tanks, gas masks, and stockpiling canned food for their backyard fallout
shelters, RAND was trying to solve the problem of maintaining communications
during an atomic war or EMP strike involving the Soviets.
Specifically, they were extremely concerned as to how they could preserve Pacific
Coast to Atlantic Coast communications. Their challenge was to create a
communications network of immense proportions that would be pervasive and
nearly impossible to destroy. It was a formidable task, indeed. Instead of radio or
telephone communications, these scientists turned to computers as communications
vehicles. This is where the term computer-mediated communications first began to
appear in the research literature, and it ushered in a coterie of scientists with
interdisciplinary academic pedigrees — all focused on making computers
communicate with one another over extended distances, even in the event of a
global thermonuclear holocaust.

The scientists at RAND set out to create a distributed network that would work not
only on the ground but also through airspace via satellites. The name given to the
project was the “Galactic Network.” You see, previously, computer networks
functioned on a traditional command-and-control model that was highly
centralized. One computer served as the “master” of all communications, and the
other computers connected to it were “slave” machines. RAND’s idea was to create
an all-encompassing network that allowed for constant peer-to-peer
communications, where no single computer would be critical to the overall
functioning of the systemic network. This notion flattened the communications
hierarchy, and made all computers on the grid equals, or peers. It was a highly
unconventional idea, produced as a direct result of the threat of nuclear
annihilation. As a result, one could easily argue that the Internet was invented
because of nuclear weapons. The ultimate centralized argument made real
(smashing atoms until they release nature’s inherent energy) was the direct cause
of the ultimate decentralized argument (freeing communication until there is no
central structure).

RAND assumed that specific regions and cities in the U.S. would be eliminated
in an all-out nuclear exchange. The model also assumed that no single computer
on the network, or communication line, was decisively critical. Communications,
therefore, could be accomplished using a series of alternative routes on the all-
encompassing network. Each computer would look for computers that were “active”
on the network, and send messages to and through them. Thus, if a city fell to an
attack, or a particular communications line went out, the remaining online systems
would route messages through the residual network via an array of alternate routes.
In the same way, if a computer or communications line came back into operation,
all of the computers would immediately recognize it and fold it back into the
strategy for relaying data.

Let’s see how this works in practice. If you wanted to relay a message from Los
Angeles, California (A), to Boston, Massachusetts (E), it would typically pass
through several intermediate points, such as Dallas, Texas (B), Atlanta, Georgia (C),
and possibly Philadelphia, Pennsylvania (D). But what if Dallas (B) and Atlanta (C)
no longer existed? How would computer-based communications get from Los Angeles
to Boston? Certainly, other cities could carry the communiqué instead of the
Dallas-Atlanta-Philadelphia (B-C-D) path. In the RAND model, the answer is yes,
because it defies the orthodox command structure. Communications could, instead,
flow from Los Angeles (A) to Chicago, Illinois (X) and through Cleveland, Ohio (Y) to
reach Boston (E). And while this principle works on a macroscopic national scale, it
also works on a microscopic scale: within a regional geography, a state’s boundaries,
or even a city.
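
To make the routing idea concrete, here is a minimal Python sketch, built on a toy
map of cities and links invented purely for this example. It is not a real routing
protocol; it simply searches for any surviving path and shows that a message can
still arrive when Dallas and Atlanta are removed.

    # Illustrative city graph only; real Internet topology is vastly larger.
    LINKS = {
        "Los Angeles": ["Dallas", "Chicago"],
        "Dallas": ["Los Angeles", "Atlanta"],
        "Atlanta": ["Dallas", "Philadelphia"],
        "Philadelphia": ["Atlanta", "Boston"],
        "Chicago": ["Los Angeles", "Cleveland"],
        "Cleveland": ["Chicago", "Boston"],
        "Boston": ["Philadelphia", "Cleveland"],
    }

    def find_route(source, destination, failed=frozenset()):
        """Depth-first search for any path that avoids failed (destroyed) cities."""
        stack = [[source]]
        while stack:
            path = stack.pop()
            node = path[-1]
            if node == destination:
                return path
            for neighbor in reversed(LINKS.get(node, [])):
                if neighbor not in path and neighbor not in failed:
                    stack.append(path + [neighbor])
        return None  # no surviving route at all

    print(find_route("Los Angeles", "Boston"))
    # -> ['Los Angeles', 'Dallas', 'Atlanta', 'Philadelphia', 'Boston']
    print(find_route("Los Angeles", "Boston", failed={"Dallas", "Atlanta"}))
    # -> ['Los Angeles', 'Chicago', 'Cleveland', 'Boston']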

Textbox 7.1:
On Packets and Protocols

• Communication Protocols Debut

The Research and Development (RAND) Corporation, based in Santa Monica,
California, worked with the Massachusetts Institute of Technology (MIT)
and the University of California at Los Angeles (UCLA) during the 1960s to
construct a collection of computer-based protocols that would instruct data to
move around broken (or even clogged) access points on the “galactic network.” A
protocol, quite simply, is a set of rules governing the logical transfer of
information from one computer to the next. Currently, there is a wide variety of
protocols available. However, the one most commonly used to access the Internet
today is the TCP/IP suite. The principal theorists who developed these first
protocols in the 1960s were Americans Paul Baran (of RAND in Santa Monica)
and Lawrence Roberts (of MIT). But, their computer-mediated communication
theories were just notions in academic articles, and had not yet been thoroughly
tested by the rigors of programming.

• NPL Net and Packets

Intriguingly, it would be the British who would put these American ideas into
play. At the U.K. National Physical Laboratory (NPL) in 1967, Donald W. Davies
developed the NPL Network using the first packets (partitioned bundles of data)
to transfer information. Based upon the published American premises, the British
team agreed with Baran and Roberts that any data traveling on such a distributed
system would also have to mirror the structure of the network. In other words, if
the network were going to function on a distributed model, then so too should the
data traveling on it. As a direct result, they designed a system that would
divide data streams into smaller packets, bundles of data that made traffic more
manageable across the network. This way, if a few data packets of the whole
transmission got lost, at least the general message would make it through the
network. And, if a piece or segment of the transmission were somehow
intercepted, it would make very little sense without the remainder of the data.
This system was the first attempt to design a formal network protocol.

▪ ARPANET Debuts and Sprawls into the Public Sector

On the West Coast from 1968 to 1970, the Department of Defense (through ARPA)
began building a computer “internetwork” that could communicate with all of the
preexisting computer networks. By conceptual definition, an “internetwork” is a
mother network of lesser networks, which allows computers on differing networks
to communicate with one another. This emerging network was called ARPAnet,
named after the agency that created it to serve the budding research and
development industry in the West. In the beginning, only research and development
firms were allowed to connect to ARPAnet. But little did ARPA realize that it was
actually creating the first links of the Internet, for the idea was so successful that
it spread quickly from coast to coast in the U.S. and eventually overseas through
communications satellites. On September 2, 1969, a computer at the University of
California, Los Angeles (UCLA) became the first host on ARPAnet; the Stanford
Research Institute (SRI), the University of California at Santa Barbara (UCSB),
and the University of Utah were connected within 2 months.
UCLA, SRI, UCSB, and Utah were creating the first Internet and experiencing
unforeseen benefits of the miniature network they had established: (1) enormous
streams of data could be easily transferred from one system to the next, (2)
mainframe systems could be accessed remotely, allowing precious “system time”
on valuable computers to be maximized, and (3) messages could be exchanged from
one remote user to the next, creating the first online communications.

All of this was very exciting. People were using computers to communicate and
were doing so in unanticipated ways. Still, the system connecting the four test
institutions would periodically crash, a direct result of an unreliable protocol set
governing internetwork communications. So, to improve ARPAnet’s reliability,
UCLA, SRI, UCSB, and Utah worked with ARPA to develop a more reliable protocol
standard. That standard would be the Network Control Protocol (NCP), which
emerged in 1970, allowing for an immediate geographic expansion of the
internetwork to other institutions and government agencies. The NCP proved to be
durable, providing the rule set governing communications until the debut of the
combined Transmission Control Protocol / Internet Protocol (TCP/IP) suite in 1983.
ARPAnet snaked its way eastward across the U.S., eventually connecting West and
East; the “internetwork” moniker was popularly dropped, and people using the
system simply began calling it “the Internet.”

Textbox 7.2:
The Transmission Control Protocol / Internet Protocol
(TCP/IP) “Stack”

The TCP/IP “stack” is, in fact, a bundle of two protocol sets. When analysts
discuss network protocols, they often refer to TCP/IP simply as the “stack.” The
stack refers to the fact that the protocols are organized into layered communications
levels. The TCP/IP stack contains 4 layers: Application, Transport, Network, and
Link:

Application layer. Protocols: Telnet, FTP, RPC, email, chat, WWW clients.
Function: users interact with the network.

Transport layer. Protocol: TCP.
Function: ensures that packets are received in the order sent and retransmits if
there is an error.

Network layer. Protocol: IP.
Function: determines, based on the destination IP address, how to get the data to
its destination.

Link layer. Devices: network card, device driver.
Function: communicates with the hardware; strips header information from
incoming packets and adds header information to outgoing packets.

This is a very simplified view and many protocols and details have been omitted,
but it is useful for a brief overview. Let’s look at a quick example. When an
application such as an email client sends a message to another email address, several steps
happen very quickly.

1. The destination name is converted to an IP address via the Domain Name
System.
2. Email sends the message to the transport layer.
3. Transmission Control Protocol (TCP) breaks the message up into packets
and adds header information, such as the destination and originating IP
addresses.
4. The transport layer sends the packets to the network layer.
5. Internet Protocol (IP) takes over at the network layer and determines which
router the packets should use to head toward their destination. The packets
are then handed to the link layer.
6. The network card sends the packets out through the network, where they
head to the router, on their way to the destination.

Again, this is very simplified, but HTTP, email, and other clients and protocols
interact with TCP/IP at the application layer, working together to send and
receive packets of information.
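
For readers who want to see those steps from the programmer’s side, here is a
minimal Python sketch using the standard socket library. The host name
mail.example.com and port 25 are placeholders that will not actually answer;
everything beneath the first DNS call (the packetizing, numbering, routing, and
reassembly described above) is handled by the operating system’s TCP/IP stack,
not by this script.

    import socket

    HOST = "mail.example.com"   # hypothetical mail server; substitute a real one
    PORT = 25                   # the standard SMTP port

    # Step 1: the destination name is converted to an IP address via DNS.
    try:
        ip_address = socket.gethostbyname(HOST)
        print(HOST, "resolved to", ip_address)
    except socket.gaierror:
        raise SystemExit(f"DNS could not resolve {HOST}")

    # Steps 2-6: opening a TCP connection and writing bytes hands the message to
    # the transport layer; TCP packetizes and numbers it, IP routes the packets,
    # and the link layer pushes them onto the wire.
    try:
        with socket.create_connection((ip_address, PORT), timeout=10) as conn:
            conn.sendall(b"HELO example.org\r\n")
            reply = conn.recv(1024)      # TCP reassembles the reply, in order
            print(reply.decode(errors="replace"))
    except OSError as err:
        print(f"Could not reach {HOST}:{PORT} ({err})")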

Between 1972 and 1982, ARPAnet continued to connect the major research and
development institutions throughout the U.S. In fact, it did so with such success
that ARPAnet staff began using the network for personal as well as professional
purposes. Most of the network traffic occurring on ARPAnet was, at that time,
electronic mail of a personal nature. Hence, ARPAnet’s network administrators
took counteraction by declaring that the network should be used only for
professional exchange. The online edict did very little to change things. People
continued to communicate online with one another. So, in 1983, ARPA moved all
of its upper-echelon research and development affiliates off of ARPAnet,
physically onto another communications network altogether, called MILNET.
This new network was built with the intention of keeping the research community
insulated and isolated from the online prattle parlor that ARPAnet had become. It
did not work. Even the researchers were interpersonally communicating online.
Something major was afoot.

▪ U.S. National Science Foundation (NSF) Funds the “Public Internet”

You see, another major context was looming on the horizon, which also utilized the
TCP/IP suite: the public Internet. Private companies began selling Internet access
through dial-up modems in retail outlets, making the Internet more ubiquitous
than any network the architects of the 1968 “Galactic Network” could have imagined. The
Internet continued to grow progressively, access point by access point, port by port,
hub by hub. And then, in 1984, the U.S. National Science Foundation (NSF)
obtained access to considerable governmental funding earmarked for the
development of the “public Internet” through its Office of Advanced Scientific
Computing (OASC). The mission was gargantuan but strategically possible:
cultivate the Internet throughout the entire U.S. by deploying the best high-speed
technologies available. NSF’s accompanying strategy was to award grants to
agencies willing to build their own infrastructures, employing private enterprise to
do the work. However, NSF first sought to create a high-speed backbone Internet
network running from coast to coast along ARPAnet’s old interstate path, via five
supercomputing facilities. The name of this network became NSFNET.

In 1988, NSF upgraded the U.S. information backbone stretching from the
northeastern corridor to the southwestern coast of California using T1 fiber optics
(very fast lines). And then, in 1991, NSF again upgraded the backbone with T3
fiber (extremely fast lines). To put the difference between T1 and T3 into
perspective, imagine a 25 times increase in data throughput from 1988 to 1991.
This “backbone” represents the fabled “information superhighway” of popular
legend. Today, most Internet development in the U.S. focuses on two major
initiatives: (1) Internet2, known as the Abilene Network, and (2) wireless fidelity
(Wi-Fi) access points.

Textbox 7.3:
Internet2 or the Abilene Network

A consortium of U.S. universities, corporations, and government agencies has
created multiple partnerships (with many different organizations) to advance
Internet technology further. As it stands today, the Internet will not sustain
unlimited growth. “Internet2” development focuses on increasing bandwidth
beyond the current bundled T3 potential and building advanced technologies such
as online digital laboratories. See: http://www.internet2.edu for more information
about this intriguing U.S.-based initiative.

These are just the major developments which led to the rise of a public Internet in
the U.S. We more closely examine the international developments in Chapter 8 of
this book, titled “The Digital Divide.” Still, since many of these major
communications breakthroughs occurred in the U.S. first, this information stands
as necessary Internet history. There were certainly many other developments along
the way. Let’s take a look at some of them now.

▪ Internet History Timeline

1945 The U.S. dropped atomic bombs on Hiroshima and Nagasaki.


1957 The Soviets launched the world's first artificial satellite, Sputnik I.
1958 The Americans answered the launch of Sputnik I with Explorer I.
1958 The National Aeronautics and Space Administration (NASA) was
formed.
1958 Scientists at the Research and Development (RAND) Corporation (a
private think tank based in Santa Monica, California), were given the
task of developing a communications network that could withstand a
nuclear attack.
1959 Physicists investigating the effects of atomic test blasts in Los Alamos,
New Mexico found that the electromagnetic pulse (EMP) surging from
a nuclear explosion rendered much of their electronic equipment
inoperative.

1960s RAND worked in conjunction with the Massachusetts Institute of
Technology (MIT) and the University of California at Los Angeles
(UCLA) to build a set of communications protocols that would direct
information around broken or clogged points on a network.
1967 At the National Physical Laboratory, Donald W. Davies developed the
NPL Network using the first packets to transfer data.
1968 The Advanced Research Projects Agency (ARPA) began to develop an
internetwork on the West Coast of the U.S.
1969 UCLA became the first host connected to ARPANET.
1970 The Network Control Protocol (NCP) was developed; its release allowed
ARPANET to begin its first expansion phase.
1972 Ray Tomlinson invented the first email program that could deliver
messages across ARPANET.
1975 Debut of a durable new network protocol called the Transmission
Control Protocol (TCP).
Late 1970s Many corporations and state agencies invested a considerable amount
of money and energy into building their own private computer networks.
1983 When ARPA announced that it would be switching from the Network
Control Protocol (NCP) to TCP/IP (Transmission Control
Protocol/Internet Protocol) on its network hosts, many others jumped
on the bandwagon as well.
1984 One year after ARPA retreated from ARPANET, the U.S. National
Science Foundation (NSF) stepped in to spearhead Internet
development.
1986 NSF introduced 5 supercomputing facilities that would provide
the nation's major research institutions with high speed computing
access.
1988 NSFNET upgraded its dedicated lines to T1 fiber optics.
1991 NSFNET upgraded its network to T3 fiber optics, allowing for nearly
25 times the flow of data over its T1 predecessor.
1991 Commercial ISPs were allowed to join in.
1991 Tim Berners-Lee's World Wide Web was released to the public.
1992 The National Science Foundation awarded a 5-year contract for
managing domain name registration to Network Solutions, Inc.
1993 Students at the University of Illinois wrote a Web browser called
Mosaic.
1995 Web traffic surpassed all other forms of online data flow to become the
primary information transferred on the Internet.

II. How the Internet Works

To many, the Internet is a complete and utter mystery. It is some sort of magical
electronic place where information flows around on the screen and emails transfer
from one point to the next. However, the Internet has far less to do with “magic”
and far more to do with computer-based logic.

This next section details the working logic of the Internet, explaining exactly how
things work online. To understand how the Internet works, you really need to
understand four main issues: (1) the basics of TCP/IP operations, (2) what domain
names are, and how information is referenced on the Internet, (3) how the hardware
(e.g., servers, routers, and clients) functions to make the Internet “come alive,” and
(4) how to connect to the Internet. These four layers of information will give you a
solid appreciation for the Internet and how it operates.

▪ TCP/IP: Transmission Control Protocol / Internet Protocol

The chief means of Internet data transmission (of any type) comes from the
previously mentioned TCP/IP stack (see Textbox 7.2 above). Whether you are
sending an email, pulling up a Web page, or transferring files through the Internet,
you are utilizing the very durable TCP/IP protocol suite. As the name suggests, this
is a combination of protocol sets. Essentially, it is the job of TCP/IP to break all
transmissions from a computer going through the Internet into small bundles of
information called “packets.” These packets are then assigned an assembly
number, sent over the Internet (through different geographical routes), and
reassembled at the receiving computer by TCP/IP. So, in short, TCP/IP has two
main purposes: (1) to segment data so that it can be expeditiously transmitted in
piecemeal fashion, and (2) to number that data by packet so that it can be reordered
once received.
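
A tiny Python sketch can illustrate those two jobs. This is not the real TCP
implementation; it assumes a toy packet format in which each packet simply carries
its byte offset as a sequence number, so the receiver can put shuffled packets back
in order.

    import random

    def segment(data, size=8):
        """Split data into (sequence_number, chunk) packets."""
        return [(offset, data[offset:offset + size])
                for offset in range(0, len(data), size)]

    def reassemble(packets):
        """Sort packets by sequence number and stitch the payload back together."""
        return b"".join(chunk for _, chunk in sorted(packets))

    message = b"Packets may take different routes across the Internet."
    packets = segment(message)
    random.shuffle(packets)              # simulate packets arriving out of order
    assert reassemble(packets) == message
    print(len(packets), "packets reassembled correctly")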

As each of these packets travels along its unique path through the Internet, the
computers (routers) it encounters relay the packet to the destination computer
accordingly, much like a hot potato being tossed along until it gets where it needs
to go. For example, a file moving from Johannesburg, South Africa, to Moscow,
Russia could go through 10-15 different routers before it reaches its final
destination. The original RAND engineers who designed the Internet believed that
the data traveling the network should be as distributed as the network itself;
therefore, TCP/IP scrambles and reconstitutes data, thereby protecting it from a
host of security risks. Doing so also helps the Internet make the most efficient
use of its available bandwidth, optimizing the overall speed of the network.

▪ Domain Names: Addressing the Internet

A simple question: Which of these two pieces of information is easier for you to
commit to memory? 64.236.91.23 or www.cnn.com? Without a doubt, www.cnn.com
is an easier moniker to recall than 64.236.91.23, right? Well, either one of these two
strings of data will work as an address in your Web browser. Try it… You see,
computers are programmed to recall the geographic position of an Internet
computer by a given numeric address, much like the one above. However, because
this string of numbers is so hard for human beings to remember for each and every
Web site or email address suffix (the last part of an email address following the @
symbol), Internet programmers have devised a system which matches alpha
characters with numeric digits, thus creating a twin system of finding and
delivering information online.

What is truly important about domain names, which form the core of uniform
resource locators (URLs), is that they help people navigate the Internet
and World Wide Web (WWW) easily. When we assign a domain name address to a
server’s Internet protocol number (e.g., 64.236.91.23) we are making the address
both memorable and easily accessible, which is one of the beautiful things about
this global computer network. While computers will still seek out the IP numbers,
humans can rest assured that the domain names will function effortlessly.
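
If you would like to watch this twin system at work, Python's standard socket
library exposes the same lookups a Web browser performs. The sketch below needs a
live Internet connection, and because the numeric address for www.cnn.com has
almost certainly changed since the figure quoted above, it simply reports whatever
the Domain Name System returns today.

    import socket

    NAME = "www.cnn.com"                       # any well-known domain will do

    ip = socket.gethostbyname(NAME)            # human-friendly name -> numeric IP
    print(NAME, "currently resolves to", ip)

    try:
        host, _, _ = socket.gethostbyaddr(ip)  # numeric IP -> canonical host name
        print(ip, "reverse-resolves to", host)
    except OSError:
        print("No reverse DNS entry found for", ip)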

▪ Hardware: The Machines that Make the Internet Work

There are three different machines you need to be able to recognize in any
discussion on the Internet: (1) servers, (2) routers, and (3) user or client terminals.
Each of these machines, of course, serves a different purpose. Servers, for example,
are the robust computers that store our data, and are connected directly to the
Internet. These machines work 24 hours a day, 7 days a week, providing constant
redundant-backup service (referred to as RAID-array redundancy) to all Internet
users. They are the machines where our data are stored for quick recall. Routers,
on the other hand, are rapid machines that have one central purpose: to send data
expeditiously to its destination point. They are not “data centers” like servers are,
by any stretch of imagination. Their purpose is to simply send data on to its final
destination address. Routers, in sum, are the heart of the Internet. They push the
data forward to its geographical destination. A user terminal, or end station, is
represented by the common PC. Such machines are frequently referred to as “client
stations” or “client machines.” They carry out the end user’s requests by uploading
or downloading specific data to or from a server. The client machine may use one of
many types of programs to achieve this (interestingly, these Internet programs are
also called “client” programs); however, most use a Web browser or email
application (like MS Outlook) to do this. Of course, user terminals come in many
types, from the technologically simple to the complex. It all depends upon the
particular user’s computing needs.

One thing not yet mentioned, but very important, is the massive public
telecommunications infrastructure of telephone wires, wireless hot spots, dedicated
subscriber lines, cable, and satellite transmissions that brings computers together.
All of these cables and conduits are a pivotal force in the further democratic
distribution of the Internet into our homes, businesses, organizations, schools, and
the like. Without them, we would simply have a series of computers without any
means of collaboration or communication. And so, it becomes imperative for you to
understand just how fast (or not so fast) the connection to your computer might be.
Do you know how to connect to the Internet? Are you really experiencing all that
the Internet has to
offer?

▪ Connecting to the Internet: Getting “Online”

There are, of course, several ways that you can establish Internet connectivity for
yourself. If you are attempting to connect to the Internet from your home, you will
clearly need a PC and an Internet Service Provider (ISP) of some sort. There are a
few ISPs out there that are free (but they rely on lots of online advertising to make
their way), as well as a healthy variety of paid Internet subscription options —
including dial-up through a telephone line, high-speed cable, and bi-directional
satellite service. Your options will vary depending upon what is available in your
area. Most locales have telephone service, some have digital subscriber lines (DSL),
and some have cable Internet. If all else fails, satellite
Internet is always an option, although not everyone can afford that option. It is
always best to ask your local computer-savvy friends what type of Internet
connection they use, and then find some good regional literature to make your final
decision.

But, one thing is clear: it is not mandatory to buy a new supercomputer to connect
to the Internet. Any computer (whether a PC-based machine or a Macintosh) built
within the last four to five years will probably be adequate for doing
straightforward Internet tasks, including email and Web browsing. In most cases, a
modem (modulator/demodulator) is required to connect to the Internet. A modem
translates the data from electrical signals in your computer into a signal that can
be carried by the ISP’s medium (like a telephone or cable signal), and then
translates it back again when it receives a signal. So, know that you will likely
need to acquire a modem of some kind. Your computer may already come with a
built-in telephone modem. But, if a carrier-specific modem is required, your ISP
may rent or sell one to you. Do not be concerned. This is common practice, and
completely necessary to make your computer communicate with the Internet.

If you’re linking to the Internet from work, a cyber café, or a public library, then
chances are there are a variety of options before you. For starters, your workplace
is likely to have a local area network (LAN): a collection of computers, printers,
and shared network drives on which to store files. If the LAN is spread out over a
vast geography, such as several buildings on a campus or across a city (or multiple
cities), it is best understood as a wide area network (WAN). Customarily, a
company will connect its LAN directly to a leased Internet trunk. You might hear
the terms “T1” and “T3” used to describe types of fiber optic lines. Unless you are
an engineer, you will not have to worry about these lines. They are part of the
“backbone” of the Internet, the main thoroughfares on the information
superhighway. Smaller corporations, and places such as a library or cyber café,
will likely subscribe to an ISP.

ISP Speeds: Choosing the Right Service for Your Needs

When shopping for your ISP, it is extremely important to appreciate how data line
speed is calculated. First, you should know that data travels in two directions: (1)
downloading and (2) uploading. Downloading refers to data transmissions received
by your computer from another computer over the Internet. Downloading is when
we “pull” data through the Internet onto our local computers. On the other
hand, uploading refers to data transmissions sent from your machine to another
computer through the Internet. The speed at which these two events occur is called
bandwidth.

More often than not, download transmissions come into your modem and computer
at a much faster rate than do your upload transmissions. This is because most of
our typical Internet activity is download traffic. Whether you are receiving email,
downloading Web pages, or streaming media — you are ordinarily downloading.
Data transfers are measured in Kbps (kilobits per second: one thousand bits per
second, or roughly 125 characters per second), Mbps (megabits per second: one
million bits per second, or roughly 125,000 characters per second), or Gbps (gigabits
per second: one billion bits per second, or roughly 125 million characters per
second). So, ISPs normally advertise the amount of bandwidth they provide in both
download and upload speeds. For connectivity over leased lines and LAN
connections, where the bandwidth is significantly higher, the speed is measured in
Gbps. Within the decade, we will likely be measuring these transfer rates in terabits
per second (Tbps), roughly 125 billion characters per second, as our computational
equipment improves and new fiber lines are installed. Phenomenal, no?
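
As a back-of-the-envelope illustration, the short Python sketch below turns those
units into download times for a single file. The line speeds and the 5-megabyte file
size are arbitrary assumptions chosen for the example, not figures from any
particular ISP, and real transfers run somewhat slower because of protocol overhead.

    KILO, MEGA = 1_000, 1_000_000              # decimal unit multipliers

    def download_seconds(file_size_bytes, line_speed_bps):
        """Ideal transfer time: total bits divided by the line speed."""
        return (file_size_bytes * 8) / line_speed_bps

    file_size = 5 * MEGA                       # a 5-megabyte file
    for label, bps in [("56 Kbps dial-up", 56 * KILO),
                       ("1.5 Mbps DSL", 1.5 * MEGA),
                       ("8 Mbps cable", 8 * MEGA)]:
        print(f"{label:>16}: {download_seconds(file_size, bps):8.1f} seconds")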

According to a 2010 report from the Federal Communications Commission, 74
percent of the nation's adults had Internet access in their homes, but 6 percent were
still relying solely on dial-up Internet connections to go online. Traditional
telephone companies and other ISPs such as America Online offer basic dial-up
service, for which you will need a telephone modem and a standard telephone line.
Usually, they offer very inexpensive packages, for a set number of connection hours
per month. This is a much slower service, running at about 56 Kbps; and, while
connected to your ISP, your phone line is busy. Some ISPs do incorporate
“download acceleration” software to boost the connection speed higher than 56
Kbps. As there are a wide variety of computer operating systems and technical
issues associated with connecting to a dial-up ISP, ask your ISP’s technical support
to walk you through setup. There are some “for free” dial-up services out there.
But, they ordinarily require you to keep an advertising window open to stay
connected, as “banner ads” will be displayed while you work.

In the early 1990s, as the Internet emerged as an important communications
medium, community concerns regarding Internet access gave birth to numerous
volunteer-based freenets. There are long-established freenets in many highly
populated communities, still existing as volunteer organizations and often based at
the local public library. Some are still able to offer free Internet access through
grants and funding, but many now charge a nominal fee in order to continue
operating. Freenets normally have community discussion boards, volunteer
technical support help, and information about the community.

Broadband Internet connection services are extremely fast, direct connections to a
predetermined ISP, the option chosen by 68.24 percent of Americans in 2010
according to the U.S. Census Bureau. You can be connected to that contracted ISP
24 hours a day (this is also known as a static connection). There are a range of
broadband options, including DSL/ADSL, cable, and bi-directional satellite
broadcasting. Your community telephone company and other ISPs may also offer
something called DSL (Digital Subscriber Line) or ADSL (Asymmetrical Digital
Subscriber Line) as a broadband option. This is a fast connection through your
standard telephone line which allows you to simultaneously make phone calls while
online. ADSL is an advanced form of DSL. But, often the ISP just refers to an
ADSL connection as simply “DSL.” Essentially, ADSL technology makes use of the
fact that the telephone line can carry a wide range of frequencies, while the
frequencies used for voice transmission occupy a very small, lower range. ADSL
transmits data over the unused higher frequencies. The transmission speed ranges
from 128 Kbps to 4 Mbps (download speed) and 64 Kbps to 800 Kbps (upload speed).
This is very fast. Some ISPs even offer tiered pricing for additional increases in
Internet speed. If you select the DSL option, you will need to buy or rent
a DSL modem. ADSL is not available in all areas, since it requires upgraded
telephone lines, but, DSL is more widely available. Some local telephone companies
will also have bundled package pricing, including telephone service and Internet
access.

Today, most cable companies also offer Internet access as an add-on feature to their
comprehensive communications packages. Ordinarily, this service is offered
via two different types of connectivity: (1) light access and (2) accelerated high-
speed access. Light access is comparable in speed to a low-end DSL connection,
with speeds such as 128 Kbps download and 64 Kbps upload. High-speed cable
Internet access offers bandwidth capabilities such as 3-8 Mbps download
(depending upon the cable company’s system) and 128 Kbps to 1 Mbps upload.
Many cable companies will also have package pricing including television signal
service. As with the telephone and DSL options, you will need to have a cable
modem and an Ethernet Network Interface Card (NIC). The NIC is often already
built into newer computers, and so, you may see a female network port on the back
of your machine. All you need to do, really, is plug your cable modem line into the

14
computer, and you are ready to get online!

Satellite Internet access may be, in fact, an option for you if you have a clear view of
the sky (either the northern or southern sky, depending upon your global
positioning). Obviously, you will need to install or mount a satellite dish, and (once
again) connect your computer to a specialized satellite modem. As is the case with
the cable modem, the satellite modem connects to the NIC in your computer.
Download speeds are 500 Kbps to 1 Mbps, and upload speeds are 50 Kbps to 100
Kbps, thus placing satellite access somewhere between DSL and cable, depending
upon the circumstances and technology being utilized. This may be a very good
option if you live in a highly rural area and have no other broadband options at
your disposal. Early forms of satellite Internet service were uni-directional
(one-way): data was easily downloaded from satellite feeds, but uploads from the
computer were accomplished through slow dial-up services. Consequently, people
using satellite had to have two Internet services:
one for downloading (satellite) and one for uploading (telephone). Today, almost all
satellite Internet service providers offer two-way (bi-directional) service.

Wireless Internet access is becoming popular in public places such as coffee shops,
airports, libraries, and hotels. This technology, called Wi-Fi (pronounced W-eye
F-eye), is shorthand for wireless fidelity and refers to a high-frequency wireless
local area network (WLAN). You may also see this term written as WiFi. A computer is
said to be Wi-Fi enabled if it has a wireless network card installed. If a Wi-Fi
enabled computer is not already connected to a network via a NIC, it will constantly
look for hotspots, much like your cell phone will search for a signal. This signal is
sent from a wireless access point, which is connected to a computer, and typically a
network as well. Download speeds can range from 11 Mbps to 54 Mbps, much faster
than cable connections. Wi-Fi technology is a good solution for a home network and
would enable all computers connected to share the Internet connection and
resources such as a printer, scanner, and files. It eliminates the need to run wires
through the house and is powerful enough to provide fast connections over small
distances.

Conclusion

The future holds much promise for even greater expansion of wi-fi through the use
of White Space, which could well be a big-bang disruption. In November 2008, the
Federal Communications Commission issued an Order adopting “rules to allow
unlicensed radio transmitters to operate in the broadcast television spectrum at
locations where that spectrum is not being used by licensed services (this unused
TV spectrum is often termed “white spaces”). This action will make a significant
amount of spectrum available for new and innovative products and services,
including broadband data and other services for businesses and consumers.”
Essentially, this means a free, nationwide wi-fi network. While such players as
Microsoft, Google, and the Wireless Innovation Alliance support such a system,
traditional carriers such as AT&T, Intel, T-Mobile, and Qualcomm oppose it,
fearing “market disruption” (a euphemism for loss of income to the corporations
that now have financial control of the airwaves for voice and data). The policy
debate is being carried out in Washington, with multi-million-dollar corporate
lobbying campaigns but little concern for or influence from potential consumer
benefits, so the outcome is
uncertain.

So much has happened so quickly in the development of computer-mediated
communications over the past two decades that it really is hard for anyone to keep
up. And while we struggle with the creation of knowledge, we also struggle with
these new communication tools and technologies created to better manage that
knowledge, as well as the politics of regulation. Certainly, we are witnessing a
massive transformation occur before our very eyes. While people used to rely
almost exclusively on face-to-face meetings and interpersonal collaboration to best
convey information and make decisions, we are now mediating conversation
through technology over vast distances — and, in some cases, preferring to do so.
Beyond the revolution of our communication methods, however, lies a more
powerful argument: that our knowledge is being transformed by this accelerated,
global, digital communications context.

Globalization has always been an issue facing every nation-state. But, increasingly,
the Internet is challenging our assumptions and techniques of participating in the
ongoing worldwide dialogue that now surrounds us all. The Internet is the medium
that connects and binds the world, for better or worse. However, you will see in the
next chapter on communications satellites that not everyone is on equal footing for
comprehensive communications access. The next chapters of this book, thus, focus
on how the Internet is rapidly changing, and furthermore, what the Internet might
look like by year 2020. The sprawl of the Internet is not over; in fact, it has just
begun. Accordingly, globalization is not merely an ongoing process that is occurring;
it is, in every way, shape, and form, an eventuality. The strong mind will prepare for
it, and learn to embrace it, rather than sentimentally holding on to the outmoded
business symbols and systems of yesterday.
