
Internet

From Wikipedia, the free encyclopedia


This article is about the worldwide computer network. For other uses, see Internet (disambiguation).
Not to be confused with the World Wide Web.

An Opte Project visualization of routing paths through a portion of the Internet.


Internet users per 100 population members and GDP per capita for selected countries.
The Internet (portmanteau of interconnected network) is the global system of
interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices
worldwide. It is a network of networks that consists of private, public, academic, business, and
government networks of local to global scope, linked by a broad array of electronic, wireless, and
optical networking technologies. The Internet carries a vast range of information resources and
services, such as the inter-linked hypertext documents and applications of the World Wide
Web (WWW), electronic mail, telephony, and file sharing.
The origins of the Internet date back to research commissioned by the federal government of the
United States in the 1960s to build robust, fault-tolerant communication with computer networks.[1]
The primary precursor network, the ARPANET, initially served as a backbone for interconnection
of regional academic and military networks in the 1980s. The funding of the National Science
Foundation Network as a new backbone in the 1980s, as well as private funding for other
commercial extensions, led to worldwide participation in the development of new networking
technologies, and the merger of many networks.[2] The linking of commercial networks and
enterprises by the early 1990s marked the beginning of the transition to the modern Internet, [3] and
generated a sustained exponential growth as generations of institutional, personal,
and mobile computers were connected to the network. Although the Internet was widely used
by academia since the 1980s, commercialization incorporated its services and technologies into
virtually every aspect of modern life.
Most traditional communication media, including telephony, radio, television, paper mail and
newspapers are reshaped, redefined, or even bypassed by the Internet, giving birth to new services
such as email, Internet telephony, Internet television, online music, digital newspapers, and video
streaming websites. Newspaper, book, and other print publishing are adapting
to website technology, or are reshaped into blogging, web feeds and online news aggregators. The
Internet has enabled and accelerated new forms of personal interactions through instant
messaging, Internet forums, and social networking. Online shopping has grown exponentially both
for major retailers and small businesses and entrepreneurs, as it enables firms to extend their "brick
and mortar" presence to serve a larger market or even sell goods and services entirely
online. Business-to-business and financial services on the Internet affect supply chains across entire
industries.
The Internet has no single centralized governance in either technological implementation or policies
for access and usage; each constituent network sets its own policies.[4] The overreaching definitions
of the two principal name spaces in the Internet, the Internet Protocol address (IP address) space
and the Domain Name System (DNS), are directed by a maintainer organization, the Internet
Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and
standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a
non-profit organization of loosely affiliated international participants that anyone may associate with
by contributing technical expertise.[5] In November 2006, the Internet was included on USA Today's
list of New Seven Wonders.[6]

Terminology
The Internet Messenger by Buky Schwartz, located in Holon, Israel

See also: Capitalization of "Internet"

When the term Internet is used to refer to the specific global system of interconnected Internet
Protocol (IP) networks, the word is a proper noun[7] that should be written with an initial capital letter.
In common use and the media, it is often not capitalized, viz. the internet. Some guides specify that
the word should be capitalized when used as a noun, but not capitalized when used as an adjective.[8]
The Internet is also often referred to as the Net, as a short form of network. Historically, as early as
1849, the word internetted was used uncapitalized as an adjective,
meaning interconnected or interwoven.[9] The designers of early computer networks
used internet both as a noun and as a verb in shorthand form of internetwork or internetworking,
meaning interconnecting computer networks.[10]
The terms Internet and World Wide Web are often used interchangeably in everyday speech; it is
common to speak of "going on the Internet" when using a web browser to view web pages.
However, the World Wide Web or the Web is only one of a large number of Internet services. The
Web is a collection of interconnected documents (web pages) and other web resources, linked
by hyperlinks and URLs.[11] As another point of comparison, Hypertext Transfer Protocol, or HTTP, is
the language used on the Web for information transfer, yet it is just one of many languages or
protocols that can be used for communication on the Internet. [12] The term Interweb is
a portmanteau of Internet and World Wide Web typically used sarcastically to parody a technically
unsavvy user.

History
Main articles: History of the Internet and History of the World Wide Web

Research into packet switching, one of the fundamental Internet technologies, started in the early
1960s in the work of Paul Baran,[13] and packet-switched networks such as the NPL
network by Donald Davies, ARPANET, the Merit Network, CYCLADES, and Telenet were developed
in the late 1960s and early 1970s.[14] The ARPANET project led to the development
of protocols for internetworking, by which multiple separate networks could be joined into a network
of networks.[15] ARPANET development began with two network nodes which were interconnected
between the Network Measurement Center at the University of California, Los
Angeles (UCLA) Henry Samueli School of Engineering and Applied Science directed by Leonard
Kleinrock, and the NLS system at SRI International (SRI) by Douglas Engelbart in Menlo Park,
California, on 29 October 1969.[16] The third site was the Culler-Fried Interactive Mathematics Center
at the University of California, Santa Barbara, followed by the University of Utah Graphics
Department. In an early sign of future growth, fifteen sites were connected to the young ARPANET
by the end of 1971.[17][18] These early years were documented in the 1972 film Computer Networks:
The Heralds of Resource Sharing.
Early international collaborations for the ARPANET were rare. European developers were concerned
with developing the X.25 networks.[19] Notable exceptions were the Norwegian Seismic Array
(NORSAR) in June 1973, followed in 1973 by Sweden with satellite links to the Tanum Earth Station
and Peter T. Kirstein's research group in the United Kingdom, initially at the Institute of Computer
Science, University of London and later at University College London.[20][21][22] In December
1974, RFC 675 (Specification of Internet Transmission Control Program), by Vinton Cerf, Yogen
Dalal, and Carl Sunshine, used the term internet as a shorthand for internetworking and
later RFCs repeated this use.[23] Access to the ARPANET was expanded in 1981 when the National
Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet
Protocol Suite (TCP/IP) was standardized, which permitted worldwide proliferation of interconnected
networks. TCP/IP network access expanded again in 1986 when the National Science Foundation
Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first
at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s.[24] Commercial Internet service
providers (ISPs) emerged in the late 1980s and early 1990s. The ARPANET was decommissioned
in 1990.

T3 NSFNET Backbone, c. 1992.

The Internet rapidly expanded in Europe and Australia in the mid-to-late 1980s[25][26] and to Asia in the
late 1980s and early 1990s.[27] The beginning of dedicated transatlantic communication between the
NSFNET and networks in Europe was established with a low-speed satellite relay
between Princeton University and Stockholm, Sweden in December 1988.[28] Although other network
protocols such as UUCP had global reach well before this time, this marked the beginning of the
Internet as an intercontinental network.
Public commercial use of the Internet began in mid-1989 with the connection of MCI Mail
and Compuserve's email capabilities to the 500,000 users of the Internet.[29] Just months later, on 1
January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the
networks that would grow into the commercial Internet we know today. In March 1990, the first high-
speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell
University and CERN, allowing much more robust communications than were capable with satellites.[30]
Six months later Tim Berners-Lee would begin writing WorldWideWeb, the first web browser, after
two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools
necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9,[31] the HyperText Markup
Language (HTML), the first Web browser (which was also an HTML editor and could
access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN
httpd), the first web server,[32] and the first Web pages that described the project itself. In 1991
the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other
commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial
institution to offer online Internet banking services to all of its members in October 1994.[33] In
1996 OP Financial Group, also a cooperative bank, became the second online bank in the world and
the first in Europe.[34] By 1995, the Internet was fully commercialized in the U.S. when the NSFNet
was decommissioned, removing the last restrictions on use of the Internet to carry commercial
traffic.[35]

Worldwide Internet users

                                 2005         2010         2017 (a)
World population[36]             6.5 billion  6.9 billion  7.4 billion
Users worldwide                  16%          30%          48%
Users in the developing world    8%           21%          41.3%
Users in the developed world     51%          67%          81%

(a) Estimate.

Source: International Telecommunication Union.[37]

Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of
near instant communication by email, instant messaging, telephony (Voice over Internet Protocol or
VoIP), two-way interactive video calls, and the World Wide Web[38] with its discussion forums,
blogs, social networking, and online shopping sites. Increasing amounts of data are transmitted at
higher and higher speeds over fiber optic networks operating at 1-Gbit/s, 10-Gbit/s, or more. The
Internet continues to grow, driven by ever greater amounts of online information and knowledge,
commerce, entertainment and social networking.[39] During the late 1990s, it was estimated that traffic
on the public Internet grew by 100 percent per year, while the mean annual growth in the number of
Internet users was thought to be between 20% and 50%.[40] This growth is often attributed to the lack
of central administration, which allows organic growth of the network, as well as the non-proprietary
nature of the Internet protocols, which encourages vendor interoperability and prevents any one
company from exerting too much control over the network.[41] As of 31 March 2011, the estimated
total number of Internet users was 2.095 billion (30.2% of world population).[42] It is estimated that in
1993 the Internet carried only 1% of the information flowing through two-way telecommunication, by
2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information
was carried over the Internet.[43]

Governance
Main article: Internet governance

ICANN headquarters in the Playa Vista neighborhood of Los Angeles, California, United States.

The Internet is a global network that comprises many voluntarily interconnected autonomous
networks. It operates without a central governing body. The technical underpinning and
standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task
Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may
associate with by contributing technical expertise. To maintain interoperability, the principal name
spaces of the Internet are administered by the Internet Corporation for Assigned Names and
Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the
Internet technical, business, academic, and other non-commercial communities. ICANN coordinates
the assignment of unique identifiers for use on the Internet, including domain names, Internet
Protocol (IP) addresses, application port numbers in the transport protocols, and many other
parameters. Globally unified name spaces are essential for maintaining the global reach of the
Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the
global Internet.[44]
Regional Internet Registries (RIRs) allocate IP addresses:

 African Network Information Center (AfriNIC) for Africa
 American Registry for Internet Numbers (ARIN) for North America
 Asia-Pacific Network Information Centre (APNIC) for Asia and the Pacific region
 Latin American and Caribbean Internet Addresses Registry (LACNIC) for Latin America and
the Caribbean region
 Réseaux IP Européens – Network Coordination Centre (RIPE NCC) for Europe, the Middle
East, and Central Asia
The National Telecommunications and Information Administration, an agency of the United States
Department of Commerce, had final approval over changes to the DNS root zone until the IANA
stewardship transition on 1 October 2016.[45][46][47][48] The Internet Society (ISOC) was founded in 1992
with a mission to "assure the open development, evolution and use of the Internet for the benefit of
all people throughout the world".[49] Its members include individuals (anyone may join) as well as
corporations, organizations, governments, and universities. Among other activities ISOC provides an
administrative home for a number of less formally organized groups that are involved in developing
and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet
Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task
Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United
Nations-sponsored World Summit on the Information Society in Tunis established the Internet
Governance Forum (IGF) to discuss Internet-related issues.

Infrastructure
See also: List of countries by number of Internet users and List of countries by Internet connection
speeds

2007 map showing submarine fiber-optic telecommunication cables around the world.

The communications infrastructure of the Internet consists of its hardware components and a system
of software layers that control various aspects of the architecture.
Routing and service tiers

Packet routing across the Internet involves several tiers of Internet service providers.

Internet service providers (ISPs) establish the worldwide connectivity between individual networks at
various levels of scope. End-users, who only access the Internet when needed to perform a function
or obtain information, represent the bottom of the routing hierarchy. At the top of the routing
hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly
with each other via very high-speed fibre-optic cables, governed by peering agreements. Tier
2 and lower level networks buy Internet transit from other providers to reach at least some parties on
the global Internet, though they may also engage in peering. An ISP may use a single upstream
provider for connectivity, or implement multihoming to achieve redundancy and load
balancing. Internet exchange points are major traffic exchanges with physical connections to multiple
ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may
perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their
internal networks. Research networks tend to interconnect with large subnetworks such
as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.
Both the Internet IP routing structure and hypertext links of the World Wide Web are examples
of scale-free networks.[50] Computers and routers use routing
tables in their operating system to direct IP packets to the next-hop router or destination. Routing
tables are maintained by manual configuration or automatically by routing protocols. End-nodes
typically use a default route that points toward an ISP providing transit, while ISP routers use
the Border Gateway Protocol to establish the most efficient routing across the complex connections
of the global Internet.
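The longest-prefix-match rule that routers apply when consulting their routing tables can be sketched in a few lines of Python with the standard-library ipaddress module. The prefixes and next-hop addresses below are invented for illustration (drawn from reserved documentation ranges), not real routes:

```python
import ipaddress

# A toy routing table: (destination prefix, next hop).
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "198.51.100.1"),       # default route toward an upstream ISP
    (ipaddress.ip_network("203.0.113.0/24"), "203.0.113.254"), # a directly attached network
    (ipaddress.ip_network("203.0.113.128/25"), "192.0.2.7"),   # a more specific subnet
]

def next_hop(destination: str) -> str:
    """Return the next hop for a destination address: among all prefixes
    that contain the address, the longest (most specific) one wins."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    _, best_hop = max(matches, key=lambda m: m[0].prefixlen)
    return best_hop

print(next_hop("203.0.113.200"))  # matched by the /25 -> 192.0.2.7
print(next_hop("203.0.113.5"))    # matched only by the /24 -> 203.0.113.254
print(next_hop("8.8.8.8"))        # no specific route -> default 198.51.100.1
```

An end-node's table often contains little more than the default route shown first; ISP routers hold hundreds of thousands of prefixes learned via BGP, but apply the same matching rule.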
An estimated 70 percent of the world's Internet traffic passes through Ashburn, Virginia.[51][52][53][54]

Access
Common methods of Internet access by users include dial-up with a computer modem via telephone
circuits, broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular
telephone technology (e.g. 3G, 4G). The Internet may often be accessed from computers in libraries
and Internet cafes. Internet access points exist in many public places such as airport halls and
coffee shops. Various terms are used, such as public Internet kiosk, public access terminal,
and Web payphone. Many hotels also have public terminals that are usually fee-based. These
terminals are widely accessed for various usages, such as ticket booking, bank deposit, or online
payment. Wi-Fi provides wireless access to the Internet via local computer
networks. Hotspots providing such access include Wi-Fi cafes, where users need to bring their own
wireless devices such as a laptop or PDA. These services may be free to all, free to customers only,
or fee-based.
Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering
large city areas are in many cities, such as New York, London, Vienna, Toronto, San
Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from places,
such as a park bench.[55] Apart from Wi-Fi, there have been experiments with proprietary mobile
wireless networks like Ricochet, various high-speed data services over cellular phone networks, and
fixed wireless services. High-end mobile phones such as smartphones in general come with Internet
access through the phone network. Web browsers such as Opera are available on these advanced
handsets, which can also run a wide variety of other Internet software. Internet usage by mobile and
tablet devices exceeded desktop worldwide for the first time in October 2016.[56] An Internet access
provider and protocol matrix differentiates the methods used to get online.
Mobile communication

Number of mobile cellular subscriptions 2012–2016, World Trends in Freedom of Expression and Media
Development Global Report 2017/2018

The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of
individual users regularly connect to the Internet, up from 34% in 2012.[57] Mobile Internet connectivity
has played an important role in expanding access in recent years especially in Asia and the
Pacific and in Africa.[58] The number of unique mobile cellular subscriptions increased from 3.89
billion in 2012 to 4.83 billion in 2016, two-thirds of the world's population, with more than half of
subscriptions located in Asia and the Pacific. The number of subscriptions is predicted to rise to 5.69
billion users in 2020.[59] As of 2016, almost 60% of the world's population had access to
a 4G broadband cellular network, up from almost 50% in 2015 and 11% in 2012.[59] The
limits that users face on accessing information via mobile applications coincide with a broader
process of fragmentation of the Internet. Fragmentation restricts access to media content and tends
to affect poorest users the most.[58]
Zero-rating, the practice of Internet service providers allowing users free connectivity to access
specific content or applications without cost, has offered opportunities to surmount economic
hurdles, but has also been accused by its critics as creating a two-tiered Internet. To address the
issues with zero-rating, an alternative model has emerged in the concept of 'equal rating' and is
being tested in experiments by Mozilla and Orange in Africa. Equal rating prevents prioritization of
one type of content and zero-rates all content up to a specified data cap. According to a study
published by Chatham House, 15 out of 19 countries researched in Latin America had some kind of hybrid or
zero-rated product offered. Some countries in the region had a handful of plans to choose from
(across all mobile network operators) while others, such as Colombia, offered as many as 30 pre-
paid and 34 post-paid plans.[60]
A study of eight countries in the Global South found that zero-rated data plans exist in every country,
although there is a great range in the frequency with which they are offered and actually used in
each.[61] The study looked at the top three to five carriers by market share in Bangladesh, Colombia,
Ghana, India, Kenya, Nigeria, Peru and the Philippines. Across the 181 plans examined, 13 per
cent were offering zero-rated services. Another study, covering Ghana, Kenya, Nigeria and South
Africa, found Facebook's Free Basics and Wikipedia Zero to be the most commonly zero-rated
content.[62]
Protocols
Internet protocol suite

Application layer: BGP, DHCP, DNS, FTP, HTTP, HTTPS, IMAP, LDAP, MGCP, MQTT, NNTP, NTP, POP,
ONC/RPC, RTP, RTSP, RIP, SIP, SMTP, SNMP, SSH, Telnet, TLS/SSL, XMPP
Transport layer: TCP, UDP, DCCP, SCTP, RSVP
Internet layer: IP (IPv4, IPv6), ICMP, ICMPv6, ECN, IGMP, IPsec
Link layer: ARP, NDP, OSPF, tunnels (L2TP), PPP, MAC (Ethernet, Wi-Fi, DSL, ISDN, FDDI)

While the hardware components in the Internet infrastructure can often be used to support other
software systems, it is the design and the standardization process of the software that characterizes
the Internet and provides the foundation for its scalability and success. The responsibility for the
architectural design of the Internet software systems has been assumed by the Internet Engineering
Task Force (IETF).[63] The IETF conducts standard-setting work groups, open to any individual, about
the various aspects of Internet architecture. Resulting contributions and standards are published
as Request for Comments (RFC) documents on the IETF web site. The principal methods of
networking that enable the Internet are contained in specially designated RFCs that constitute
the Internet Standards. Other less rigorous documents are simply informative, experimental, or
historical, or document the best current practices (BCP) when implementing Internet technologies.
The Internet standards describe a framework known as the Internet protocol suite, or in short
as TCP/IP, based on the first two components. This is a model architecture that divides methods into
a layered system of protocols, originally documented in RFC 1122 and RFC 1123. The layers
correspond to the environment or scope in which their services operate. At the top is the application
layer, space for the application-specific networking methods used in software applications. For
example, a web browser program uses the client-server application model and a specific protocol of
interaction between servers and clients, while many file-sharing systems use a peer-to-
peer paradigm. Below this top layer, the transport layer connects applications on different hosts with
a logical channel through the network with appropriate data exchange methods.
Underlying these layers are the networking technologies that interconnect networks at their borders
and exchange traffic across them. The Internet layer enables computers to identify and locate each
other by Internet Protocol (IP) addresses, and routes their traffic via intermediate (transit) networks.[64]
At the bottom of the architecture is the link layer, which provides logical connectivity between
hosts on the same network link, such as a local area network (LAN) or a dial-up connection. The
model is designed to be independent of the underlying hardware used for the physical connections,
which the model does not concern itself with in any detail. Other models have been developed, such
as the OSI model, that attempt to be comprehensive in every aspect of communications. While many
similarities exist between the models, they are not compatible in the details of description or
implementation. Yet, TCP/IP protocols are usually included in the discussion of OSI networking.

As user data is processed through the protocol stack, each abstraction layer adds encapsulation information at
the sending host. Data is transmitted over the wire at the link level between hosts and routers. Encapsulation is
removed by the receiving host. Intermediate relays update link encapsulation at each hop, and inspect the IP
layer for routing purposes.
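The encapsulation process described in the caption can be illustrated schematically in Python. The headers below are toy formats invented for this sketch, not real TCP, IP, or Ethernet wire formats; each layer simply prepends its own information to the payload handed down from above:

```python
import struct

def app_layer(message: str) -> bytes:
    return message.encode("utf-8")                     # application data

def transport_layer(payload: bytes, src_port: int, dst_port: int) -> bytes:
    header = struct.pack("!HH", src_port, dst_port)    # toy 4-byte port header
    return header + payload

def internet_layer(segment: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    return src_ip + dst_ip + segment                   # toy 8-byte address header

def link_layer(packet: bytes) -> bytes:
    return b"\xAA\xAA" + packet + b"\x55\x55"          # toy frame delimiters

# Sending host: each layer wraps the one above it.
frame = link_layer(internet_layer(
    transport_layer(app_layer("GET /"), 49152, 80),
    bytes([203, 0, 113, 5]), bytes([93, 184, 216, 34])))

# Receiving host: strip the layers in reverse order.
packet = frame[2:-2]                                   # link layer removes framing
segment = packet[8:]                                   # internet layer removes addresses
src_port, dst_port = struct.unpack("!HH", segment[:4]) # transport layer reads ports
print(dst_port, segment[4:].decode())                  # 80 GET /
```

An intermediate router would stop after removing the link framing, read the destination address in the internet-layer header to pick the next hop, and re-frame the packet for the outgoing link, never touching the transport or application layers.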

The most prominent component of the Internet model is the Internet Protocol (IP), which provides
addressing systems, including IP addresses, for computers on the network. IP enables
internetworking and, in essence, establishes the Internet itself. Internet Protocol Version 4 (IPv4) is
the initial version used on the first generation of the Internet and is still in dominant use. It was
designed to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has
led to IPv4 address exhaustion, which entered its final stage in 2011,[65] when the global address
allocation pool was exhausted. A new protocol version, IPv6, was developed in the mid-1990s,
which provides vastly larger addressing capabilities and more efficient routing of Internet
traffic. IPv6 is currently in growing deployment around the world, since Internet address registries
(RIRs) began to urge all resource managers to plan rapid adoption and conversion.[66]
IPv6 is not directly interoperable by design with IPv4. In essence, it establishes a parallel version of
the Internet not directly accessible with IPv4 software. Thus, translation facilities must exist for
internetworking or nodes must have duplicate networking software for both networks. Essentially all
modern computer operating systems support both versions of the Internet Protocol. Network
infrastructure, however, has been lagging in this development. Aside from the complex array of
physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral
commercial contracts, e.g., peering agreements, and by technical specifications or protocols that
describe the exchange of data over the network. Indeed, the Internet is defined by its
interconnections and routing policies.
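The difference in address capacity between the two protocol versions, and their lack of direct interoperability, can be seen with Python's standard ipaddress module. The addresses below come from the reserved documentation ranges:

```python
import ipaddress

# IPv4: 32-bit addresses written in dotted-decimal notation.
v4 = ipaddress.ip_address("203.0.113.5")
print(v4.version, 2 ** 32)            # 4 4294967296 (≈4.3 billion addresses)

# IPv6: 128-bit addresses written in hexadecimal colon notation.
v6 = ipaddress.ip_address("2001:db8::1")
print(v6.version, 2 ** 128)           # a vastly larger address space

# The versions are not directly interoperable, but transition mechanisms
# define IPv4-mapped IPv6 addresses for dual-stack software.
mapped = ipaddress.ip_address("::ffff:203.0.113.5")
print(mapped.ipv4_mapped)             # 203.0.113.5
```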

Services
The Internet carries many network services, most prominently the World Wide Web, including social
media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file
sharing, and streaming media services.
The terms Internet and World Wide Web, or just the Web, are often used interchangeably, but the
two terms are not synonymous. The World Wide Web is a primary application program that billions
of people use on the Internet, and it has changed their lives immeasurably.[67][68]

World Wide Web

This NeXT Computer was used by Tim Berners-Lee at CERN and became the world's first Web server.

The World Wide Web is a global collection of documents, images, multimedia, applications, and
other resources, logically interrelated by hyperlinks and referenced with Uniform Resource
Identifiers (URIs), which provide a global system of named references. URIs symbolically identify
services, web servers, databases, and the documents and resources that they can
provide. Hypertext Transfer Protocol (HTTP) is the main access protocol of the World Wide
Web. Web services also use HTTP for communication between software systems for sharing and
exchanging business data and logistics.
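How a URI names a resource for access over HTTP can be seen by splitting one into its parts with Python's standard library. The URI below uses the reserved example.org documentation domain:

```python
from urllib.parse import urlsplit

# Each part of the URI tells the browser something different: which
# protocol to speak, which server to contact, which resource to request.
uri = "https://www.example.org/wiki/Internet?action=view#History"
parts = urlsplit(uri)

print(parts.scheme)    # https -> speak HTTP over TLS
print(parts.netloc)    # www.example.org -> the web server to contact
print(parts.path)      # /wiki/Internet -> the resource on that server
print(parts.query)     # action=view -> parameters sent with the request
print(parts.fragment)  # History -> resolved client-side by the browser
```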
World Wide Web browser software, such as Microsoft's Internet Explorer/Edge, Mozilla
Firefox, Opera, Apple's Safari, and Google Chrome, lets users navigate from one web page to
another via the hyperlinks embedded in the documents. These documents may also contain any
combination of computer data, including graphics, sounds, text, video, multimedia and interactive
content that runs while the user is interacting with the page. Client-side software can include
animations, games, office applications and scientific demonstrations. Through keyword-
driven Internet research using search engines like Yahoo!, Bing and Google, users worldwide have
easy, instant access to a vast and diverse amount of online information. Compared to printed media,
books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization
of information on a large scale.
The Web has enabled individuals and organizations to publish ideas and information to a potentially
large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or
building a website involves little initial cost and many cost-free services are available. However,
publishing and maintaining large, professional web sites with attractive, diverse and up-to-date
information is still a difficult and expensive proposition. Many individuals and some companies and
groups use web logs or blogs, which are largely used as easily updatable online diaries. Some
commercial organizations encourage staff to communicate advice in their areas of specialization in
the hope that visitors will be impressed by the expert knowledge and free information, and be
attracted to the corporation as a result.
Advertising on popular web pages can be lucrative, and e-commerce, which is the sale of products
and services directly via the Web, continues to grow. Online advertising is a form of marketing and
advertising which uses the Internet to deliver promotional marketing messages to consumers. It
includes email marketing, search engine marketing (SEM), social media marketing, many types
of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet
advertising revenues in the United States surpassed those of cable television and nearly exceeded
those of broadcast television.[69]:19 Many common online advertising practices are controversial and
increasingly subject to regulation.
When the Web developed in the 1990s, a typical web page was stored in completed form on a web
server, formatted in HTML, ready for transmission to a web browser in response to a request.
Over time, the process of creating and serving web pages has become dynamic, allowing flexible
design, layout, and content. Websites are often created using content management software with,
initially, very little content. Contributors to these systems, who may be paid staff, members of an
organization or the public, fill underlying databases with content using editing pages designed for
that purpose while casual visitors view and read this content in HTML form. There may or may not
be editorial, approval and security systems built into the process of taking newly entered content and
making it available to the target visitors.

Communication
Email is an important communications service available on the Internet. The concept of sending
electronic text messages between parties in a way analogous to mailing letters or memos predates
the creation of the Internet.[70][71] Pictures, documents, and other files are sent as email attachments.
Emails can be cc-ed to multiple email addresses.
Internet telephony is another common communications service made possible by the creation of the
Internet. VoIP stands for Voice-over-Internet Protocol, referring to the Internet Protocol (IP) that
underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications
for personal computers. In recent years many VoIP systems have become as easy to use and as
convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP
can be free or cost much less than a traditional telephone call, especially over long distances and
especially for those with always-on Internet connections such as cable or ADSL and mobile data.[72]
VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability
between different providers has improved and the ability to call or receive a call from a traditional
telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the
need for a personal computer.
Voice quality can still vary from call to call, but is often equal to and can even exceed that of
traditional calls. Remaining problems for VoIP include emergency telephone number dialing and
reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally
available. Older traditional phones with no "extra features" may be line-powered only and operate
during a power failure; VoIP cannot do so without a backup power source for the phone
equipment and the Internet access devices. VoIP has also become increasingly popular for gaming
applications, as a form of communication between players. Popular VoIP clients for gaming
include Ventrilo and Teamspeak. Modern video game consoles also offer VoIP chat features.

Data transfer
File sharing is an example of transferring large amounts of data across the Internet. A computer
file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a
website or File Transfer Protocol (FTP) server for easy download by others. It can be put into a
"shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to
many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these
cases, access to the file may be controlled by user authentication, the transit of the file over the
Internet may be obscured by encryption, and money may change hands for access to the file. The
price can be paid by the remote charging of funds from, for example, a credit card whose details are
also passed – usually fully encrypted – across the Internet. The origin and authenticity of the file
received may be checked by digital signatures or by MD5 or other message digests. These simple
features of the Internet, over a worldwide basis, are changing the production, sale, and distribution of
anything that can be reduced to a computer file for transmission. This includes all manner of print
publications, software products, news, music, film, video, photography, graphics and the other arts.
This in turn has caused seismic shifts in each of the existing industries that previously controlled the
production and distribution of these products.
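The digest check mentioned above can be illustrated with a short Python sketch. The file name and published digest value here are hypothetical; SHA-256 is shown alongside MD5 because MD5 is no longer considered collision-resistant:

```python
import hashlib

def file_digest(path, algorithm="sha256", chunk_size=65536):
    """Compute a message digest of a file, reading it in chunks
    so that large files need not fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# A downloaded file is accepted only if its digest matches the value
# the distributor published alongside it (hypothetical values):
# published = "9f86d081884c7d65..."
# assert file_digest("release.tar.gz") == published
```

If the computed digest differs from the published one, the file was corrupted or tampered with in transit; digital signatures extend the same idea by additionally authenticating who published the digest.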
Streaming media is the real-time delivery of digital media for the immediate consumption or
enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live
audio and video productions. They may also allow time-shift viewing or listening such as Preview,
Classic Clips and Listen Again features. These providers have been joined by a range of pure
Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected
device, such as a computer or something more specific, can be used to access on-line media in
much the same way as was previously possible only with a television or radio receiver. The range of
available types of content is much wider, from specialized technical webcasts to on-demand popular
multimedia services. Podcasting is a variation on this theme, where – usually audio – material is
downloaded and played back on a computer or shifted to a portable media player to be listened to
on the move. These techniques using simple equipment allow anybody, with little censorship or
licensing control, to broadcast audio-visual material worldwide.
Digital media streaming increases the demand for network bandwidth. For example, standard image
quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-
the-line HDX quality needs 4.5 Mbit/s for 1080p.[73]
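Those bitrates translate directly into data consumed per hour of viewing. A rough back-of-the-envelope conversion, using decimal units and ignoring protocol overhead:

```python
def gigabytes_per_hour(mbit_per_s):
    """Data consumed by one hour of streaming at a given bitrate,
    in decimal gigabytes (1 GB = 10^9 bytes)."""
    bits = mbit_per_s * 1_000_000 * 3600  # bits transferred in an hour
    return bits / 8 / 1_000_000_000       # bits -> bytes -> gigabytes

# At 1 Mbit/s (SD 480p), an hour of video moves about 0.45 GB;
# 2.5 Mbit/s (HD 720p) and 4.5 Mbit/s (HDX 1080p) scale proportionally.
for label, rate in [("SD 480p", 1.0), ("HD 720p", 2.5), ("HDX 1080p", 4.5)]:
    print(f"{label}: {gigabytes_per_hour(rate):.3f} GB/hour")
```

This is why streaming, unlike downloaded podcasts, requires sustained link capacity rather than just total transfer volume.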
Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-rate
video, the picture is usually either small or updates slowly. Internet users can watch animals
around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor
their own premises, live and in real time. Video chat rooms and video conferencing are also popular
with many uses being found for personal webcams, with and without two-way sound. YouTube was
founded on 15 February 2005 and is now the leading website for free streaming video with a vast
number of users. It uses an HTML5-based web player by default to stream and show video files.[74]
Registered users may upload an unlimited amount of video and build their own personal
profile. YouTube claims that its users watch hundreds of millions of videos and upload hundreds of
thousands daily.

Social impact
The Internet has enabled new forms of social interaction, activities, and social associations. This
phenomenon has given rise to the scholarly study of the sociology of the Internet.

Users
See also: Global Internet usage and English in computing
Internet users per 100 inhabitants

Source: International Telecommunications Union.[75][76]

Internet users by language[77]

Website content languages[78]

Internet usage has grown tremendously. From 2000 to 2009, the number of Internet users globally
rose from 394 million to 1.858 billion.[79] By 2010, 22 percent of the world's population had access to
computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2
billion videos viewed daily on YouTube.[80] In 2014 the world's Internet users surpassed 3 billion or
43.6 percent of the world population, but two-thirds of the users came from the richest countries, with
78.0 percent of Europe's population using the Internet, followed by 57.4 percent of the Americas.[81]
However, by 2018, this trend had shifted dramatically, with Asia alone accounting for 51% of all
Internet users: 2.2 billion out of the world's 4.3 billion Internet users came from that
region. The number of China's Internet users surpassed a major milestone in 2018, when the
country's Internet regulatory authority, the China Internet Network Information Centre, announced that
there were 802 million Internet users in China.[82] By 2019, China was the world's leading country in
terms of Internet users, with more than 800 million users, followed closely by India, with some 700
million users, with the USA a distant third with 275 million users. However, in terms of penetration, China
had a 38.4% penetration rate, compared to India's 40% and the USA's 80%.[83]
The prevalent language for communication via the Internet has been English. This may be a result of
the origin of the Internet, as well as the language's role as a lingua franca. Early computer systems
were limited to the characters in the American Standard Code for Information Interchange (ASCII), a
subset of the Latin alphabet.
After English (27%), the most requested languages on the World Wide Web are Chinese (25%),
Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian
(3% each), and Korean (2%).[77] By region, 42% of the world's Internet users are based in Asia, 24%
in Europe, 14% in North America, 10% in Latin America and the Caribbean taken together, 6% in
Africa, 3% in the Middle East and 1% in Australia/Oceania. [84] The Internet's technologies have
developed enough in recent years, especially in the use of Unicode, that good facilities are available
for development and communication in the world's widely used languages. However, some glitches
such as mojibake (incorrect display of some languages' characters) still remain.
In an American study in 2005, the percentage of men using the Internet was very slightly ahead of
the percentage of women, although this difference reversed in those under 30. Men logged on more
often, spent more time online, and were more likely to be broadband users, whereas women tended
to make more use of opportunities to communicate (such as email). Men were more likely to use the
Internet to pay bills, participate in auctions, and for recreation such as downloading music and
videos. Men and women were equally likely to use the Internet for shopping and banking. [85] More
recent studies indicate that in 2008, women significantly outnumbered men on most social
networking sites, such as Facebook and Myspace, although the ratios varied with age. [86] In addition,
women watched more streaming content, whereas men downloaded more. [87] In terms of blogs, men
were more likely to blog in the first place; among those who blog, men were more likely to have a
professional blog, whereas women were more likely to have a personal blog. [88]
Forecasts predict that 44% of the world's population will be users of the Internet by 2020. [89] Splitting
by country, in 2012 Iceland, Norway, Sweden, the Netherlands, and Denmark had the
highest Internet penetration by the number of users, with 93% or more of the population with access.[90]

Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net")[91] refers to
those actively involved in improving online communities, the Internet in general, or surrounding
political affairs and rights such as free speech;[92][93] Internaut refers to operators or technically highly
capable users of the Internet;[94][95] and digital citizen refers to a person using the Internet in order to
engage in society, politics, and government participation.[96]

Usage
Internet users in 2015 as a percentage of a country's population

Source: International Telecommunications Union.[90]

Main articles: Global digital divide and Digital divide

Fixed broadband Internet subscriptions in 2012


as a percentage of a country's population

Source: International Telecommunications Union.[97]

Mobile broadband Internet subscriptions in 2012


as a percentage of a country's population

Source: International Telecommunications Union.[98]

The Internet allows greater flexibility in working hours and location, especially with the spread of
unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous
means, including through mobile Internet devices. Mobile phones, datacards, handheld game
consoles and cellular routers allow users to connect to the Internet wirelessly. Within the limitations
imposed by small screens and other limited facilities of such pocket-sized devices, the services of
the Internet, including email and the web, may be available. Service providers may restrict the
services offered and mobile data charges may be significantly higher than other access methods.
Educational material at all levels from pre-school to post-doctoral is available from websites.
Examples range from CBeebies, through school and high-school revision guides and virtual
universities, to access to top-end scholarly literature through the likes of Google Scholar.
For distance education, help with homework and other assignments, self-guided learning, whiling
away spare time, or just looking up more detail on an interesting fact, it has never been easier for
people to access educational information at any level from anywhere. The Internet in general and
the World Wide Web in particular are important enablers of both formal and informal education.
Further, the Internet allows universities, in particular, researchers from the social and behavioral
sciences, to conduct research remotely via virtual laboratories, with profound changes in reach and
generalizability of findings as well as in communication between scientists and in the publication of
results.[99]
The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have
made collaborative work dramatically easier, with the help of collaborative software. Not only can a
group cheaply communicate and share ideas but the wide reach of the Internet allows such groups
more easily to form. An example of this is the free software movement, which has produced, among
other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Internet chat,
whether using an IRC chat room, an instant messaging system, or a social networking website,
allows colleagues to stay in touch in a very convenient way while working at their computers during
the day. Messages can be exchanged even more quickly and conveniently than via email. These
systems may allow files to be exchanged, drawings and images to be shared, or voice and video
contact between team members.
Content management systems allow collaborating teams to work on shared sets of documents
simultaneously without accidentally destroying each other's work. Business and project teams can
share calendars as well as documents and other information. Such collaboration occurs in a wide
variety of areas including scientific research, software development, conference planning, political
activism and creative writing. Social and political collaboration is also becoming more widespread as
both Internet access and computer literacy spread.
The Internet allows computer users to remotely access other computers and information stores
easily from any access point. Access may be with computer security, i.e. authentication and
encryption technologies, depending on the requirements. This is encouraging new ways of working
from home, collaboration and information sharing in many industries. An accountant sitting at home
can audit the books of a company based in another country, on a server situated in a third country
that is remotely maintained by IT specialists in a fourth. These accounts could have been created by
home-working bookkeepers, in other remote locations, based on information emailed to them from
offices all over the world. Some of these things were possible before the widespread use of the
Internet, but the cost of private leased lines would have made many of them infeasible in practice.
An office worker away from their desk, perhaps on the other side of the world on a business trip or a
holiday, can access their emails, access their data using cloud computing, or open a remote
desktop session into their office PC using a secure virtual private network (VPN) connection on the
Internet. This can give the worker complete access to all of their normal files and data, including
email and other applications, while away from the office. It has been referred to among system
administrators as the Virtual Private Nightmare, [100] because it extends the secure perimeter of a
corporate network into remote locations and its employees' homes.

Social networking and entertainment


See also: Social networking service §  Social impact

Many people use the World Wide Web to access news, weather and sports reports, to plan and
book vacations and to pursue their personal interests. People use chat, messaging and email to
make and stay in touch with friends worldwide, sometimes in the same way as some previously
had pen pals. Social networking websites such as Facebook, Twitter, and Myspace have created
new ways to socialize and interact. Users of these sites are able to add a wide variety of information
to pages, to pursue common interests, and to connect with others. It is also possible to find existing
acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster
commercial and business connections. YouTube and Flickr specialize in users' videos and
photographs. While social networking sites were initially for individuals only, today they are widely
used by businesses and other organizations to promote their brands, to market to their customers
and to encourage posts to "go viral". "Black hat" social media techniques are also employed by
some organizations, such as spam accounts and astroturfing.
A risk for both individuals and organizations writing posts (especially public posts) on social
networking websites, is that especially foolish or controversial posts occasionally lead to an
unexpected and possibly large-scale backlash on social media from other Internet users. This is also
a risk in relation to controversial offline behavior, if it is widely made known. The nature of this
backlash can range widely from counter-arguments and public mockery, through insults and hate
speech, to, in extreme cases, rape and death threats. The online disinhibition effect describes the
tendency of many individuals to behave more stridently or offensively online than they would in
person. A significant number of feminist women have been the target of various forms
of harassment in response to posts they have made on social media, and Twitter in particular has
been criticised in the past for not doing enough to aid victims of online abuse. [101]
For organizations, such a backlash can cause overall brand damage, especially if reported by the
media. However, this is not always the case, as any brand damage in the eyes of people with an
opposing opinion to that presented by the organization could sometimes be outweighed by
strengthening the brand in the eyes of others. Furthermore, if an organization or individual gives in to
demands that others perceive as wrong-headed, that can then provoke a counter-backlash.
Some websites, such as Reddit, have rules forbidding the posting of personal information of
individuals (also known as doxxing), due to concerns about such postings leading to mobs of large
numbers of Internet users directing harassment at the specific individuals thereby identified. In
particular, the Reddit rule forbidding the posting of personal information is widely understood to imply
that all identifying photos and names must be censored in Facebook screenshots posted to Reddit.
However, the interpretation of this rule in relation to public Twitter posts is less clear, and in any
case, like-minded people online have many other ways they can use to direct each other's attention
to public social media posts they disagree with.
Children also face dangers online such as cyberbullying and approaches by sexual predators, who
sometimes pose as children themselves. Children may also encounter material which they may find
upsetting, or material which their parents consider to be not age-appropriate. Due to naivety, they
may also post personal information about themselves online, which could put them or their families
at risk unless warned not to do so. Many parents choose to enable Internet filtering, and/or supervise
their children's online activities, in an attempt to protect their children from inappropriate material on
the Internet. The most popular social networking websites, such as Facebook and Twitter, commonly
forbid users under the age of 13. However, these policies are typically trivial to circumvent by
registering an account with a false birth date, and a significant number of children aged under 13 join
such sites anyway. Social networking sites for younger children, which claim to provide better levels
of protection for children, also exist.[102]
The Internet has been a major outlet for leisure activity since its inception, with entertaining social
experiments such as MUDs and MOOs being conducted on university servers, and humor-
related Usenet groups receiving much traffic.[citation needed] Many Internet forums have sections devoted to
games and funny videos.[citation needed] The Internet pornography and online gambling industries have
taken advantage of the World Wide Web, and often provide a significant source of advertising
revenue for other websites.[103] Although many governments have attempted to restrict both
industries' use of the Internet, in general, this has failed to stop their widespread popularity. [104]
Another area of leisure activity on the Internet is multiplayer gaming.[105] This form of recreation
creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer
games. These range from MMORPG to first-person shooters, from role-playing video
games to online gambling. While online gaming has been around since the 1970s, modern modes of
online gaming began with subscription services such as GameSpy and MPlayer.[106] Non-subscribers
were limited to certain types of game play or certain games. Many people use the Internet to access
and download music, movies and other works for their enjoyment and relaxation. Free and fee-
based services exist for all of these activities, using centralized servers and distributed peer-to-peer
technologies. Some of these sources exercise more care with respect to the original artists'
copyrights than others.
Internet usage has been correlated to users' loneliness. [107] Lonely people tend to use the Internet as
an outlet for their feelings and to share their stories with others, such as in the "I am lonely will
anyone speak to me" thread.
Cybersectarianism is a new organizational form which involves: "highly dispersed small groups of
practitioners that may remain largely anonymous within the larger social context and operate in
relative secrecy, while still linked remotely to a larger network of believers who share a set of
practices and texts, and often a common devotion to a particular leader. Overseas supporters
provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance,
and share information on the internal situation with outsiders. Collectively, members and
practitioners of such sects construct viable virtual communities of faith, exchanging personal
testimonies and engaging in the collective study via email, on-line chat rooms, and web-based
message boards."[108] In particular, the British government has raised concerns about the prospect of
young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being
persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially
committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.
Cyberslacking can become a drain on corporate resources; the average UK employee spent 57
minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business
Services.[109] Internet addiction disorder is excessive computer use that interferes with daily
life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance
improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. [110]

Electronic business
Electronic business (e-business) encompasses business processes spanning the entire value chain:
purchasing, supply chain management, marketing, sales, customer service, and business
relationship. E-commerce seeks to add revenue streams using the Internet to build and enhance
relationships with clients and partners. According to International Data Corporation, the size of
worldwide e-commerce, when global business-to-business and -consumer transactions are
combined, equates to $16 trillion for 2013. A report by Oxford Economics adds those two together to
estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global
sales.[111]
While much has been written of the economic advantages of Internet-enabled commerce, there is
also evidence that some aspects of the Internet such as maps and location-aware services may
serve to reinforce economic inequality and the digital divide.[112] Electronic commerce may be
responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting
in increases in income inequality.[113][114][115]
Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has
recently focused on the economic effects of consolidation from Internet businesses. Keen cites a
2013 Institute for Local Self-Reliance report saying brick-and-mortar retailers employ 47 people for
every $10 million in sales while Amazon employs only 14. Similarly, the 700-employee room rental
start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which
employs 152,000 people. At that time, transportation network company Uber employed 1,000 full-
time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a
Car and The Hertz Corporation combined, which together employed almost 60,000 people. [116]
Telecommuting
Telecommuting is the performance of work within a traditional worker-employer relationship,
facilitated by tools such as groupware, virtual private networks, conference
calling, videoconferencing, and voice over IP (VoIP), so that work may be performed from any
location, most conveniently the worker's home. It can be efficient and useful for companies as it
allows workers to communicate over long distances, saving significant amounts of travel time and
cost. As broadband Internet connections become commonplace, more workers have adequate
bandwidth at home to use these tools to link their home to their corporate intranet and internal
communication networks.

Collaborative publishing
Wikis have also been used in the academic community for sharing and dissemination of information
across institutional and international boundaries. [117] In those settings, they have been found useful
for collaboration on grant writing, strategic planning, departmental documentation, and committee
work.[118] The United States Patent and Trademark Office uses a wiki to allow the public to collaborate
on finding prior art relevant to examination of pending patent applications. Queens, New York has
used a wiki to allow citizens to collaborate on the design and planning of a local park. [119] The English
Wikipedia has the largest user base among wikis on the World Wide Web[120] and ranks in the top 10
among all Web sites in terms of traffic.[121]

Politics and political revolutions


See also: Internet censorship, Mass surveillance, and Social media use in politics

Banner in Bangkok during the 2014 Thai coup d'état, informing the Thai public that 'like' or 'share' activities on
social media could result in imprisonment (observed June 30, 2014).

The Internet has achieved new relevance as a political tool. The presidential campaign of Howard
Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet.
Many political groups use the Internet as a new method of organizing to carry out their
missions, giving rise to Internet activism, most notably practiced by rebels in the Arab Spring.[122][123]
The New York Times suggested that social media websites, such as Facebook and Twitter,
helped people organize the political revolutions in Egypt, by helping activists organize protests,
communicate grievances, and disseminate information.[124]
Many have understood the Internet as an extension of the Habermasian notion of the public sphere,
observing how network communication technologies provide something like a global civic forum.
However, incidents of politically motivated Internet censorship have now been recorded in many
countries, including western democracies.[citation needed]
Philanthropy
The spread of low-cost Internet access in developing countries has opened up new possibilities
for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects
for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors
to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is
the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering
the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local
intermediary microfinance organizations which post stories and updates on behalf of the borrowers.
Lenders can contribute as little as $25 to loans of their choice, and receive their money back as
borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed
before being funded by lenders and borrowers do not communicate with lenders themselves. [125][126]
However, the recent spread of low-cost Internet access in developing countries has made genuine
international person-to-person philanthropy increasingly feasible. In 2009, the US-based
nonprofit Zidisha tapped into this trend to offer the first person-to-person microfinance platform to
link lenders and borrowers across international borders without intermediaries. Members can fund
loans for as little as a dollar, which the borrowers then use to develop business activities that
improve their families' incomes while repaying loans to the members with interest. Borrowers access
the Internet via public cybercafes, donated laptops in village schools, and even smart phones, then
create their own profile pages through which they share photos and information about themselves
and their businesses. As they repay their loans, borrowers continue to share updates and dialogue
with lenders via their profile pages. This direct web-based connection allows members themselves to
take on many of the communication and recording tasks traditionally performed by local
organizations, bypassing geographic barriers and dramatically reducing the cost of microfinance
services to the entrepreneurs.[127]

Security
Main article: Internet security

Internet resources, hardware, and software components are the target of criminal or malicious
attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or
access private information.

Malware
Malware is malicious software used and distributed via the Internet. It includes computer
viruses which are copied with the help of humans, computer worms which copy themselves
automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports
on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists
have also speculated about the possibilities of cyber warfare using similar methods on a large scale.
[citation needed]

Surveillance
Main article: Computer and network surveillance

See also: Signals intelligence and Mass surveillance

The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet.[128]
In the United States for example, under the Communications Assistance For Law Enforcement
Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are
required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. [129]
[130][131]
 Packet capture is the monitoring of data traffic on a computer network. Computers
communicate over the Internet by breaking up messages (emails, images, videos, web pages, files,
etc.) into small chunks called "packets", which are routed through a network of computers, until they
reach their destination, where they are reassembled into a complete "message" again. A packet
capture appliance intercepts these packets as they travel through the network so that their
contents can be examined using other programs. A packet capture is an information-gathering tool,
but not an analysis tool: it gathers "messages" but does not analyze them to determine what they
mean. Other programs are needed to perform traffic analysis and sift through intercepted data
looking for important/useful information. Under the Communications Assistance For Law
Enforcement Act all U.S. telecommunications providers are required to install packet sniffing
technology to allow Federal law enforcement and intelligence agencies to intercept all of their
customers' broadband Internet and voice over Internet protocol (VoIP) traffic.[132]
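The packet structure described above can be illustrated with a short sketch: a minimal IPv4 header parser of the kind a capture tool applies to intercepted traffic, written with only Python's standard library. The sample header bytes are hand-crafted for illustration, not a real capture.

```python
import struct
import socket

def parse_ipv4_header(data: bytes) -> dict:
    """Parse the fixed 20-byte portion of an IPv4 packet header."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,  # IHL is counted in 32-bit words
        "ttl": ttl,
        "protocol": proto,                       # e.g. 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-crafted sample header (illustrative, not captured traffic):
# version 4, IHL 5, TTL 64, protocol TCP, 192.0.2.1 -> 198.51.100.7
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.0.2.1"),
                     socket.inet_aton("198.51.100.7"))
```

Real capture software obtains such bytes from the network interface (e.g. via libpcap) rather than constructing them; the parsing step, however, is essentially as shown.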
The large amount of data gathered from packet capturing requires surveillance software that filters
and reports relevant information, such as the use of certain words or phrases, the access of certain
types of web sites, or communicating via email or chat with certain parties. [133] Agencies, such as
the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to
develop, purchase, implement, and operate systems for interception and analysis of data. [134] Similar
systems are operated by the Iranian secret police to identify and suppress dissidents. The required
hardware and software were allegedly installed by the German company Siemens AG and the
Finnish company Nokia.[135]

Censorship

Internet censorship and surveillance by country (2018).[136][137][138][139][140] (Map legend:
pervasive, substantial, selective, little or none, unclassified / no data.)

Main articles: Internet censorship and Internet freedom

See also: Culture of fear and Great Firewall

Some governments, such as those of Burma, Iran, North Korea, mainland China, Saudi
Arabia, and the United Arab Emirates, restrict access to content on the Internet within their
territories, especially to political and religious content, with domain name and keyword filters.[141]
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed
to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to
contain only known child pornography sites, the content of the list is secret. [142] Many countries,
including the United States, have enacted laws against the possession or distribution of certain
material, such as child pornography, via the Internet, but do not mandate filter software. Many free or
commercially available software programs, called content-control software, are available to users to
block offensive websites on individual computers or networks, in order to limit children's access to
pornographic material or depictions of violence.

Performance


As the Internet is a heterogeneous network, its physical characteristics, including for example
the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on
its large-scale organization.[143]

Outages
An Internet blackout or outage can be caused by local signalling interruptions. Disruptions
of submarine communications cables may cause blackouts or slowdowns to large areas, such as in
the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small
number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging
for scrap metal severed most connectivity for the nation of Armenia. [144] Internet blackouts affecting
almost entire countries can be achieved by governments as a form of Internet censorship, as in the
blockage of the Internet in Egypt, whereby approximately 93%[145] of networks were without access in
2011 in an attempt to stop mobilization for anti-government protests.[146]

Energy use
In 2011, researchers estimated the energy used by the Internet to be between 170 and 307 GW,
less than two percent of the energy used by humanity. This estimate included the energy needed to
build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and
100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi
transmitters and cloud storage devices use when transmitting Internet traffic.[147][148] According to
a study published in 2018, nearly 4% of global CO2 emissions could be attributed to global data
transfer and the necessary infrastructure.[149] The study also said that online video streaming alone
accounted for 60% of this data transfer and therefore contributed over 300 million tons of CO2
emissions per year.[150]
IP address
An Internet Protocol address (IP address) is a numerical label assigned to each device connected
to a computer network that uses the Internet Protocol for communication.[1][2] An IP address serves
two main functions: host or network interface identification and location addressing.
Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number.[2] However, because of
the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP (IPv6),
using 128 bits for the IP address, was developed in 1995[3] and standardized in December 1998.[4]
In July 2017, a final definition of the protocol was published.[5] IPv6 deployment has been ongoing
since the mid-2000s.
IP addresses are usually written and displayed in human-readable notations, such
as 172.16.254.1 in IPv4, and 2001:db8:0:1234:0:567:8:1 in IPv6. The size of the routing prefix of the
address is designated in CIDR notation by suffixing the address with the number of significant bits,
e.g., 192.168.1.15/24, which is equivalent to the historically used subnet mask 255.255.255.0.
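These notations can be parsed and converted with Python's standard-library `ipaddress` module; a brief sketch using the example addresses from the text:

```python
import ipaddress

# Human-readable notations, parsed into address objects.
v4 = ipaddress.ip_address("172.16.254.1")
v6 = ipaddress.ip_address("2001:db8:0:1234:0:567:8:1")

# A /24 routing prefix in CIDR notation is equivalent to the
# historically used subnet mask 255.255.255.0.
iface = ipaddress.ip_interface("192.168.1.15/24")
```

The `ip_interface` object exposes both views: `iface.with_prefixlen` gives the CIDR form and `iface.netmask` the dotted mask.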
The IP address space is managed globally by the Internet Assigned Numbers Authority (IANA), and
by five regional Internet registries (RIRs) responsible in their designated territories for assignment to
end users and local Internet registries, such as Internet service providers. IPv4 addresses have
been distributed by IANA to the RIRs in blocks of approximately 16.8 million addresses each. Each
ISP or private network administrator assigns an IP address to each device connected to its network.
Such assignments may be on a static (fixed or permanent) or dynamic basis, depending on its
software and practices.

Function
An IP address serves two principal functions. It identifies the host, or more specifically its network
interface, and it provides the location of the host in the network, and thus the capability of
establishing a path to that host. Its role has been characterized as follows: "A name indicates what
we seek. An address indicates where it is. A route indicates how to get there." [2] The header of
each IP packet contains the IP address of the sending host, and that of the destination host.

IP versions
Two versions of the Internet Protocol are in common use in the Internet today. The original version
of the Internet Protocol that was first deployed in 1983 in the ARPANET, the predecessor of the
Internet, is Internet Protocol version 4 (IPv4).
The rapid exhaustion of IPv4 address space available for assignment to Internet service
providers and end-user organizations by the early 1990s prompted the Internet Engineering Task
Force (IETF) to explore new technologies to expand the addressing capability in the Internet. The
result was a redesign of the Internet Protocol which became eventually known as Internet Protocol
Version 6 (IPv6) in 1995.[3][4][5] IPv6 technology was in various testing stages until the mid-2000s,
when commercial production deployment commenced.
Today, these two versions of the Internet Protocol are in simultaneous use. Among other technical
changes, each version defines the format of addresses differently. Because of the historical
prevalence of IPv4, the generic term IP address typically still refers to the addresses defined by
IPv4. The gap in version sequence between IPv4 and IPv6 resulted from the assignment of version
5 to the experimental Internet Stream Protocol in 1979, which, however, was never referred to as
IPv5.

Subnetworks
IP networks may be divided into subnetworks in both IPv4 and IPv6. For this purpose, an IP address
is recognized as consisting of two parts: the network prefix in the high-order bits and the remaining
bits called the rest field, host identifier, or interface identifier (IPv6), used for host numbering within a
network.[1] The subnet mask or CIDR notation determines how the IP address is divided into network
and host parts.
The term subnet mask is only used within IPv4. Both IP versions however use the CIDR concept and
notation. In this, the IP address is followed by a slash and the number (in decimal) of bits used for
the network part, also called the routing prefix. For example, an IPv4 address and its subnet mask
may be 192.0.2.1 and 255.255.255.0, respectively. The CIDR notation for the same IP address and
subnet is 192.0.2.1/24, because the first 24 bits of the IP address indicate the network and subnet.
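The equivalence between mask and prefix-length notation can be verified with Python's standard-library `ipaddress` module; a brief sketch with the example values above:

```python
import ipaddress

iface = ipaddress.ip_interface("192.0.2.1/24")
net = iface.network                               # network part: 192.0.2.0/24
host_bits = net.max_prefixlen - net.prefixlen     # 32 - 24 = 8 bits for hosts

# The module also accepts the subnet-mask form directly:
mask_form = ipaddress.ip_network("192.0.2.0/255.255.255.0")
```

Both notations denote the same division into a 24-bit network part and an 8-bit host part.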

IPv4 addresses
Main article: IPv4 §  Addressing

Decomposition of an IPv4 address from dot-decimal notation to its binary value.

An IPv4 address has a size of 32 bits, which limits the address space to 4,294,967,296 (2^32)
addresses. Of this number, some addresses are reserved for special purposes such as private
networks (~18 million addresses) and multicast addressing (~270 million addresses).
IPv4 addresses are usually represented in dot-decimal notation, consisting of four decimal numbers,
each ranging from 0 to 255, separated by dots, e.g., 172.16.254.1. Each part represents a group of 8
bits (an octet) of the address. In some cases of technical writing, [specify] IPv4 addresses may be
presented in various hexadecimal, octal, or binary representations.

Subnetting history
In the early stages of development of the Internet Protocol, the network number was always the
highest order octet (most significant eight bits). Because this method allowed for only 256 networks,
it soon proved inadequate as additional networks developed that were independent of the existing
networks already designated by a network number. In 1981, the addressing specification was
revised with the introduction of classful network architecture.[2]
Classful network design allowed for a larger number of individual network assignments and fine-
grained subnetwork design. The first three bits of the most significant octet of an IP address were
defined as the class of the address. Three classes (A, B, and C) were defined for
universal unicast addressing. Depending on the class derived, the network identification was based
on octet boundary segments of the entire address. Each class used successively additional octets in
the network identifier, thus reducing the possible number of hosts in the higher order classes
(B and C). The following table gives an overview of this now obsolete system.

Historical classful network architecture

Class | Leading bits | Network field bits | Rest field bits | Number of networks | Addresses per network | Start address | End address
A | 0 | 8 | 24 | 128 (2^7) | 16,777,216 (2^24) | 0.0.0.0 | 127.255.255.255
B | 10 | 16 | 16 | 16,384 (2^14) | 65,536 (2^16) | 128.0.0.0 | 191.255.255.255
C | 110 | 24 | 8 | 2,097,152 (2^21) | 256 (2^8) | 192.0.0.0 | 223.255.255.255

Classful network design served its purpose in the startup stage of the Internet, but it
lacked scalability in the face of the rapid expansion of networking in the 1990s. The class system of
the address space was replaced with Classless Inter-Domain Routing (CIDR) in 1993. CIDR is
based on variable-length subnet masking (VLSM) to allow allocation and routing based on arbitrary-
length prefixes. Today, remnants of classful network concepts function only in a limited scope as the
default configuration parameters of some network software and hardware components (e.g.
netmask), and in the technical jargon used in network administrators' discussions.
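As a sketch of the now-obsolete rule, the historical class of an address can be derived from the leading bits of its most significant octet (illustrative Python only; modern routing is classless):

```python
def classful_class(first_octet: int) -> str:
    """Return the historical (obsolete) address class from the leading bits
    of the most significant octet."""
    if first_octet >> 7 == 0b0:       # leading bit 0
        return "A"
    if first_octet >> 6 == 0b10:      # leading bits 10
        return "B"
    if first_octet >> 5 == 0b110:     # leading bits 110
        return "C"
    if first_octet >> 4 == 0b1110:    # leading bits 1110 (multicast)
        return "D"
    return "E"                        # leading bits 1111 (reserved)
```

For example, 10.x.x.x falls in class A, 172.16.x.x in class B, and 192.168.x.x in class C, which is why the private ranges below are described in classful terms.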

Private addresses
Early network design, when global end-to-end connectivity was envisioned for communications with
all Internet hosts, intended that IP addresses be globally unique. However, it was found that this was
not always necessary as private networks developed and public address space needed to be
conserved.
Computers not connected to the Internet, such as factory machines that communicate only with each
other via TCP/IP, need not have globally unique IP addresses. Today, such private networks are
widely used and typically connect to the Internet with network address translation (NAT), when
needed.
Three non-overlapping ranges of IPv4 addresses for private networks are reserved. [6] These
addresses are not routed on the Internet and thus their use need not be coordinated with an IP
address registry. Any user may use any of the reserved blocks. Typically, a network administrator
will divide a block into subnets; for example, many home routers automatically use a default address
range of 192.168.0.0 through 192.168.0.255 (192.168.0.0/24).

Reserved private IPv4 network ranges[6]

Name | CIDR block | Address range | Number of addresses | Classful description
24-bit block | 10.0.0.0/8 | 10.0.0.0 – 10.255.255.255 | 16,777,216 | Single Class A
20-bit block | 172.16.0.0/12 | 172.16.0.0 – 172.31.255.255 | 1,048,576 | Contiguous range of 16 Class B blocks
16-bit block | 192.168.0.0/16 | 192.168.0.0 – 192.168.255.255 | 65,536 | Contiguous range of 256 Class C blocks
IPv6 addresses
Main article: IPv6 address

Decomposition of an IPv6 address from hexadecimal representation to its binary value.

In IPv6, the address size was increased from 32 bits in IPv4 to 128 bits, thus providing up to
2^128 (approximately 3.403×10^38) addresses. This is deemed sufficient for the foreseeable future.
The intent of the new design was not to provide just a sufficient quantity of addresses, but also
to redesign routing in the Internet by allowing more efficient aggregation of subnetwork routing
prefixes. This resulted in slower growth of routing tables in routers. The smallest possible
individual allocation is a subnet for 2^64 hosts, which is the square of the size of the entire IPv4
Internet. At these levels, actual address utilization ratios will be small on any IPv6 network
segment. The new design also provides the opportunity to separate the addressing
infrastructure of a network segment, i.e. the local administration of the segment's available
space, from the addressing prefix used to route traffic to and from external networks. IPv6 has
facilities that automatically change the routing prefix of entire networks, should the global
connectivity or the routing policy change, without requiring internal redesign or manual
renumbering.
The large number of IPv6 addresses allows large blocks to be assigned for specific purposes
and, where appropriate, to be aggregated for efficient routing. With a large address space, there
is no need to have complex address conservation methods as used in CIDR.
All modern desktop and enterprise server operating systems include native support for the IPv6
protocol, but it is not yet widely deployed in other devices, such as residential networking
routers, voice over IP (VoIP) and multimedia equipment, and some networking hardware.

Private addresses
Just as IPv4 reserves addresses for private networks, blocks of addresses are set aside in IPv6.
In IPv6, these are referred to as unique local addresses (ULAs). The routing prefix fc00::/7 is
reserved for this block,[7] which is divided into two /8 blocks with different implied policies. The
addresses include a 40-bit pseudorandom number that minimizes the risk of address collisions if
sites merge or packets are misrouted.
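A sketch of how such a unique local prefix might be constructed: RFC 4193 derives the 40-bit global ID from a hash of the current time and a machine identifier; the version below substitutes plain random bits, a simplification for illustration only.

```python
import random
import ipaddress

def make_ula_prefix() -> ipaddress.IPv6Network:
    """Sketch of a ULA /48 prefix: the fd00::/8 half of fc00::/7 plus a
    40-bit pseudorandom global ID. (RFC 4193 specifies deriving the ID
    from a hash of time and a machine identifier; plain random bits are
    used here as a simplification.)"""
    global_id = random.getrandbits(40)
    # Layout: 8-bit prefix fd, 40-bit global ID, then 80 zero bits.
    prefix_int = (0xFD << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

ula = make_ula_prefix()
```

The pseudorandom ID is what makes collisions unlikely if two independently numbered sites later merge.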
Early practices used a different block for this purpose (fec0::), dubbed site-local addresses.[8]
However, the definition of what constituted a site remained unclear and the poorly defined
addressing policy created ambiguities for routing. This address type was abandoned and must
not be used in new systems.[9]
Addresses starting with fe80::, called link-local addresses, are assigned to interfaces for
communication on the attached link. The addresses are automatically generated by the
operating system for each network interface. This provides instant and automatic communication
between all IPv6 hosts on a link. This feature is used in the lower layers of IPv6 network
administration, such as for the Neighbor Discovery Protocol.
Private and link-local address prefixes may not be routed on the public Internet.

IP address assignment
IP addresses are assigned to a host either dynamically as they join the network, or persistently
by configuration of the host hardware or software. Persistent configuration is also known as
using a static IP address. In contrast, when a computer's IP address is assigned each time it
restarts, this is known as using a dynamic IP address.
Dynamic IP addresses are assigned by the network using Dynamic Host Configuration
Protocol (DHCP). DHCP is the most frequently used technology for assigning addresses. It
avoids the administrative burden of assigning specific static addresses to each device on a
network. It also allows devices to share the limited address space on a network if only some of
them are online at a particular time. Typically, dynamic IP configuration is enabled by default in
modern desktop operating systems.
The address assigned with DHCP is associated with a lease and usually has an expiration
period. If the lease is not renewed by the host before expiry, the address may be assigned to
another device. Some DHCP implementations attempt to reassign the same IP address to a
host (based on its MAC address) each time it joins the network. A network administrator may
configure DHCP by allocating specific IP addresses based on MAC address.
DHCP is not the only technology used to assign IP addresses dynamically. Bootstrap Protocol is
a similar protocol and predecessor to DHCP. Dialup and some broadband networks use
dynamic address features of the Point-to-Point Protocol.
Computers and equipment used for the network infrastructure, such as routers and mail servers,
are typically configured with static addressing.
In the absence or failure of static or dynamic address configurations, an operating system may
assign a link-local address to a host using stateless address autoconfiguration.

Sticky dynamic IP address


A sticky dynamic IP address is an informal term used by cable and DSL Internet access
subscribers to describe a dynamically assigned IP address which seldom changes. The
addresses are usually assigned with DHCP. Since the modems are usually powered on for
extended periods of time, the address leases are usually set to long periods and simply
renewed. If a modem is turned off and powered up again before the next expiration of the
address lease, it often receives the same IP address.

Address autoconfiguration
Address block 169.254.0.0/16 is defined for the special use of link-local addressing for IPv4
networks.[10] In IPv6, every interface, whether using static or dynamic address assignments, also
receives a link-local address automatically in the block fe80::/10.[10] These addresses are only
valid on the link, such as a local network segment or point-to-point connection, to which a host is
connected. These addresses are not routable and, like private addresses, cannot be the source
or destination of packets traversing the Internet.
When the link-local IPv4 address block was reserved, no standards existed for mechanisms of
address autoconfiguration. Filling the void, Microsoft developed a protocol called Automatic
Private IP Addressing (APIPA), whose first public implementation appeared in Windows 98.[11]
APIPA has been deployed on millions of machines and became a de facto standard in the
industry. In May 2005, the IETF defined a formal standard for it.[12]
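Both special-use blocks described above are recognized by Python's standard-library `ipaddress` module; a brief sketch:

```python
import ipaddress

# IPv4 link-local (the APIPA block) and IPv6 link-local addresses.
apipa = ipaddress.ip_address("169.254.10.20")
ll_v6 = ipaddress.ip_address("fe80::1")

# Both report as link-local and fall inside their reserved blocks.
in_block = apipa in ipaddress.ip_network("169.254.0.0/16")
```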

Addressing conflicts
An IP address conflict occurs when two devices on the same local physical or wireless network
claim to have the same IP address. A second assignment of an address generally stops the IP
functionality of one or both of the devices. Many modern operating systems notify the
administrator of IP address conflicts.[13][14] When IP addresses are assigned by multiple people
and systems with differing methods, any of them may be at fault.[15][16][17][18][19] If one of the
devices involved in the conflict is the default gateway, access beyond the LAN may be impaired
for all devices on the LAN.

Routing
IP addresses are classified into several classes of operational characteristics: unicast, multicast,
anycast and broadcast addressing.

Unicast addressing
The most common concept of an IP address is in unicast addressing, available in both IPv4 and
IPv6. It normally refers to a single sender or a single receiver, and can be used for both sending
and receiving. Usually, a unicast address is associated with a single device or host, but a device
or host may have more than one unicast address. Sending the same data to multiple unicast
addresses requires the sender to send all the data many times over, once for each recipient.
Broadcast addressing
Broadcasting is an addressing technique available in IPv4 to address data to all possible
destinations on a network in one transmission operation as an all-hosts broadcast. All receivers
capture the network packet. The address 255.255.255.255 is used for network broadcast. In
addition, a more limited directed broadcast uses the all-ones host address with the network
prefix. For example, the destination address used for directed broadcast to devices on the
network 192.0.2.0/24 is 192.0.2.255.
IPv6 does not implement broadcast addressing, and replaces it with multicast to the specially
defined all-nodes multicast address.

Multicast addressing
A multicast address is associated with a group of interested receivers. In IPv4,
addresses 224.0.0.0 through 239.255.255.255 (the former Class D addresses) are designated
as multicast addresses.[20] IPv6 uses the address block with the prefix ff00::/8 for multicast
applications. In either case, the sender sends a single datagram from its unicast address to the
multicast group address and the intermediary routers take care of making copies and sending
them to all receivers that have joined the corresponding multicast group.
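The broadcast and multicast addresses described above can be computed and checked with Python's standard-library `ipaddress` module; a brief sketch using the example network from the text:

```python
import ipaddress

# Directed broadcast: the all-ones host address within a network prefix.
net = ipaddress.ip_network("192.0.2.0/24")
directed = net.broadcast_address          # 192.0.2.255

# Multicast group addresses in IPv4 (224.0.0.0/4) and IPv6 (ff00::/8).
v4_mcast = ipaddress.ip_address("224.0.0.1")   # IPv4 all-hosts group
v6_mcast = ipaddress.ip_address("ff02::1")     # IPv6 all-nodes address
```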

Anycast addressing
Like broadcast and multicast, anycast is a one-to-many routing topology. However, the data
stream is not transmitted to all receivers, just the one which the router decides is logically
closest in the network. Anycast addressing is an inherent feature only of IPv6. In IPv4, anycast
addressing implementations typically operate using the shortest-path metric of BGP routing and
do not take into account congestion or other attributes of the path. Anycast methods are useful
for global load balancing and are commonly used in distributed DNS systems.

Geolocation
A host may use geolocation software to deduce the geolocation of its communicating peer.[21][22]

Public address
A public IP address, in common parlance, is a globally routable unicast IP address, meaning that
the address is not an address reserved for use in private networks, such as those reserved
by RFC 1918, or the various IPv6 address formats of local scope or site-local scope, for
example for link-local addressing. Public IP addresses may be used for communication between
hosts on the global Internet.

Firewalling
For security and privacy considerations, network administrators often desire to restrict public
Internet traffic within their private networks. The source and destination IP addresses contained
in the headers of each IP packet are a convenient means to discriminate traffic by IP address
blocking or by selectively tailoring responses to external requests to internal servers. This is
achieved with firewall software running on the network's gateway router. A database of IP
addresses of permissible traffic may be maintained in blacklists or whitelists.

Address translation
Multiple client devices can appear to share an IP address, either because they are part of
a shared hosting web server environment or because an IPv4 network address translator (NAT)
or proxy server acts as an intermediary agent on behalf of the client, in which case the real
originating IP address might be masked from the server receiving a request. A common practice
is to have a NAT mask a large number of devices in a private network. Only the "outside"
interface(s) of the NAT needs to have an Internet-routable address. [23]
Commonly, the NAT device maps TCP or UDP port numbers on the side of the larger, public
network to individual private addresses on the masqueraded network.
In residential networks, NAT functions are usually implemented in a residential gateway. In this
scenario, the computers connected to the router have private IP addresses and the router has a
public address on its external interface to communicate on the Internet. The internal computers
appear to share one public IP address.
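The port-mapping behavior described above can be sketched as a toy translation table (illustrative only; the class and all names here are hypothetical, and real NAT devices also track protocol state, timeouts, and checksum rewriting):

```python
import itertools

class NaptTable:
    """Toy sketch of port-translating NAT: maps (private_ip, private_port)
    flows to fresh ports on a single public address."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self._ports = itertools.count(49152)   # start of the ephemeral range
        self.out = {}    # (priv_ip, priv_port) -> public port
        self.back = {}   # public port -> (priv_ip, priv_port)

    def translate_out(self, priv_ip: str, priv_port: int):
        """Rewrite an outbound flow to the public address; reuse an
        existing mapping if the flow has been seen before."""
        key = (priv_ip, priv_port)
        if key not in self.out:
            pub_port = next(self._ports)
            self.out[key] = pub_port
            self.back[pub_port] = key
        return self.public_ip, self.out[key]

    def translate_in(self, pub_port: int):
        """Map an inbound packet's destination port back to the
        private host that owns the flow."""
        return self.back[pub_port]

nat = NaptTable("203.0.113.5")
```

Only the public address and port are visible to the outside; the reverse table is what lets replies find their way back to the private host.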

Diagnostic tools
Computer operating systems provide various diagnostic tools to examine network interfaces and
address configuration. Microsoft Windows provides the command-line
interface tools ipconfig and netsh and users of Unix-like systems may use ifconfig, netstat, route,
lanstat, fstat, and iproute2 utilities to accomplish the task.

Domain name
This article is about domain names in the Internet. For other uses, see Domain (disambiguation).

The hierarchy of labels in a fully qualified domain name.

A domain name is an identification string that defines a realm of administrative autonomy, authority,
or control within the Internet. Domain names are used in various networking contexts and for
application-specific naming and addressing purposes. In general, a domain name identifies
a network domain, or it represents an Internet Protocol (IP) resource, such as a personal computer
used to access the Internet, a server computer hosting a web site, or the web site itself or any other
service communicated via the Internet. In 2017, 330.6 million domain names had been registered. [1]
Domain names are formed by the rules and procedures of the Domain Name System (DNS). Any
name registered in the DNS is a domain name. Domain names are organized in subordinate levels
(subdomains) of the DNS root domain, which is nameless. The first-level set of domain names are
the top-level domains (TLDs), including the generic top-level domains (gTLDs), such as the
prominent domains com, info, net, edu, and org, and the country code top-level domains(ccTLDs).
Below these top-level domains in the DNS hierarchy are the second-level and third-level domain
names that are typically open for reservation by end-users who wish to connect local area networks
to the Internet, create other publicly accessible Internet resources or run web sites.
The registration of these domain names is usually administered by domain name registrars who sell
their services to the public.
A fully qualified domain name (FQDN) is a domain name that is completely specified with all labels in
the hierarchy of the DNS, having no parts omitted. Labels in the Domain Name System are case-
insensitive, and may therefore be written in any desired capitalization method, but most commonly
domain names are written in lowercase in technical contexts.
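The case-insensitivity of labels can be illustrated with a small comparison helper (a sketch; real DNS software also handles internationalized labels and other normalization details):

```python
def dns_equal(a: str, b: str) -> bool:
    """Compare two domain names the way DNS does: case-insensitively,
    and ignoring an optional trailing root dot on a fully qualified name."""
    return a.rstrip(".").lower() == b.rstrip(".").lower()
```

Thus `En.Wikipedia.Org`, `en.wikipedia.org`, and the fully qualified `en.wikipedia.org.` all name the same resource.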


Purpose
Domain names serve to identify Internet resources, such as computers, networks, and services, with
a text-based label that is easier to memorize than the numerical addresses used in the Internet
protocols. A domain name may represent entire collections of such resources or individual
instances. Individual Internet host computers use domain names as host identifiers, also called host
names. The term host name is also used for the leaf labels in the domain name system, usually
without further subordinate domain name space. Host names appear as a component in Uniform
Resource Locators (URLs) for Internet resources such as web sites (e.g., en.wikipedia.org).
Domain names are also used as simple identification labels to indicate ownership or control of a
resource. Such examples are the realm identifiers used in the Session Initiation Protocol (SIP),
the Domain Keys used to verify DNS domains in e-mail systems, and in many other Uniform
Resource Identifiers (URIs).
An important function of domain names is to provide easily recognizable and memorizable names to
numerically addressed Internet resources. This abstraction allows any resource to be moved to a
different physical location in the address topology of the network, globally or locally in an intranet.
Such a move usually requires changing the IP address of a resource and the corresponding
translation of this IP address to and from its domain name.
Domain names are used to establish a unique identity. Organizations can choose a domain name
that corresponds to their name, helping Internet users to reach them easily.
A generic domain is a name that defines a general category, rather than a specific or personal
instance, for example, the name of an industry, rather than a company name. Some examples of
generic names are books.com, music.com, and travel.info. Companies have created brands based
on generic names, and such generic domain names may be valuable.[3]
Domain names are often simply referred to as domains and domain name registrants are frequently
referred to as domain owners, although domain name registration with a registrar does not confer
any legal ownership of the domain name, only an exclusive right of use for a particular duration of
time. The use of domain names in commerce may subject them to trademark law.

History
The practice of using a simple memorable abstraction of a host's numerical address on a computer
network dates back to the ARPANET era, before the advent of today's commercial Internet. In the
early network, each computer on the network retrieved the hosts file (HOSTS.TXT) from a computer at
SRI (now SRI International),[4][5] which mapped computer host names to numerical addresses. The
rapid growth of the network made it impossible to maintain a centrally organized hostname registry
and in 1983 the Domain Name System was introduced on the ARPANET and published by
the Internet Engineering Task Force as RFC 882 and RFC 883.

Domain name space


Today, the Internet Corporation for Assigned Names and Numbers (ICANN) manages the top-level
development and architecture of the Internet domain name space. It authorizes domain name
registrars, through which domain names may be registered and reassigned.
The hierarchical domain name system is organized into zones, each served by domain name servers.

The domain name space consists of a tree of domain names. Each node in the tree holds
information associated with the domain name. The tree sub-divides into zones beginning at the DNS
root zone.

Domain name syntax


A domain name consists of one or more parts, technically called labels, that are conventionally
concatenated, and delimited by dots, such as example.com.

• The right-most label conveys the top-level domain; for example, the domain name www.example.com belongs to the top-level domain com.
• The hierarchy of domains descends from right to left; each label to the left specifies a subdivision, or subdomain, of the domain to the right. For example, the label example specifies a node example.com as a subdomain of the com domain, and www is a label to create www.example.com, a subdomain of example.com. Each label may contain from 1 to 63 octets. The empty label is reserved for the root node and, when fully qualified, is expressed as the empty label terminated by a dot. The full domain name may not exceed a total length of 253 ASCII characters in its textual representation.[6] Thus, when using a single character per label, the limit is 127 levels: 127 characters plus 126 dots have a total length of 253. In practice, some domain registries may have shorter limits.
• A hostname is a domain name that has at least one associated IP address. For example, the domain names www.example.com and example.com are also hostnames, whereas the com domain is not. However, other top-level domains, particularly country code top-level domains, may indeed have an IP address, and if so, they are also hostnames.
• Hostnames impose restrictions on the characters allowed in the corresponding domain name. A valid hostname is also a valid domain name, but a valid domain name may not necessarily be valid as a hostname.
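A minimal Python sketch of these syntax rules; the function name is illustrative, and the letters-digits-hyphen (LDH) pattern reflects the traditional hostname restriction mentioned above:

```python
import re

# One label: 1-63 characters, letters/digits/hyphen (LDH rule for
# hostnames), with no hyphen at either end of the label.
LDH_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name: str) -> bool:
    """Check the length and character rules described above."""
    if len(name) > 253:                   # total textual length limit
        return False
    labels = name.rstrip(".").split(".")  # a trailing dot marks an FQDN
    return all(LDH_LABEL.match(label) for label in labels)

assert is_valid_hostname("www.example.com")
assert is_valid_hostname("example.com.")            # fully qualified form
assert not is_valid_hostname("-bad-.example.com")   # hyphen at label edge
assert not is_valid_hostname("a" * 64 + ".com")     # label longer than 63
```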
Top-level domains
The top-level domains (TLDs) such as com, net and org are the highest level of domain names of
the Internet. Top-level domains form the DNS root zone of the hierarchical Domain Name System.
Every domain name ends with a top-level domain label.
When the Domain Name System was devised in the 1980s, the domain name space was divided
into two main groups of domains.[7] The country code top-level domains (ccTLD) were primarily
based on the two-character territory codes of ISO 3166 country abbreviations. In addition, a group of
seven generic top-level domains (gTLD) was implemented which represented a set of categories of
names and multi-organizations.[8] These were the domains gov, edu, com, mil, org, net, and int.
During the growth of the Internet, it became desirable to create additional generic top-level domains.
As of October 2009, 21 generic top-level domains and 250 two-letter country-code top-level domains
existed.[9] In addition, the ARPA domain serves technical purposes in the infrastructure of the Domain
Name System.
During the 32nd International Public ICANN Meeting in Paris in 2008, [10] ICANN started a new
process of TLD naming policy to take a "significant step forward on the introduction of new generic
top-level domains." This program envisions the availability of many new or already proposed
domains, as well as a new application and implementation process.[11]Observers believed that the
new rules could result in hundreds of new top-level domains to be registered. [12] In 2012, the program
commenced, and received 1930 applications.[13]By 2016, the milestone of 1000 live gTLD was
reached.
The Internet Assigned Numbers Authority (IANA) maintains an annotated list of top-level domains in
the DNS root zone database.[14]
For special purposes, such as network testing, documentation, and other applications, IANA also
reserves a set of special-use domain names.[15] This list contains domain names such
as example, local, localhost, and test. Other top-level domain names containing trademarks are
registered for corporate use. Cases include brands such as BMW, Google, and Canon. [16]

Second-level and lower level domains


Below the top-level domains in the domain name hierarchy are the second-level domain (SLD)
names. These are the names directly to the left of .com, .net, and the other top-level domains. As an
example, in the domain example.co.uk, co is the second-level domain.
Next are third-level domains, which are written immediately to the left of a second-level domain.
There can be fourth- and fifth-level domains, and so on, with virtually no limitation. An example of an
operational domain name with four levels of domain labels is sos.state.oh.us. Each label is
separated by a full stop (dot). 'sos' is said to be a sub-domain of 'state.oh.us', and 'state' a sub-
domain of 'oh.us', etc. In general, subdomains are domains subordinate to their parent domain. An
example of very deep levels of subdomain ordering are the IPv6 reverse resolution DNS zones, e.g.,
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa, which is the reverse DNS
resolution domain name for the IP address of a loopback interface, or the localhost name.
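The long ip6.arpa name quoted above can be derived mechanically: each of the 32 hexadecimal nibbles of the 128-bit address becomes one label, in reverse order. Python's ipaddress module performs this derivation:

```python
import ipaddress

# ::1 is the IPv6 loopback address; its reverse-resolution name consists
# of 32 nibble labels (one "1" followed by 31 zeros) under ip6.arpa.
loopback = ipaddress.ip_address("::1")
name = loopback.reverse_pointer
print(name)

assert name == "1." + "0." * 31 + "ip6.arpa"
```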
Second-level (or lower-level, depending on the established parent hierarchy) domain names are
often created based on the name of a company (e.g., bbc.co.uk) or a product or service (e.g., hotmail.com). Below these levels, the next domain name component has been used to
designate a particular host server. Therefore, ftp.example.com might be an FTP
server, www.example.com would be a World Wide Web server, and mail.example.com could be an
email server, each intended to perform only the implied function. Modern technology allows multiple
physical servers with either different (cf. load balancing) or even identical addresses (cf. anycast) to
serve a single hostname or domain name, or multiple domain names to be served by a single
computer. The latter is very popular in Web hosting service centers, where service providers host
the websites of many organizations on just a few servers.
The hierarchical DNS labels or components of domain names are separated in a fully qualified name
by the full stop (dot, .).

Internationalized domain names


Main article: Internationalized domain name

The character set allowed in the Domain Name System is based on ASCII and does not allow the
representation of names and words of many languages in their native scripts or
alphabets. ICANN approved the Internationalized domain name (IDNA) system, which
maps Unicode strings used in application user interfaces into the valid DNS character set by an
encoding called Punycode. For example, københavn.eu is mapped to xn--kbenhavn-54a.eu.
Many registries have adopted IDNA.
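Python ships an "idna" codec (implementing the older IDNA 2003 rules) that performs this Punycode mapping, reproducing the example above:

```python
# Encode a Unicode domain name to its ASCII-compatible (Punycode) form;
# each non-ASCII label gains the "xn--" prefix.
ascii_form = "københavn.eu".encode("idna")
print(ascii_form)  # b'xn--kbenhavn-54a.eu'

# Decoding reverses the mapping for display in user interfaces.
assert ascii_form.decode("idna") == "københavn.eu"
```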

Domain name registration


History
The first commercial Internet domain name, in the TLD com, was registered on 15 March 1985 in the
name symbolics.com by Symbolics Inc., a computer systems firm in Cambridge, Massachusetts.
By 1992, fewer than 15,000 com domains had been registered.
In the first quarter of 2015, 294 million domain names had been registered. [17] A large fraction of them
are in the com TLD, which as of December 21, 2014, had 115.6 million domain names, [18] including
11.9 million online business and e-commerce sites, 4.3 million entertainment sites, 3.1 million
finance related sites, and 1.8 million sports sites.[19] As of July 2012 the com TLD had more
registrations than all of the ccTLDs combined. [20]

Administration
The right to use a domain name is delegated by domain name registrars, which are accredited by
the Internet Corporation for Assigned Names and Numbers (ICANN), the organization charged with
overseeing the name and number systems of the Internet. In addition to ICANN, each top-level
domain (TLD) is maintained and serviced technically by an administrative organization operating a
registry. A registry is responsible for maintaining the database of names registered within the TLD it
administers. The registry receives registration information from each domain name registrar
authorized to assign names in the corresponding TLD and publishes the information using a special
service, the WHOIS protocol.
Registries and registrars usually charge an annual fee for the service of delegating a domain name
to a user and providing a default set of name servers. Often, this transaction is termed a sale or
lease of the domain name, and the registrant may sometimes be called an "owner", but no such
legal relationship is actually associated with the transaction, only the exclusive right to use the
domain name. More correctly, authorized users are known as "registrants" or as "domain holders".
ICANN publishes the complete list of TLD registries and domain name registrars. Registrant
information associated with domain names is maintained in an online database accessible with the
WHOIS protocol. For most of the 250 country code top-level domains (ccTLDs), the domain
registries maintain the WHOIS (Registrant, name servers, expiration dates, etc.) information.
Some domain name registries, often called network information centers (NIC), also function as
registrars to end-users. The major generic top-level domain registries, such as for
the com, net, org, info domains and others, use a registry-registrar model consisting of hundreds of
domain name registrars (see lists at ICANN[21] or VeriSign).[22] In this method of management, the
registry only manages the domain name database and the relationship with the registrars.
The registrants (users of a domain name) are customers of the registrar, in some cases through
additional layers of resellers.
A few alternative DNS root providers also try to compete with or complement ICANN's role in domain name administration; however, most of them have failed to gain wide recognition, and thus domain names offered by those alternative roots cannot be resolved on most Internet-connected machines without additional dedicated configuration.

Browser
Browser or browsing may refer to:
• Web browser, used to access the World Wide Web
• Hardware browser, used to detect, display, and control hardware devices on a server or network, allowing users to interact with those devices
• File browser, also known as a file manager, used to manage files and related objects
• Help browser, for reading online help
• Browser service, a feature of Microsoft Windows that lets users browse and locate shared resources on neighboring computers
• Code browser, for navigating source code

Web search engine


A web search engine or Internet search engine is a software system that is designed to carry
out web search (Internet search), which means to search the World Wide Web in a systematic way
for particular information specified in a textual web search query. The search results are generally
presented in a line of results, often referred to as search engine results pages (SERPs). The
information may be a mix of links to web pages, images, videos, infographics, articles, research
papers, and other types of files. Some search engines also mine data available in databases or open
directories. Unlike web directories, which are maintained only by human editors, search engines also
maintain real-time information by running an algorithm on a web crawler. Internet content that is not
capable of being searched by a web search engine is generally described as the deep web.

The results of a search for the term "lunar eclipse" in a web-based image search engine

Internet access
Internet access is the ability of individuals and organizations to connect to
the Internet using computer terminals, computers, and other devices; and to access services such
as email and the World Wide Web. Internet access is sold by Internet service providers (ISPs)
delivering connectivity at a wide range of data transfer rates via various networking technologies.
Many organizations, including a growing number of municipal entities, also provide cost-free wireless
access.
Availability of Internet access was once limited, but has grown rapidly. In 1995, only 0.04 percent of
the world's population had access, with well over half of those living in the United States, [1] and
consumer use was through dial-up. By the first decade of the 21st century, many consumers in
developed nations used faster broadband technology, and by 2014, 41 percent of the world's
population had access,[2] broadband was almost ubiquitous worldwide, and global average
connection speeds exceeded one megabit per second.

History[edit]
Main article: History of the Internet

The Internet developed from the ARPANET, which was funded by the US government to support
projects within the government and at universities and research laboratories in the US – but grew
over time to include most of the world's large universities and the research arms of many technology
companies.[4][5][6] Use by a wider audience only came in 1995 when restrictions on the use of the
Internet to carry commercial traffic were lifted. [7]
In the early to mid-1980s, most Internet access was from personal
computers and workstations directly connected to local area networks or from dial-up
connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s, while
modem data-rates grew from 1200 bit/s in the early 1980s, to 56 kbit/s by the late 1990s. Initially,
dial-up connections were made from terminals or computers running terminal emulation
software to terminal servers on LANs. These dial-up connections did not support end-to-end use of
the Internet protocols and only provided terminal to host connections. The introduction of network
access servers supporting the Serial Line Internet Protocol (SLIP) and later the point-to-point
protocol (PPP) extended the Internet protocols and made the full range of Internet services available
to dial-up users, albeit more slowly due to the lower data rates available over dial-up.
Broadband Internet access, often shortened to just broadband, is simply defined as "Internet access
that is always on, and faster than the traditional dial-up access"[8][9] and so covers a wide range of
technologies. Broadband connections are typically made using a computer's built-in Ethernet networking capabilities, or by using a NIC expansion card.
Most broadband services provide a continuous "always on" connection; there is no dial-in process
required, and it does not interfere with voice use of phone lines. [10] Broadband provides improved
access to Internet services such as:

• Faster World Wide Web browsing
• Faster downloading of documents, photographs, videos, and other large files
• Telephony, radio, television, and videoconferencing
• Virtual private networks and remote system administration
• Online gaming, especially massively multiplayer online role-playing games, which are interaction-intensive
In the 1990s, the National Information Infrastructure initiative in the U.S. made broadband Internet
access a public policy issue.[11] In 2000, most Internet access to homes was provided using dial-up,
while many businesses and schools were using broadband connections. In 2000 there were just
under 150 million dial-up subscriptions in the 34 OECD countries [12] and fewer than 20 million
broadband subscriptions. By 2004, broadband had grown and dial-up had declined so that the
number of subscriptions was roughly equal at about 130 million each. In 2010, in the OECD countries,
over 90% of the Internet access subscriptions used broadband, broadband had grown to more than
300 million subscriptions, and dial-up subscriptions had declined to fewer than 30 million. [13]
The broadband technologies in widest use are ADSL and cable Internet access. Newer technologies
include VDSL and optical fibre extended closer to the subscriber in both telephone and cable
plants. Fibre-optic communication, while only recently being used in premises and to the
curb schemes, has played a crucial role in enabling broadband Internet access by making
transmission of information at very high data rates over longer distances much more cost-effective
than copper wire technology.
In areas not served by ADSL or cable, some community organizations and local governments are
installing Wi-Fi networks. Wireless, satellite and microwave Internet are often used in rural,
undeveloped, or other hard to serve areas where wired Internet is not readily available.
Newer technologies being deployed for fixed (stationary) and mobile broadband access
include WiMAX, LTE, and fixed wireless, e.g., Motorola Canopy.
Starting in roughly 2006, mobile broadband access is increasingly available at the consumer level
using "3G" and "4G" technologies such as HSPA, EV-DO, HSPA+, and LTE.

Availability[edit]
In addition to access from home, school, and the workplace Internet access may be available
from public places such as libraries and Internet cafes, where computers with Internet connections
are available. Some libraries provide stations for physically connecting users' laptops to local area
networks (LANs).
Wireless Internet access points are available in public places such as airport halls, in some cases
just for brief use while standing. Some access points may also provide coin-operated computers.
Various terms are used, such as "public Internet kiosk", "public access terminal", and
"Web payphone". Many hotels also have public terminals, usually fee based.
Coffee shops, shopping malls, and other venues increasingly offer wireless access to computer
networks, referred to as hotspots, for users who bring their own wireless-enabled devices such as
a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A Wi-Fi hotspot need not be limited to a confined location, since multiple hotspots combined can cover a whole campus or park, or even an entire city.
Additionally, mobile broadband access allows smartphones and other digital devices to connect to
the Internet from any location from which a mobile phone call can be made, subject to the
capabilities of that mobile network.

Speed[edit]

Data rate units (SI)

Unit                         Symbol   Bits           Bytes
Kilobit per second (10³)     kbit/s   1,000 bit/s    125 B/s
Megabit/s (10⁶)              Mbit/s   1,000 kbit/s   125 kB/s
Gigabit/s (10⁹)              Gbit/s   1,000 Mbit/s   125 MB/s
Terabit/s (10¹²)             Tbit/s   1,000 Gbit/s   125 GB/s
Petabit/s (10¹⁵)             Pbit/s   1,000 Tbit/s   125 TB/s

Unit                         Symbol   Bits           Bytes
Kilobyte per second (10³)    kB/s     8,000 bit/s    1,000 B/s
Megabyte/s (10⁶)             MB/s     8,000 kbit/s   1,000 kB/s
Gigabyte/s (10⁹)             GB/s     8,000 Mbit/s   1,000 MB/s
Terabyte/s (10¹²)            TB/s     8,000 Gbit/s   1,000 GB/s
Petabyte/s (10¹⁵)            PB/s     8,000 Tbit/s   1,000 TB/s

Main articles: Data rates, Bit rates, Bandwidth (computing), and Device data rates
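The relationship between the bit-based and byte-based units in the tables above is plain arithmetic: one byte is eight bits and each SI prefix step is a factor of 1,000. A hypothetical helper:

```python
def mbit_to_mbyte(mbit_per_s: float) -> float:
    """Convert a data rate in Mbit/s to MB/s (1 byte = 8 bits)."""
    return mbit_per_s / 8

assert mbit_to_mbyte(8.0) == 1.0     # 8 Mbit/s = 1 MB/s
assert mbit_to_mbyte(1.0) == 0.125   # 1 Mbit/s = 125 kB/s
```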

The bit rates for dial-up modems range from as little as 110 bit/s in the late 1950s, to a maximum of
from 33 to 64 kbit/s (V.90 and V.92) in the late 1990s. Dial-up connections generally require the
dedicated use of a telephone line. Data compression can boost the effective bit rate for a dial-up
modem connection to from 220 (V.42bis) to 320 (V.44) kbit/s.[14] However, the effectiveness of data
compression is quite variable, depending on the type of data being sent, the condition of the
telephone line, and a number of other factors. In reality, the overall data rate rarely exceeds 150
kbit/s.[15]
Broadband technologies supply considerably higher bit rates than dial-up, generally without
disrupting regular telephone use. Various minimum data rates and maximum latencies have been
used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s.[16] In 1988
the CCITT standards body defined "broadband service" as requiring transmission channels capable
of supporting bit rates greater than the primary rate which ranged from about 1.5 to 2 Mbit/s.[17] A
2006 Organisation for Economic Co-operation and Development (OECD) report defined broadband
as having download data transfer rates equal to or faster than 256 kbit/s.[18] And in 2015 the
U.S. Federal Communications Commission (FCC) defined "Basic Broadband" as data transmission
speeds of at least 25 Mbit/s downstream (from the Internet to the user's computer) and 3 Mbit/s
upstream (from the user's computer to the Internet). [19] The trend is to raise the threshold of the
broadband definition as higher data rate services become available. [20]
The higher data rate dial-up modems and many broadband services are "asymmetric"—supporting
much higher data rates for download (toward the user) than for upload (toward the Internet).
Data rates, including those given in this article, are usually defined and advertised in terms of the
maximum or peak download rate. In practice, these maximum data rates are not always reliably
available to the customer.[21] Actual end-to-end data rates can be lower due to a number of factors.[22] In late June 2016, Internet connection speeds averaged about 6 Mbit/s globally.[23] Physical link
quality can vary with distance and for wireless access with terrain, weather, building construction,
antenna placement, and interference from other radio sources. Network bottlenecks may exist at
points anywhere on the path from the end-user to the remote server or service being used and not
just on the first or last link providing Internet access to the end-user.
Network congestion[edit]
Users may share access over a common network infrastructure. Since most users do not use their
full connection capacity all of the time, this aggregation strategy (known as contended service)
usually works well and users can burst to their full data rate at least for brief periods. However, peer-
to-peer (P2P) file sharing and high-quality streaming video can require high data-rates for extended
periods, which violates these assumptions and can cause a service to become oversubscribed,
resulting in congestion and poor performance. The TCP protocol includes flow-control mechanisms
that automatically throttle back on the bandwidth being used during periods of network congestion.
This is fair in the sense that all users that experience congestion receive less bandwidth, but it can
be frustrating for customers and a major problem for ISPs. In some cases the amount of bandwidth
actually available may fall below the threshold required to support a particular service such as video
conferencing or streaming live video, effectively making the service unavailable.
When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to
classes of users or for particular services. This is known as traffic shaping and careful use can
ensure a better quality of service for time critical services even on extremely busy networks.
However, overuse can lead to concerns about fairness and network neutrality or even charges
of censorship, when some types of traffic are severely or completely blocked.

Outages[edit]
An Internet blackout or outage can be caused by local signaling interruptions. Disruptions
of submarine communications cables may cause blackouts or slowdowns to large areas, such as in
the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small
number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging
for scrap metal severed most connectivity for the nation of Armenia. [24] Internet blackouts affecting
almost entire countries can be achieved by governments as a form of Internet censorship, as in the
blockage of the Internet in Egypt, whereby approximately 93%[25] of networks were without access in
2011 in an attempt to stop mobilization for anti-government protests.[26]
On April 25, 1997, due to a combination of human error and software bug, an incorrect routing table
at MAI Network Service (a Virginia Internet service provider) propagated across backbone routers
and caused major disruption to Internet traffic for a few hours. [27]
See also: AS 7007 incident and List of web host service outages

Technologies[edit]
When the Internet is accessed using a modem, digital data is converted to analog for transmission
over analog networks such as the telephone and cable networks.[10] A computer or other device
accessing the Internet would either be connected directly to a modem that communicates with
an Internet service provider (ISP) or the modem's Internet connection would be shared via a Local
Area Network (LAN) which provides access in a limited area such as a home, school, computer
laboratory, or office building.
Although a connection to a LAN may provide very high data-rates within the LAN, actual Internet
access speed is limited by the upstream link to the ISP. LANs may be wired or
wireless. Ethernet over twisted pair cabling and Wi-Fi are the two most common technologies used
to build LANs today, but ARCNET, Token Ring, Localtalk, FDDI, and other technologies were used
in the past.
Ethernet is the name of the IEEE 802.3 standard for physical LAN communication[28] and Wi-Fi is a
trade name for a wireless local area network (WLAN) that uses one of the IEEE 802.11 standards.
[29] Ethernet cables are interconnected via switches and routers. Wi-Fi networks are built using one or more wireless antennas, called access points.
Many "modems" provide the additional functionality to host a LAN so most Internet access today is
through a LAN[citation needed], often a very small LAN with just one or two devices attached. And while
LANs are an important form of Internet access, this raises the question of how and at what data rate
the LAN itself is connected to the rest of the global Internet. The technologies described below are
used to make these connections.

Hardwired broadband access[edit]


The term broadband includes a broad range of technologies, all of which provide higher data rate
access to the Internet. The following technologies use wires or cables in contrast to wireless
broadband described later.
Dial-up access[edit]
"Dial up modem noises"

MENU

0:00
Typical noises of a dial-up
modem while establishing
connection with a
local ISPin order to get
access to the Internet.

Problems playing this file? See  media


help.

Dial-up Internet access uses a modem and a phone call placed over the public switched telephone
network (PSTN) to connect to a pool of modems operated by an ISP. The modem converts a
computer's digital signal into an analog signal that travels over a phone line's local loop until it
reaches a telephone company's switching facilities or central office (CO) where it is switched to
another phone line that connects to another modem at the remote end of the connection. [30]
Operating on a single channel, a dial-up connection monopolizes the phone line and is one of the
slowest methods of accessing the Internet. Dial-up is often the only form of Internet access available
in rural areas, as it requires no new infrastructure beyond the already existing telephone network to connect to the Internet. Typically, dial-up connections do not exceed a speed of 56 kbit/s, as they
are primarily made using modems that operate at a maximum data rate of 56 kbit/s downstream
(towards the end user) and 34 or 48 kbit/s upstream (toward the global Internet). [10]
Multilink dial-up[edit]
Multilink dial-up provides increased bandwidth by channel bonding multiple dial-up connections and
accessing them as a single data channel.[31] It requires two or more modems, phone lines, and dial-
up accounts, as well as an ISP that supports multilinking – and of course any line and data charges
are also doubled. This inverse multiplexing option was briefly popular with some high-end users
before ISDN, DSL and other technologies became available. Diamond and other vendors created
special modems to support multilinking.[32]
Integrated Services Digital Network[edit]
Integrated Services Digital Network (ISDN) is a switched telephone service capable of transporting
voice and digital data, and is one of the oldest Internet access methods. ISDN has been used for
voice, video conferencing, and broadband data applications. ISDN was very popular in Europe, but
less common in North America. Its use peaked in the late 1990s before the availability
of DSL and cable modem technologies.[33]
Basic rate ISDN, known as ISDN-BRI, has two 64 kbit/s "bearer" or "B" channels. These channels
can be used separately for voice or data calls or bonded together to provide a 128 kbit/s service.
Multiple ISDN-BRI lines can be bonded together to provide data rates above 128 kbit/s. Primary rate
ISDN, known as ISDN-PRI, has 23 bearer channels (64 kbit/s each) for a combined data rate of
1.5 Mbit/s (US standard). An ISDN E1 (European standard) line has 30 bearer channels and a
combined data rate of 1.9 Mbit/s.
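The ISDN figures above are simple multiples of the 64 kbit/s bearer channel; the quoted ~1.5 and ~1.9 Mbit/s line rates add framing overhead on top of these payloads:

```python
B_CHANNEL = 64  # kbit/s per ISDN bearer ("B") channel

assert 2 * B_CHANNEL == 128     # bonded ISDN-BRI service
assert 23 * B_CHANNEL == 1472   # ISDN-PRI payload (US, ~1.5 Mbit/s line rate)
assert 30 * B_CHANNEL == 1920   # E1 payload (Europe, ~1.9 Mbit/s line rate)
```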
Leased lines[edit]
Leased lines are dedicated lines used primarily by ISPs, business, and other large enterprises to
connect LANs and campus networks to the Internet using the existing infrastructure of the public
telephone network or other providers. Delivered using wire, optical fiber, and radio, leased lines are
used to provide Internet access directly as well as the building blocks from which several other forms
of Internet access are created. [34]
T-carrier technology dates to 1957 and provides data rates that range from 56 and 64 kbit/s (DS0)
to 1.5 Mbit/s (DS1 or T1), to 45 Mbit/s (DS3 or T3). A T1 line carries 24 voice or data channels (24
DS0s), so customers may use some channels for data and others for voice traffic or use all 24
channels for clear channel data. A DS3 (T3) line carries 28 DS1 (T1) channels. Fractional T1 lines
are also available in multiples of a DS0 to provide data rates between 56 and 1500 kbit/s. T-carrier
lines require special termination equipment that may be separate from or integrated into a router or
switch and which may be purchased or leased from an ISP.[35] In Japan the equivalent standard is
J1/J3. In Europe, a slightly different standard, E-carrier, provides 32 user channels (64 kbit/s) on an
E1 (2.0 Mbit/s) and 512 user channels or 16 E1s on an E3 (34.4 Mbit/s).
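The T-carrier capacities likewise follow from the 64 kbit/s DS0 building block, as a quick check shows:

```python
DS0 = 64  # kbit/s per voice or data channel

t1_payload = 24 * DS0   # a T1/DS1 carries 24 DS0s
t3_channels = 28 * 24   # a T3/DS3 carries 28 T1s

assert t1_payload == 1536   # kbit/s of usable capacity (1.544 Mbit/s with framing)
assert t3_channels == 672   # DS0 channels on a DS3
```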
Synchronous Optical Networking (SONET, in the U.S. and Canada) and Synchronous Digital
Hierarchy (SDH, in the rest of the world) are the standard multiplexing protocols used to carry high-
data-rate digital bit-streams over optical fiber using lasers or highly coherent light from light-emitting
diodes (LEDs). At lower transmission rates data can also be transferred via an electrical interface.
The basic unit of framing is an OC-3c (optical) or STS-3c (electrical) which carries 155.520 Mbit/s.
Thus an OC-3c will carry three OC-1 (51.84 Mbit/s) payloads each of which has enough capacity to
include a full DS3. Higher data rates are delivered in OC-3c multiples of four providing OC-
12c (622.080 Mbit/s), OC-48c(2.488 Gbit/s), OC-192c (9.953 Gbit/s), and OC-768c (39.813 Gbit/s).
The "c" at the end of the OC labels stands for "concatenated" and indicates a single data stream
rather than several multiplexed data streams.[34]
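The OC-n rates quoted above are simple multiples of the 51.84 Mbit/s OC-1 rate, which a short sketch can reproduce (illustrative names, not a real API):

```python
# Illustrative sketch: SONET OC-n line rates as multiples of OC-1 (51.84 Mbit/s).
OC1_MBITS = 51.84

def oc_rate_mbits(n):
    """Line rate of an OC-n signal in Mbit/s."""
    return n * OC1_MBITS

for n in (3, 12, 48, 192, 768):
    print(f"OC-{n}c: {oc_rate_mbits(n):.3f} Mbit/s")
```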
The 1, 10, 40, and 100 gigabit Ethernet (GbE, 10 GbE, 40/100 GbE) IEEE standards (802.3) allow
digital data to be delivered over copper wiring at distances to 100 m and over optical fiber at
distances to 40 km.[36]
Cable Internet access[edit]
Main article: Cable Internet access

Cable Internet provides access using a cable modem on hybrid fiber coaxial wiring originally
developed to carry television signals. Either fiber-optic or coaxial copper cable may connect a node
to a customer's location at a connection known as a cable drop. In a cable modem termination
system, all nodes for cable subscribers in a neighborhood connect to a cable company's central
office, known as the "head end." The cable company then connects to the Internet using a variety of
means – usually fiber optic cable or digital satellite and microwave transmissions. [37] Like DSL,
broadband cable provides a continuous connection with an ISP.
Downstream, the direction toward the user, bit rates can be as much as 400 Mbit/s for business
connections, and 320 Mbit/s for residential service in some countries. Upstream traffic, originating at
the user, ranges from 384 kbit/s to more than 20 Mbit/s. Broadband cable access tends to service
fewer business customers because existing television cable networks tend to serve residential buildings, and commercial buildings do not always include wiring for coaxial cable networks.[38] In
addition, because broadband cable subscribers share the same local line, communications may be
intercepted by neighboring subscribers. Cable networks regularly provide encryption schemes for
data traveling to and from customers, but these schemes may be thwarted. [37]
Digital subscriber line (DSL, ADSL, SDSL, and VDSL)[edit]
Digital subscriber line (DSL) service provides a connection to the Internet through the telephone
network. Unlike dial-up, DSL can operate using a single phone line without preventing normal use of
the telephone line for voice phone calls. DSL uses the high frequencies, while the low (audible)
frequencies of the line are left free for regular telephone communication.[10] These frequency bands
are subsequently separated by filters installed at the customer's premises.
DSL originally stood for "digital subscriber loop". In telecommunications marketing, the term digital
subscriber line is widely understood to mean asymmetric digital subscriber line (ADSL), the most
commonly installed variety of DSL. The data throughput of consumer DSL services typically ranges
from 256 kbit/s to 20 Mbit/s in the direction to the customer (downstream), depending on DSL
technology, line conditions, and service-level implementation. In ADSL, the data throughput in the
upstream direction, (i.e. in the direction to the service provider) is lower than that in the downstream
direction (i.e. to the customer), hence the designation of asymmetric. [39] With a symmetric digital
subscriber line (SDSL), the downstream and upstream data rates are equal. [40]
Very-high-bit-rate digital subscriber line (VDSL or VHDSL, ITU G.993.1)[41] is a digital subscriber line
(DSL) standard approved in 2001 that provides data rates up to 52 Mbit/s downstream and 16 Mbit/s
upstream over copper wires[42] and up to 85 Mbit/s down- and upstream on coaxial cable. [43] VDSL is
capable of supporting applications such as high-definition television, as well as telephone services
(voice over IP) and general Internet access, over a single physical connection.
VDSL2 (ITU-T G.993.2) is a second-generation version and an enhancement of VDSL. [44] Approved
in February 2006, it is able to provide data rates exceeding 100 Mbit/s simultaneously in both the
upstream and downstream directions. However, the maximum data rate is achieved at a range of
about 300 meters and performance degrades as distance and loop attenuation increases.
DSL Rings[edit]
DSL Rings (DSLR) or Bonded DSL Rings is a ring topology that uses DSL technology over existing
copper telephone wires to provide data rates of up to 400 Mbit/s.[45]
Fiber to the home[edit]
Fiber-to-the-home (FTTH) is one member of the Fiber-to-the-x (FTTx) family that includes Fiber-to-
the-building or basement (FTTB), Fiber-to-the-premises (FTTP), Fiber-to-the-desk (FTTD), Fiber-to-
the-curb (FTTC), and Fiber-to-the-node (FTTN).[46] These methods all bring data closer to the end
user on optical fibers. The differences between the methods have mostly to do with just how close to
the end user the delivery on fiber comes. All of these delivery methods are similar to hybrid fiber-
coaxial (HFC) systems used to provide cable Internet access.
The use of optical fiber offers much higher data rates over relatively longer distances. Most high-
capacity Internet and cable television backbones already use fiber optic technology, with data
switched to other technologies (DSL, cable, POTS) for final delivery to customers.[47]
Australia began rolling out its National Broadband Network across the country using fiber-optic
cables to 93 percent of Australian homes, schools, and businesses.[48] The project was abandoned
by the subsequent LNP government, in favour of a hybrid FTTN design, which turned out to be more
expensive and introduced delays. Similar efforts are underway in Italy, Canada, India, and many
other countries (see Fiber to the premises by country).[49][50][51][52]
Power-line Internet[edit]
Power-line Internet, also known as Broadband over power lines (BPL), carries Internet data on a
conductor that is also used for electric power transmission.[53] Because of the extensive power line
infrastructure already in place, this technology can provide people in rural and low population areas
access to the Internet with little cost in terms of new transmission equipment, cables, or wires. Data
rates are asymmetric and generally range from 256 kbit/s to 2.7 Mbit/s. [54]
Because these systems use parts of the radio spectrum allocated to other over-the-air
communication services, interference between the services is a limiting factor in the introduction of
power-line Internet systems. The IEEE P1901 standard specifies that all power-line protocols must
detect existing usage and avoid interfering with it. [54]
Power-line Internet has developed faster in Europe than in the U.S. due to a historical difference in
power system design philosophies. Data signals cannot pass through the step-down transformers
used and so a repeater must be installed on each transformer.[54] In the U.S. a transformer serves a small cluster of one to a few houses. In Europe, it is more common for a somewhat larger transformer to serve clusters of 10 to 100 houses. Thus a typical U.S. city requires an order of magnitude more repeaters than a comparable European city.[55]
ATM and Frame Relay[edit]
Asynchronous Transfer Mode (ATM) and Frame Relay are wide-area networking standards that can
be used to provide Internet access directly or as building blocks of other access technologies. For
example, many DSL implementations use an ATM layer over the low-level bitstream layer to enable
a number of different technologies over the same link. Customer LANs are typically connected to an
ATM switch or a Frame Relay node using leased lines at a wide range of data rates. [56][57]
While still widely used, with the advent of Ethernet over optical fiber, MPLS, VPNs and broadband
services such as cable modem and DSL, ATM and Frame Relay no longer play the prominent role
they once did.

Wireless broadband access[edit]


Wireless broadband is used to provide both fixed and mobile Internet access with the following
technologies.
Satellite broadband[edit]

Satellite Internet access via VSAT in Ghana

Satellite Internet access provides fixed, portable, and mobile Internet access.[58] Data rates range
from 2 kbit/s to 1 Gbit/s downstream and from 2 kbit/s to 10 Mbit/s upstream. In the northern
hemisphere, satellite antenna dishes require a clear line of sight to the southern sky, due to the
equatorial position of all geostationary satellites. In the southern hemisphere, this situation is
reversed, and dishes are pointed north. [59][60] Service can be adversely affected by moisture, rain, and
snow (known as rain fade).[59][60][61] The system requires a carefully aimed directional antenna. [60]
Satellites in geostationary Earth orbit (GEO) operate in a fixed position 35,786 km (22,236 miles)
above the Earth's equator. At the speed of light (about 300,000 km/s or 186,000 miles per second), it
takes a quarter of a second for a radio signal to travel from the Earth to the satellite and back. When
other switching and routing delays are added and the delays are doubled to allow for a full round-trip
transmission, the total delay can be 0.75 to 1.25 seconds. This latency is large when compared to
other forms of Internet access with typical latencies that range from 0.015 to 0.2 seconds. Long
latencies negatively affect some applications that require real-time response, particularly online
games, voice over IP, and remote control devices.[62][63] TCP tuning and TCP acceleration techniques
can mitigate some of these problems. GEO satellites do not cover the Earth's polar regions.[59] HughesNet, Exede, AT&T and Dish Network have GEO systems.[64][65][66][67]
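The quarter-second figure above follows directly from the altitude and speed-of-light numbers in the text. This is an illustrative sketch, not a full latency model; it ignores the switching and routing delays mentioned above:

```python
# Illustrative sketch: propagation delay to a geostationary satellite,
# using the figures from the text.
ALTITUDE_KM = 35_786
LIGHT_KM_PER_S = 299_792.458

def geo_propagation_delay_s(legs):
    """Delay for `legs` traversals of the ground-to-satellite path.
    legs=2 is up and back; legs=4 approximates a full request/response."""
    return legs * ALTITUDE_KM / LIGHT_KM_PER_S

print(f"{geo_propagation_delay_s(2):.3f} s")  # ~0.239 s, the "quarter second"
print(f"{geo_propagation_delay_s(4):.3f} s")  # ~0.477 s before other delays
```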
Satellites in low Earth orbit (LEO, below 2000 km or 1243 miles) and medium Earth orbit (MEO,
between 2000 and 35,786 km or 1,243 and 22,236 miles) are less common, operate at lower
altitudes, and are not fixed in their position above the Earth. Lower altitudes allow lower latencies
and make real-time interactive Internet applications more feasible. LEO systems
include Globalstar and Iridium. The O3b Satellite Constellation is a proposed MEO system with a
latency of 125 ms. COMMStellation™ is a LEO system, scheduled for launch in 2015, that is
expected to have a latency of just 7 ms.
Mobile broadband[edit]


Mobile broadband is the marketing term for wireless Internet access delivered through mobile phone
towers to computers, mobile phones (called "cell phones" in North America and South Africa, and
"hand phones" in Asia), and other digital devices using portable modems. Some mobile services
allow more than one device to be connected to the Internet using a single cellular connection using a
process called tethering. The modem may be built into laptop computers, tablets, mobile phones,
and other devices, added to some devices using PC cards, USB modems, and USB
sticks or dongles, or separate wireless modems can be used.[68]
New mobile phone technology and infrastructure is introduced periodically and generally involves a change in the fundamental nature of the service: non-backwards-compatible transmission technology, higher peak data rates, new frequency bands, and wider channel bandwidth. These transitions are referred to as generations. The first mobile data services became available during the second generation (2G).

Second generation (2G) from 1991, speeds in kbit/s (down/up):
 · GSM CSD: 9.6 kbit/s
 · CDPD: up to 19.2 kbit/s
 · GSM GPRS (2.5G): 56 to 115 kbit/s
 · GSM EDGE (2.75G): up to 237 kbit/s

Third generation (3G) from 2001, speeds in Mbit/s (down/up):
 · UMTS W-CDMA: 0.4 Mbit/s
 · UMTS HSPA: 14.4 / 5.8
 · UMTS TDD: 16 Mbit/s
 · CDMA2000 1xRTT: 0.3 / 0.15
 · CDMA2000 EV-DO: 2.5–4.9 / 0.15–1.8
 · GSM EDGE Evolution: 1.6 / 0.5

Fourth generation (4G) from 2006, speeds in Mbit/s (down/up):
 · HSPA+: 21–672 / 5.8–168
 · Mobile WiMAX (802.16): 37–365 / 17–376
 · LTE: 100–300 / 50–75
 · LTE-Advanced (moving at higher speeds): 100 Mbit/s
 · LTE-Advanced (not moving or moving at lower speeds): up to 1000 Mbit/s
 · MBWA (802.20): 80 Mbit/s
The download (to the user) and upload (to the Internet) data rates given above are peak or
maximum rates and end users will typically experience lower data rates.
WiMAX was originally developed to deliver fixed wireless service with wireless mobility added in
2005. CDPD, CDMA2000 EV-DO, and MBWA are no longer being actively developed.
In 2011, 90% of the world's population lived in areas with 2G coverage, while 45% lived in areas with
2G and 3G coverage.[69]
WiMAX[edit]
Worldwide Interoperability for Microwave Access (WiMAX) is a set of interoperable implementations
of the IEEE 802.16 family of wireless-network standards certified by the WiMAX Forum. WiMAX
enables "the delivery of last mile wireless broadband access as an alternative to cable and DSL".[70] The original IEEE 802.16 standard, now called "Fixed WiMAX", was published in 2001 and provided 30 to 40 megabit-per-second data rates.[71] Mobility support was added in 2005. A 2011 update provides data rates up to 1 Gbit/s for fixed stations. WiMAX offers a metropolitan area
network with a signal radius of about 50 km (30 miles), far surpassing the 30-metre (100-foot)
wireless range of a conventional Wi-Fi local area network (LAN). WiMAX signals also penetrate
building walls much more effectively than Wi-Fi.
Wireless ISP[edit]


Wireless Internet service providers (WISPs) operate independently of mobile phone operators.


WISPs typically employ low-cost IEEE 802.11 Wi-Fi radio systems to link up remote locations over
great distances (Long-range Wi-Fi), but may use other higher-power radio communications systems
as well.
Traditional 802.11a/b/g/n/ac is an unlicensed omnidirectional service designed to span between 100
and 150 m (300 to 500 ft). By focusing the radio signal using a directional antenna (where allowed
by regulations), 802.11 can operate reliably over a distance of many kilometres (miles), although the
technology's line-of-sight requirements hamper connectivity in areas with hilly or heavily foliated
terrain. In addition, compared to hard-wired connectivity, there are security risks (unless robust
security protocols are enabled); data rates are usually slower (2 to 50 times slower); and the network
can be less stable, due to interference from other wireless devices and networks, weather and line-
of-sight problems.[72]
With the increasing popularity of unrelated consumer devices operating on the same 2.4 GHz band, many providers have migrated to the 5 GHz ISM band. If the service provider holds the necessary spectrum license, it could also reconfigure various brands of off-the-shelf Wi-Fi hardware to operate on its own band instead of the crowded unlicensed ones. Using higher frequencies carries several advantages:

 usually regulatory bodies allow for more power and (better) directional antennas,
 there is much more bandwidth to share, allowing both better throughput and improved coexistence,
 fewer consumer devices operate over 5 GHz than on 2.4 GHz, so fewer interferers are present,
 the shorter wavelengths propagate much worse through walls and other structures, so much less interference leaks outside of the homes of consumers.
Proprietary technologies like Motorola Canopy & Expedience can be used by a WISP to offer
wireless access to rural and other markets that are hard to reach using Wi-Fi or WiMAX. There are a
number of companies that provide this service.[73]
Local Multipoint Distribution Service[edit]
Local Multipoint Distribution Service (LMDS) is a broadband wireless access technology that uses
microwave signals operating between 26 GHz and 29 GHz.[74] Originally designed for digital
television transmission (DTV), it is conceived as a fixed wireless, point-to-multipoint technology for
utilization in the last mile. Data rates range from 64 kbit/s to 155 Mbit/s. [75] Distance is typically limited
to about 1.5 miles (2.4 km), but links of up to 5 miles (8 km) from the base station are possible in
some circumstances.[76]
LMDS has been surpassed in both technological and commercial potential by the LTE and WiMAX
standards.

Hybrid Access Networks[edit]


See also: Hybrid Access Networks

In some regions, notably in rural areas, the length of the copper lines makes it difficult for network
operators to provide high bandwidth services. An alternative is to combine a fixed access network,
typically XDSL, with a wireless network, typically LTE. The Broadband Forum has standardised an
architecture for such Hybrid Access Networks.

Non-commercial alternatives for using Internet services[edit]


See also: Project Loon

Grassroots wireless networking movements[edit]


Deploying multiple adjacent Wi-Fi access points is sometimes used to create city-wide wireless networks.[77] Such networks are usually commissioned by the local municipality from commercial WISPs.
Grassroots efforts have also led to wireless community networks widely deployed in numerous countries, both developing and developed. Rural wireless-ISP installations are typically not
commercial in nature and are instead a patchwork of systems built up by hobbyists mounting
antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall
objects are available.
Where radio spectrum regulation is not community-friendly, the channels are crowded, or local residents cannot afford the equipment, free-space optical communication can also be deployed in a similar manner for point-to-point transmission in air (rather than in fiber optic cable).
Packet radio[edit]
Main articles: Packet radio and AMPRNet

Packet radio connects computers or whole networks operated by radio amateurs with the option to access the Internet. Note that as per the regulatory rules outlined in the amateur radio license, Internet access and e-mail should be strictly related to the activities of radio amateurs.
Sneakernet[edit]
Main article: Sneakernet

The term, a tongue-in-cheek play on net(work) as in Internet or Ethernet, refers to the wearing of sneakers as the transport mechanism for the data.
For those who do not have access to or cannot afford broadband at home, large files are downloaded and information disseminated through workplace or library networks, then taken home and shared with neighbors by sneakernet.
There are various decentralized, delay-tolerant peer-to-peer applications which aim to fully automate this using any available interface, including both wireless (Bluetooth, Wi-Fi mesh, P2P or hotspots) and physically connected ones (USB storage, Ethernet, etc.).
Sneakernets may also be used in tandem with computer network data transfer to increase data
security or overall throughput for big data use cases. Innovation continues in the area to this day, for
example AWS has recently announced Snowball, and bulk data processing is also done in a similar
fashion by many research institutes and government agencies.
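To see why sneakernet-style transfer can raise overall throughput for big data, compare the effective rate of shipping a drive against a network link. This is a back-of-the-envelope sketch; the drive capacity and transit time are hypothetical:

```python
# Illustrative sketch: effective throughput of physically shipping storage.
def shipped_rate_mbits(capacity_gb, transit_hours):
    """Effective data rate of moving capacity_gb gigabytes in transit_hours."""
    bits = capacity_gb * 8e9  # decimal gigabytes to bits
    return bits / (transit_hours * 3600) / 1e6

# A 1 TB drive couriered overnight beats a typical 20 Mbit/s uplink:
print(f"{shipped_rate_mbits(1000, 24):.1f} Mbit/s")  # ~92.6 Mbit/s
```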

How to Install a Modem


Part 1
Preparing to Install

1
Make sure that your modem will work with your Internet subscription. While
rare, some modems encounter issues when paired with a specific Internet
company (e.g., Comcast). Double-check your modem's compatibility with
your current Internet subscription before buying (if possible).
o If you find that your modem won't work with your current subscription, try to exchange the modem for a different one that will work, or switch your Internet provider.

2
Find your room's cable output. The cable output resembles a metal cylinder
with a small hole in the middle and screw threads all around the sides. You'll
usually find cable outputs in the wall near the floor in living rooms and
bedrooms.

o In some cases, there will already be a cable connected to the cable outlet.

3
Decide on a place to mount the modem. The modem should be relatively
high up (e.g., on top of a bookshelf), and it will need to be close enough to
the cable output that you can connect it without stretching or bending the
cable.

o You'll also need to have a power outlet nearby.

4
Make sure that you have all of the required cables. A modem generally
requires a coaxial cable to connect to the cable output, as well as a power
cable to connect to an electrical outlet. Both of these cables should come
with your modem, but if you bought it used, you may need to find
replacement cables.
o If you plan on attaching the modem to a router, you will also need an Ethernet cable.

o Consider buying a longer coaxial cable if the one that you have is too short to allow you to mount your modem properly.

5
Read your modem's instructions. Each modem is unique, and yours may
require additional setup outside of this article's capacity. Reading your
modem's manual will help make you aware of any additional steps that you
have to take to install the modem.

Part 2
Installing

1
Attach one end of the coaxial cable to the cable output. The coaxial cable has a connector with a thin central pin on each end. Plug one end into the cable output, and screw the connector onto the outlet's threads to ensure that the connection is solid.

2
Attach the other end of the cable to the input on your modem. On the back
of the modem, you should see an input that resembles the cable output
cylinder. Attach the free end of the coaxial cable to this input, making sure
to tighten as needed.
3
Plug your modem's power cable into an electrical outlet. A wall socket or a
surge protector will do. It's important to plug the cable into the power outlet
before connecting it to the modem, since connecting the power cable to the
modem first can cause damage.


4
Insert the modem power cable's free end into the modem. You'll usually find
the power cable input port at the bottom of the back of the modem, but
check your modem's documentation to confirm if you can't find the power
port.

5
Place your modem in its spot. With the cables attached, gently move your
modem into its designated position. You shouldn't feel any resistance from
the cables.

6
Attach the modem to a router. If you have a Wi-Fi router that you want to use
in conjunction with your modem, plug one end of an Ethernet cable into the
square port on the back of the modem, then plug the other end into the
"INTERNET" (or similarly labeled) square port on the back of the router. As
long as the router is plugged into a power source, the router should
immediately light up.

o Give your modem and router a few minutes to boot up before attempting to connect to Wi-Fi.
o You can also connect your computer directly to your modem via Ethernet if you have a Windows computer (or an Ethernet to USB-C adapter for a Mac).

Creating a Connection Profile


A connection profile contains the connection property information needed to connect to a data source in
your enterprise.
1. From the main menu, select File > New > Other .
2. Under Connection Profiles, select Connection Profile and click Next.
3. Select the connection profile type.
4. Enter a unique Name for the connection profile.
5. (Optional) Enter a Description and click Next.
6. Complete the required information in the wizard for your connection profile type.
7. Click Test Connection to ping the server and to verify that the connection profile is working.
8. Click Finish to create the connection profile.

How to change the default network connection priority in Windows
This is a step-by-step article.

Summary

This article briefly explains how to change the priority of network connections in Windows 7
so that they follow a specific connection order.
Steps to change the network connection priority in Windows 7

1. Click Start, and in the search field, type View network connections.

2. Press the ALT key, click the Advanced menu, and then click Advanced Settings...


3. Select Local Area Connection and click the green arrows to give priority to the
desired connection.
 

4. After organizing the available network connections according to your preferences, click OK.
5. The computer will now follow an order of priority when detecting available
connections.

Steps to change wireless network connection priority in Windows 7

The computer can detect more than one wireless network at a time. This article explains
how to prioritize the wireless network you want to connect to first.
 
1. Click Start, and in the search field, type Network and Sharing Center.

2. In Network and Sharing Center, click Manage wireless networks.


 

3. Click the connection to be given priority (e.g. Connection 2 has less priority than Connection 1), and then click Move up.
 

4. The next time the computer detects the available networks, it will give more priority to Connection 2 than to Connection 1.
